Type-based analysis of uncaught exceptions

François Pessaux    Xavier Leroy
INRIA Rocquencourt

Abstract

This paper presents a program analysis to estimate uncaught exceptions in ML programs. This analysis relies on unification-based type inference in a non-standard type system, using rows to approximate both the flow of escaping exceptions (à la effect systems) and the flow of result values (à la control-flow analyses). The resulting analysis is efficient and precise; in particular, arguments carried by exceptions are accurately handled.

1 Introduction

Many modern programming languages such as Ada, Modula-3, ML and Java provide built-in support for exceptions: raising an exception at some program point transfers control to the nearest handler for that exception found in the dynamic call stack. Exceptions provide safe and flexible error handling in applications: if an exception is not explicitly handled in a function by the programmer, it is automatically propagated upwards in the call graph until a function that "knows" how to deal with the exception is found. If no handler is provided for the exception, program execution is immediately aborted, thus pinpointing the unexpected condition during testing. This stands in sharp contrast with the traditional C-style reporting of error conditions as "impossible" return values (such as null pointers or the integer -1): in that approach, the programmer must write a significant amount of code to propagate error conditions upwards; moreover, it is very easy to ignore an error condition altogether, often causing the program to crash much later, or even to complete but produce incorrect results.

The downside of using exceptions for error reporting and as a general non-local control structure is that it is very easy to forget to catch an exception at the right place, i.e. to handle an error condition. ML compilers generate no errors or warnings in this case, and the programming mistake will only show up during testing. Exhaustive testing of applications is difficult, and even more so in the case of error conditions that are infrequent or hard to reproduce. Our experience with large ML applications is that uncaught exceptions are the most frequent mode of failure.

Authors' address: INRIA Rocquencourt, projet Cristal, B.P. 105, 78153 Le Chesnay, France. E-mail: Francois.Pessaux@inria.fr, Xavier.Leroy@inria.fr. This work has been partially supported by CNET, France Télécom. To appear in the 26th ACM conference on Principles of Programming Languages, January 1999. Copyright © 1999 by the Association for Computing Machinery.

To address this issue, languages such as Modula-3 and Java require the programmer to declare, for each function or method, the set of exceptions that may escape out of it. Those declarations are then checked statically during type-checking by a simple intraprocedural analysis. This forces programmers to be conscious of the flow of exceptions through their programs. Declaring escaping exceptions in function and method signatures works well in first-order, monomorphic programs, but is not adequate for the kind of higher-order, polymorphic programming that ML promotes. Consider the map iterator on lists. In Modula-3 or Java, the programmer must declare a set E of exceptions that the function argument to map may raise; map, then, may raise the same exceptions E. But E is fixed arbitrarily, thus preventing map from being applied to functions that raise exceptions not in E.
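To make the difficulty concrete, here is a small OCaml illustration of the point just made; the exception Too_big and the helper check_size are our own illustrative names, not part of the paper's development:

    exception Too_big of int

    (* check_size raises Too_big on large inputs; map escapes exactly
       the exceptions that its functional argument may raise. *)
    let check_size limit n =
      if n > limit then raise (Too_big n) else n

    let () =
      try ignore (List.map (check_size 10) [1; 5; 42])
      with Too_big n -> Printf.printf "element %d too big\n" n

A Modula-3 or Java-style declaration for map would have to fix the set of exceptions escaping its argument once and for all; here that set is {Too_big} for this call site only, and a different set for every other functional argument.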
The genericity of map can be restored by taking for E the set of all possible exceptions, but then the precision of the exception analysis decreases dramatically: all invocations of map are then considered as potentially raising any exception. (Similar problems arise in highly object-oriented Java programs that use e.g. container classes and iterators intensively.) To deal properly with higher-order functions, a very rich language for exception declarations is required, including at least exception polymorphism (variables ranging over sets of exceptions) and arbitrary unions of exception sets. (See section 2 for a more detailed discussion.) We believe that such a complex language for declaring escaping exceptions is beyond what programmers are willing to put up with.

The alternative that we follow in this paper is to infer escaping exceptions from unannotated ML source code. In other terms, we view the problem of detecting potentially uncaught exceptions as a static debugging problem, where static analyses are applied to programs not to make them faster via better code generation, but to make them safer by pinpointing possible run-time failures. This approach has several advantages with respect to the Modula-3/Java approach: it blends better with ML's type inference; it does not change the language, and thus supports the static debugging of "legacy" applications; and it allows the use of complex approximations of exception sets, as those need not be written by the programmer (within reason: the results of the analysis must still be understandable to the programmer). Finally, the exception inference need not be fully compatible with the ML module system: a whole-program analysis can be considered (again within reason: analysis time should remain practical).

Several exception analyses for ML have been proposed [8, 36, 35, 3, 4], some based on effect systems, some on control-flow analyses, some on combinations of both (see section 6 for a detailed discussion). The analysis presented in this paper attempts to combine the efficiency of effect systems with the precision of flow analyses. It is based on unification and non-standard type inference algorithms that have excellent running times and that, we hope, should scale well to large applications. At the same time, our analysis is still fairly precise; in particular, it approximates not only the names of the escaping exceptions, but also the arguments they carry, a feature that is essential for analyzing many existing ML programs precisely. This constitutes the main technical contribution of this paper: integrating in the same unification-based framework both the approximation of exception effects in the style of effect systems [28], and the approximation of the sets of values computed at each program point in the style of flow analyses and soft typing [26, 32]. Finally, our analysis has been implemented to cover the whole Objective Caml language: not only core ML, but also datatypes, objects, and the module system. We present some preliminary experimental results obtained with our implementation.

The remainder of this paper is organized as follows. Section 2 lists the main requirements for an ML exception analysis. Section 3 presents the non-standard type system we use for exception analysis. Extension to the full Objective Caml language is discussed in section 4; experimental results obtained with our implementation, in section 5; and related work, in section 6. Concluding remarks can be found in section 7.
2 Design requirements

In this section, we list the main requirements for an effective exception analysis for ML, and show that they go much beyond what can be expressed by exception declarations in Modula-3 or Java. Existing exception analyses have addressed some of these requirements, but none addresses all.

2.1 Handling higher-order functions precisely

The exception behavior of higher-order functions depends on the exceptions that can be raised by their functional arguments. A form of polymorphism over escaping exceptions is thus needed to analyze higher-order functions precisely. Consider the map iterator over lists mentioned in the introduction. An application map f l may raise whatever exceptions the argument f may raise. Writing τ →[ϕ] τ' for the annotated type of functions from type τ to type τ' whose set of potentially escaping exceptions is ϕ, the behavior of map is captured by the following annotated type scheme:

    map : ∀α, β, ϕ. (α →[ϕ] β) → (α list →[ϕ] β list)

where α, β range over types and ϕ ranges over sets of exceptions. In general, the escaping exceptions of a higher-order function are combinations ϕ1 ∪ ... ∪ ϕn ∪ {C1; ...; Cn}, where the ϕi are variables representing the escaping exceptions of functional arguments and the Cj are exception constants. For instance, we have the following annotated type for function composition λf. λg. λx. f(g(x)):

    ∀α, β, γ, ϕ1, ϕ2. (β →[ϕ1] γ) → (α →[ϕ2] β) → (α →[ϕ1 ∪ ϕ2] γ)

Given the frequent use of higher-order functions in ML programs, an exception analysis for ML must handle them with precision similar to what the annotated types above suggest. Similar issues arise when functions are stored into data structures such as lists or hash tables (as in callback tables, for instance). The exception analysis should keep track of the union of the exceptions that can be raised by the functions contained in the structure. It is not acceptable to say that any exception can be raised by applying a function retrieved from the structure.

2.2 Handling exceptions as first-class values

In ML and Java, exceptions are first-class values: exception values can be built in advance and passed through functions before being raised. Consider for instance the following contrived example:

    let test = λexn. try raise(exn) with E → 0

The exception behavior of this function is that test exn raises the exception contained in the argument exn, except when exn is actually the exception E, in which case no exception escapes out of test. We seek exception analyses precise enough to capture this behavior.

It is true that the first-class character of exception values is rarely, if ever, used in actual ML programs. However, there is one important idiom where an exception value appears: finalization. Consider:

    let f = λx. try g(x) with E → 0 | exn → finalization code; raise(exn)

Assuming g can raise exceptions E and E', the exception analyzer should recognize that the exception variable exn can only take the value E', thus the raise(exn) that re-raises the exception after finalization can only raise E', and so does the function f itself.
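The finalization idiom above can be written directly in today's OCaml; the following sketch uses our own illustrative names (g, E, E'). A sufficiently precise analysis recognizes that the re-raise, and hence f itself, can only raise E':

    exception E
    exception E'

    (* g may raise E or E'. *)
    let g x = if x = 0 then raise E else if x < 0 then raise E' else x

    let f x =
      try g x with
      | E -> 0
      | exn ->
          print_endline "finalization code";  (* runs before re-raising *)
          raise exn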
2.3 Keeping track of exception arguments

ML exceptions can optionally carry arguments, just like all other datatype constructors. This argument can be tested in the with part of an exception handler, using pattern-matching on the exception value, so that only certain exceptions with certain arguments are caught. Consider the following example:

    exception Failure of string
    let f = λx. if ... then ... else raise(Failure "f")
    let g = λx. try f(x) with Failure "f" → 0

An exception analysis that only keeps track of the exception head constructors (i.e. Failure above) but not of their arguments (i.e. the string "f" above) fails to analyze this example with sufficient precision: the analysis records that the function f may raise the Failure exception, hence it considers that the application f(x) in g may raise Failure with any argument. Since the exception handler traps only Failure "f", the analyzer concludes that g may raise Failure, while in reality no exception can escape g.

This lack of precision can be brushed aside as "unimportant" and "bad programming style anyway". Indeed, the programmer should have declared a specific constant exception Failure_f to report the error in f, rather than rely on the general-purpose Failure exception. However, code fragments similar to the example above appear in legacy Caml applications that we would like to analyze. More importantly, there are also legitimate uses of exceptions with parameters. For instance, the Caml interface to Unix system calls uses the following scheme to report Unix error conditions:

    type unix_error = EACCES | ENOENT | ENOSPC | ...
      (* enumerated type with 67 constructors representing Unix error codes *)
    exception Unix_error of unix_error

This allows user code to trap all Unix errors at once (try ... with Unix_error(_) -> ...), and also to trap particular errors (try ... with Unix_error(ENOENT) -> ...). Replacing Unix_error by 67 distinct exceptions, one for each error code, would make the former very painful. It is desirable that the exception analysis be able to show that certain Unix_error exceptions whose arguments represent common errors (e.g. Unix_error(ENOENT), "no such file") are handled in the program and thus do not escape, while we can accept that other Unix_error exceptions representing rare errors are not handled in the program and may escape.

The problem with exception arguments is made worse by the availability (in the Caml standard library at least) of predefined functions that raise general-purpose exceptions such as Failure above. Indeed, the example with Failure above is more likely to appear in the following form:

    exception Failure of string
    let failwith = λmsg. raise(Failure msg)
    let f = λx. if ... then ... else failwith("f")
    let g = λx. try f(x) with Failure "f" → 0

Precise exception analysis of this example requires tracking the string constant "f" not only when it appears as the immediate argument of the Failure exception constructor, but also when it is passed to the function failwith. Hence the exception analysis must also include some amount of data flow analysis, not limited to exception values.
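For a concrete flavor of the Unix_error scheme discussed in this section, here is a minimal OCaml sketch against the actual Unix library; the helper file_size is our own:

    (* Trap one specific, common error (ENOENT) while letting rarer
       Unix_error exceptions propagate; compile with the unix library. *)
    let file_size path =
      try Some (Unix.stat path).Unix.st_size
      with Unix.Unix_error (Unix.ENOENT, _, _) -> None

    (* Trapping all Unix errors at once remains just as easy:
       try ... with Unix.Unix_error (e, _, _) -> ... *)

An analysis that tracks the argument of Unix_error can report that Unix_error(ENOENT, _, _) never escapes file_size, while other Unix_error values may.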
2.4 Running faster than control-flow analyses

All the requirements we have listed so far point towards control-flow analyses in the style of Shivers' k-CFA [26] or Heintze's SBA [9]. Control-flow analyses provide an approximation of the set of values that can flow to each program point. It is entirely straightforward to extend them to approximate also the set of escaping exceptions at each program point at the same time as they approximate the set of result values. Alternatively, the exception analysis can be run as a second pass of data-flow analysis exploiting the results of control-flow analysis [35], although this results in some loss of precision, as the control flow can be determined more accurately if exception information is available. This exception analysis benefits from the relatively precise approximation of values provided by the control-flow analysis, especially as far as exception arguments are concerned.

Our first implementation of an exception analyzer for Objective Caml was indeed based on control-flow analysis: 0-CFA initially, then Jagannathan and Wright's "polymorphic splitting" [12]. Our practical experience with this approach was mixed: the precision of the exception analysis was satisfactory (at least with polymorphic splitting), but the speed of the analysis left a lot to be desired. In particular, we observed quadratic behavior on several examples, indicating that the analysis would not scale easily to large programs¹. Although sophisticated techniques have been developed to speed up program analyses based on set inclusion constraints such as CFA and SBA [2, 6, 5, 19], it is still an open problem whether those analyses can scale to 100,000-line programs. For these reasons, we decided to abandon analyses based on CFA or, more generally, on set inclusion constraints, and settled for less precise but faster analyses based on equality constraints and unification.

¹ The complexity of 0-CFA alone is O(n³), where n is the size of the whole program. We did not observe cubic behavior on our tests, however. Quadratic behavior arises in the following not uncommon case: assume that a group of functions of size k = O(n) recurses over a list of m = O(n) elements given in extension in the program source. At least m iterations of the analysis are required before a fixpoint is reached on the parameters and results of the functions. Since each iteration takes time proportional to k, the time of the analysis is O(n²).

3 A type system for exception analysis

In the style of effect systems [16, 28], our exception analysis is presented as a type inference algorithm for a non-standard type system. The type system uses unified mechanisms based on row variables both to keep track of the effects (sets of escaping exceptions) of expressions and to refine the usual ML types with more precise information about the possible values of expressions. In this section, we present first the typing rules of our type system (that is, the specification of the exception analysis), then type inference issues (the actual analysis).

3.1 The source language

The source language we consider in this paper is a simple subset of ML with integers and exceptions as the only data types, the ability to raise and handle exceptions, and simplified pattern-matching.

Terms:
    a ::= x                              identifier
        | i                              integer constant
        | λx. a                          abstraction
        | a1(a2)                         application
        | let x = a1 in a2               let binding
        | match a1 with p → a2 | x → a3  pattern-matching
        | C                              constant exception constructor
        | D(a)                           parameterized exception constructor
        | try a1 with x → a2             exception handler
Patterns:
    p ::= x                              variable pattern
        | i | C                          constant patterns
        | D(p)                           constructed pattern

The construct match a1 with p → a2 | x → a3 performs pattern-matching on the value of a1; if it matches the pattern p, the branch a2 is evaluated; otherwise, a3 is evaluated. Multi-case pattern matchings can be expressed by cascading match expressions. The try a1 with x → a2 construct evaluates a1; if an exception is raised, its value is bound to x and a2 is evaluated. There is no syntactic form for raising an exception; instead, we assume a predefined raise function in the environment. The try construct catches all exceptions; catching only a given exception C is expressed as:

    try a1 with x → match x with C → a2 | y → raise(y)

The dynamic semantics for this language is given by the reduction rules in figure 1, in the style of [33].
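As an aside before the formal semantics, the abstract syntax above transcribes directly into OCaml datatypes; the constructor names below are our own choice:

    (* Terms and patterns of the source language of section 3.1. *)
    type term =
      | Var of string                               (* x *)
      | Int of int                                  (* i *)
      | Lam of string * term                        (* λx. a *)
      | App of term * term                          (* a1(a2) *)
      | Let of string * term * term                 (* let x = a1 in a2 *)
      | Match of term * pat * term * string * term  (* match a1 with p → a2 | x → a3 *)
      | Const of string                             (* C *)
      | Constr of string * term                     (* D(a) *)
      | Try of term * string * term                 (* try a1 with x → a2 *)
    and pat =
      | PVar of string                              (* variable pattern *)
      | PInt of int                                 (* integer pattern *)
      | PConst of string                            (* C *)
      | PConstr of string * pat                     (* D(p) *)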
Values, evaluation contexts, and evaluation results are defined as:

Values:               v ::= i | C | D(v) | λx.a | raise
Evaluation contexts:  Γ ::= [ ] | Γ(a) | v(Γ) | D(Γ) | let x = Γ in a
                          | match Γ with p → a2 | x → a3 | try Γ with x → a2
Evaluation results:   r ::= v | raise v

Reduction rules:
    (λx.a)(v)                          → a{x ← v}
    let x = v in a                     → a{x ← v}
    match v with p → a2 | x → a3       → σ(a2)          if σ = M(v, p) is defined
    match v with p → a2 | x → a3       → a3{x ← v}      if M(v, p) is undefined
    try v with x → a2                  → v
    try raise v with x → a2            → a2{x ← v}
    (raise v)(a)                       → raise v
    (λx.a)(raise v)                    → raise v
    D(raise v)                         → raise v
    let x = raise v in a               → raise v
    match raise v with p → a2 | x → a3 → raise v
    Γ[a]                               → Γ[a']          if a → a'

The pattern-matching function M(v, p):
    M(v, x) = {x ← v}    M(i, i) = id    M(C, C) = id    M(D(v), D(p)) = M(v, p)

Figure 1: Reduction rules

A result of v indicates normal termination with return value v; a result of raise v indicates an uncaught exception v.

3.2 The type algebra

Sets of exceptions or integers are represented by rows similar to those used for typing extensible records [31, 22, 24]. A row is either ⊤, meaning that all values of the type are possible (we do not have any more precise information), or a sequence of row elements ε1; ...; εn terminated by a row variable ϱ. We impose the following equational theory on rows to express that the order of elements in a row does not matter (equation 1), and that ⊤ is absorbing (equation 2):

    ε1; ε2; ρ = ε2; ε1; ρ        (1)
    i : Pre; ⊤ = ⊤               (2)

The type system uses the following type algebra:

Type expressions:
    τ ::= α                  type variable
        | int[ρ]             integer type
        | exn[ρ]             exception type
        | τ1 →[ρ] τ2         function type
Type schemes:
    σ ::= ∀αi, ϱj, δk. τ
Rows:
    ρ ::= ϱ                  row variable
        | ⊤                  all possible elements
        | ε; ρ               the element ε plus whatever is in ρ
Row elements:
    ε ::= i : π              integer constant
        | C : π              constant exception
        | D(τ)               parameterized exception
Presence annotations:
    π ::= Pre                element is present
        | δ                  presence variable

As in effect systems, our function types τ1 →[ρ] τ2 are annotated by the latent effect ρ of the function, that is, the set of exceptions that may be raised during an application of the function. In addition, the base types exn and int are also annotated by sets of exceptions and integers respectively. Those sets refine the ML types exn and int by restricting the values that an expression of type exn or int can have.

The absorption equation 2 applies only to integer row elements because we intend ⊤ to be used only in rows annotating the int type. (The kinding rules below enforce this invariant.) A ⊤ symbol is required for base types such as int, which have an infinite (or at least very large) signature. It is not required for datatypes such as exn, which have a finite signature: a row enumerating all possible constructors can be used instead (this is discussed in section 4.1.4 below). Moreover, combining ⊤ and rows containing parameterized constructors raises technical problems²; we prefer to avoid the difficulty by restricting ⊤ to rows containing only integer elements.

Rows and row variables support both polymorphism over sets and a form of set union in a unification framework. For instance, the two rows (ε1; ϱ1) and (ε2; ϱ2), which informally represent the sets {ε1} and {ε2} respectively, unify into the row (ε1; ε2; ϱ), representing the set {ε1; ε2}, via the substitution {ϱ1 ← (ε2; ϱ); ϱ2 ← (ε1; ϱ)}.

A row element is either an integer constant i, a constant exception constructor C, or a parameterized exception constructor D(τ) carrying the annotated type τ of its argument.
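As a minimal executable sketch of this row algebra, rows can be represented as sequences of elements ended by a variable or ⊤, and kept sorted so that syntactic comparison decides equality modulo the reordering equation (1); the OCaml representation below is ours, not the paper's:

    type presence = Pre | PVar of int           (* π ::= Pre | δ *)
    type element =
      | EInt of int * presence                  (* i : π *)
      | EConst of string * presence             (* C : π *)
      | EParam of string                        (* D(τ), argument type elided here *)
    type row =
      | RVar of int                             (* row variable ϱ *)
      | Top                                     (* ⊤ *)
      | Cons of element * row                   (* ε; ρ *)

    let key = function
      | EInt (i, _) -> (0, string_of_int i)
      | EConst (c, _) | EParam c -> (1, c)

    (* Insertion sort on row elements: two rows equal modulo equation (1)
       normalize to the same term. *)
    let rec normalize = function
      | (RVar _ | Top) as r -> r
      | Cons (e, r) ->
          match normalize r with
          | Cons (e', r') when key e' < key e -> Cons (e', normalize (Cons (e, r')))
          | r' -> Cons (e, r')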
To maintain crucial kinding invariants (see below), the constant row elements (i and C) also carry a presence annotation, written π. A presence annotation is either Pre, meaning that the element is present in the set denoted by the row expression, or a presence variable δ, meaning that the element is actually not present in the set denoted by the row expression, but may be considered as present in order to satisfy unification constraints.

² The obvious absorption equation D(τ); ⊤ = ⊤ is unsound, as it allows deductions such as D(τ); ⊤ = ⊤ = D(τ'); ⊤, which lead to inconsistent typings. If ML had subtyping and a supertype ⊤ of all types, a correct equation would be D(⊤); ⊤ = ⊤. This equation allows any D(τ) to be absorbed (because D(τ); ⊤ <: D(⊤); ⊤ = ⊤), but only allows expansion of ⊤ into D(⊤); ⊤, meaning correctly that no information is available on the argument of D.

Examples: The type int[⊤] denotes all integer values. The type of integer addition is ∀ϱ1, ϱ2, ϱ3, ϱ4. int[ϱ1] →[ϱ2] int[ϱ3] →[ϱ4] int[⊤] (no effects, no information known on the return value). The type scheme ∀ϱ. int[1 : Pre; 2 : Pre; ϱ] stands for the set {1; 2} and is the type of integer expressions that can only evaluate to 1 or to 2.

A universally quantified row variable that occurs only positively in a type should be read as denoting the empty set of elements, for the same reasons that ∀α. α denotes an empty set of values. The type scheme ∀δ, ϱ. int[1 : δ; 2 : Pre; ϱ] stands for the set {2}. Although 1 is mentioned in the row, it should not be considered present in the set, since its presence annotation is universally quantified and occurs only positively. The type scheme ∀ϱ1, ϱ2. exn[D(int[3 : Pre; 4 : Pre; ϱ1]); ϱ2] stands for the set of exceptions {D(3); D(4)}.

The raise predefined function has the following type scheme: ∀α, ϱ. exn[ϱ] →[ϱ] α. It expresses that an application of raise never returns and raises exactly the exceptions that it receives as argument.

Kinding of rows: To simplify the formulation of the typing rules and to ensure the existence of principal unifiers and principal typings, we require the following four structural invariants on rows:

1. A given integer constant or exception constructor should occur at most once in a row (for instance, (D(τ); D(τ'); ρ) is not well-formed).

2. A row variable ϱ is preceded by the same set of integer constants and exception constructors in all row expressions where it occurs (for instance, we cannot have both (1 : Pre; ϱ) and (2 : Pre; ϱ) in the same derivation).

3. A row annotating an integer type int can only contain integer elements i.

4. A row annotating an exception type exn or a function type τ1 → τ2 can only contain constant or parameterized constructors C, D(τ), and must not end with ⊤.

Invariants (1) and (2) are well known from earlier work on record types [24]. Invariants (3) and (4) are more unusual. They ensure a clear separation between annotations of int types (composed of integer elements and possibly ⊤) and annotations of exn types (composed of constructors and no ⊤). Since ⊤ absorbs only integer elements (equation 2), we do not want it to occur in rows containing exception constructors C, D.

Following [24, 18], we use kinds to enforce the invariants above. Our kinds κ are composed of a tag (either INT or EXN) and a set of constants and constructors:

    κ ::= INT({i1, ..., in}) | EXN({C1, ..., Cp, D1, ..., Dq})

The constants and constructors appearing in the set part of a kind are those constants and constructors that must not appear in rows of that kind (because they already appear in elements concatenated before those rows). We assume given a global mapping K assigning kinds to row variables, such that for each kind κ there are infinitely many variables of that kind (i.e. K⁻¹(κ) is infinite). The kinding rules are shown in figure 2. They define the two judgements ρ :: κ (row ρ has kind κ) and τ wf (type τ is well-formed).

    ϱ :: K(ϱ)                                  ⊤ :: INT(S)

    i ∉ S and ρ :: INT(S ∪ {i})     implies    (i : π; ρ) :: INT(S)
    C ∉ S and ρ :: EXN(S ∪ {C})     implies    (C : π; ρ) :: EXN(S)
    D ∉ S, τ wf, ρ :: EXN(S ∪ {D})  implies    (D(τ); ρ) :: EXN(S)

    α wf
    ρ :: INT(∅)                     implies    int[ρ] wf
    ρ :: EXN(∅)                     implies    exn[ρ] wf
    τ1 wf, τ2 wf, ρ :: EXN(∅)       implies    τ1 →[ρ] τ2 wf

Figure 2: Kinding rules

3.3 The typing rules

Figure 3 shows the typing rules for our system. They define the judgement E ⊢ a : τ/ϕ, where E is the typing environment, a the term to type, τ the type of values that a may evaluate to, and ϕ the set of exceptions that may escape during the evaluation of a. All types appearing in the rules are assumed to be well-kinded. We assume that typing starts in the initial environment E0 = {raise : ∀α, ϱ. exn[ϱ] →[ϱ] α}.

Typing of expressions (E ⊢ a : τ/ϕ):

(1)  τ ≤ E(x), ϕ :: EXN(∅)                                  implies  E ⊢ x : τ/ϕ
(2)  ρ :: INT({i}), ϕ :: EXN(∅)                             implies  E ⊢ i : int[i : Pre; ρ]/ϕ
(3)  τ1 wf, E + {x : τ1} ⊢ a : τ2/ϕ, ϕ' :: EXN(∅)           implies  E ⊢ λx. a : (τ1 →[ϕ] τ2)/ϕ'
(4)  E ⊢ a1 : (τ' →[ϕ] τ)/ϕ, E ⊢ a2 : τ'/ϕ                  implies  E ⊢ a1(a2) : τ/ϕ
(5)  E ⊢ a1 : τ1/ϕ, E + {x : Gen(τ1, E, ϕ)} ⊢ a2 : τ/ϕ      implies  E ⊢ let x = a1 in a2 : τ/ϕ
(6)  E ⊢ a1 : τ1/ϕ, ⊢ p : τ1 ⇒ E', E + E' ⊢ a2 : τ/ϕ,
     τ1 − p ↪ τ2, E + {x : τ2} ⊢ a3 : τ/ϕ                   implies  E ⊢ match a1 with p → a2 | x → a3 : τ/ϕ
(7)  ρ :: EXN({C}), ϕ :: EXN(∅)                             implies  E ⊢ C : exn[C : Pre; ρ]/ϕ
(8)  τ ≤ TypeArg(D), E ⊢ a : τ/ϕ, ρ :: EXN({D})             implies  E ⊢ D(a) : exn[D(τ); ρ]/ϕ
(9)  E ⊢ a1 : τ/ϕ1, E + {x : exn[ϕ1]} ⊢ a2 : τ/ϕ            implies  E ⊢ try a1 with x → a2 : τ/ϕ

Typing of patterns (⊢ p : τ ⇒ E):

(10)  ⊢ x : τ ⇒ {x : τ}
(11)  ⊢ i : int[i : Pre; ρ] ⇒ {}
(12)  ⊢ C : exn[C : Pre; ρ] ⇒ {}
(13)  τ ≤ TypeArg(D), ⊢ p : τ ⇒ E                           implies  ⊢ D(p) : exn[D(τ); ρ] ⇒ E

Pattern subtraction (τ − p ↪ τ'):

(14)  τ' wf                                                 implies  τ − x ↪ τ'
(15)  int[i : Pre; ρ] − i ↪ int[i : π; ρ]
(16)  exn[C : Pre; ρ] − C ↪ exn[C : π; ρ]
(17)  τ − p ↪ τ'                                            implies  exn[D(τ); ρ] − D(p) ↪ exn[D(τ'); ρ]

Instantiation and generalization: τ' ≤ ∀αi, ϱj, δk. τ iff there exist τi, ρj, πk with τi wf and ρj :: K(ϱj) such that τ' = τ{αi ← τi, ϱj ← ρj, δk ← πk}. Gen(τ, E, ϕ) is ∀αi, ϱj, δk. τ where {αi, ϱj, δk} = FV(τ) \ (FV(E) ∪ FV(ϕ)).

Figure 3: The typing rules

The rules for variables and let bindings (rules 1 and 5) are standard, except that we generalize over all three kinds of type variables. For variables, as well as for other language constructs that never raise exceptions (rules 1, 2, 3, 7), the effect component ϕ of the result is unconstrained and can be chosen as needed to satisfy equality constraints in the remainder of the typing derivation. The rules for function abstraction and application (rules 3 and 4) are the usual rules for effect systems. For abstraction, the effect of the function body becomes the latent effect of the function type. For application a1(a2), we require that the same set ϕ of exceptions occurs as effect of a1, latent effect of the function denoted by a1, and effect of a2. This corresponds, in our unification setting, to taking the union of those three effects.

For integer constants and exception constructors (rules 2, 7 and 8), we record the actual value of the expression in the approximation part of the type int or exn. For instance, the type of i must be of the form int[i : Pre; ρ], forcing i : Pre to appear in the type of the expression. In rules 8 and 13, we write TypeArg(D) for the type scheme of the argument of constructor D, e.g. TypeArg(D) = ∀ϱ. int[ϱ] for an integer-valued exception D. For an exception handler try a1 with x → a2 (rule 9), the effect ϕ1 of a1 is injected into the type exn[ϕ1] assumed for x in a2.

The most interesting rule is rule 6 for the match construct. This rule is crucial to the precision of our exception analysis. When typing match a1 with p → a2 | x → a3, we want to reflect the fact that the second alternative (x → a3) is selected only when the first alternative (p → a2) does not match the value of a1. In other terms, the type of values that can "flow" to x in the second alternative is not the type of the matched value a1, but the type of a1 from which we have excluded all values matching the pattern p of the first alternative. To achieve this, rules 14-17 define the pattern subtraction predicate τ − p ↪ τ', meaning that τ' is a correct type for the values of type τ that do not match pattern p. For a variable pattern p = x (rule 14), all values match the pattern, so it is correct to assume any type τ' for the non-matched values. For an integer pattern p = i (rule 15), we force τ to unify with int[i : Pre; ρ], thus exposing in ρ the set of all possible values of type τ that are different from i. Then, we take τ' = int[i : π; ρ] for a suitable π. In particular, if that π is unconstrained in the remainder of the derivation, we can take π to be a fresh presence variable δ, thus reflecting that i is not among the possible values of type τ'. The rules for exception patterns (rules 16 and 17) are similar. If the exception has an argument, instead of changing a presence annotation, we recursively subtract in the type of the argument of the exception.
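Rules 14-17 translate almost line-for-line into code; the following self-contained OCaml sketch (our own simplified representation, with rows flattened into lists) implements subtraction, conservatively returning the input type in the cases where the paper allows an arbitrary well-formed result:

    type presence = Pre | Absent of int         (* Pre, or a fresh presence variable *)
    type ty =
      | TInt of (int * presence) list * string  (* int[i1 : π1; ...; ϱ] *)
      | TExn of elt list * string               (* exn[...; ϱ] *)
    and elt =
      | Const of string * presence              (* C : π *)
      | Param of string * ty                    (* D(τ) *)
    type pat = PWild | PInt of int | PConst of string | PParam of string * pat

    let fresh = let n = ref 0 in fun () -> incr n; Absent !n

    (* subtract t p: a type for the values of t that do not match p. *)
    let rec subtract t p =
      match t, p with
      | _, PWild -> t                           (* rule 14, taking τ' = τ *)
      | TInt (es, r), PInt i ->                 (* rule 15 *)
          TInt (List.map (fun (j, q) -> if j = i then (j, fresh ()) else (j, q)) es, r)
      | TExn (es, r), PConst c ->               (* rule 16 *)
          TExn (List.map (function Const (c', _) when c' = c -> Const (c', fresh ())
                                 | e -> e) es, r)
      | TExn (es, r), PParam (d, p') ->         (* rule 17: recurse into the argument *)
          TExn (List.map (function Param (d', t') when d' = d -> Param (d', subtract t' p')
                                 | e -> e) es, r)
      | _ -> t                                  (* otherwise: identity, cf. section 4.2 *)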
It is easy to see that the typing rules preserve the kinding invariants: if E is well-kinded and E ⊢ a : τ/ϕ, then τ wf and ϕ :: EXN(∅).

3.4 Examples of typings

We now show some typings derivable in our system. These are principal typings, identical to those found by our exception analyzer. Consider first a simple handler for one exception C:

    try raise(C) with x → match x with C → 1 | y → raise y

The effect of raise(C) is (C : Pre; ϱ). Hence the type of x is exn[C : Pre; ϱ]. Subtracting the pattern C from this type, we obtain the type exn[C : δ; ϱ] for y. Hence the effect of the whole match expression, and also of the whole try expression, is (C : δ; ϱ). The type is int[1 : Pre; ϱ']. Since δ, ϱ and ϱ' are generalizable and occur only positively, we have established that no exception escapes the expression, and that it can only evaluate to the integer 1.

We now extend the previous example along the lines of the failwith example of section 2.3:

    let failwith = λn. raise(D(n)) in
    try failwith(42) with x → match x with D(42) → 0 | y → raise y

We obtain the following intermediate typings:

    failwith : ∀α, ϱ1, ϱ2. int[ϱ1] →[D(int[ϱ1]); ϱ2] α
    x : exn[D(int[42 : Pre; ϱ3]); ϱ4]
    y : exn[D(int[42 : δ; ϱ3]); ϱ4]

Thus we conclude as before that no exception escapes this expression.

For a representative example of higher-order functions, consider function composition:

    let compose = λf. λg. λx. f(g(x)) in
    compose (λy. 0) (λz. raise(C)) 1

The type scheme for compose is ∀α, β, γ, ϕ, ϱ1, ϱ2. (β →[ϕ] γ) →[ϱ1] (α →[ϕ] β) →[ϱ2] (α →[ϕ] γ). The three occurrences of ϕ express the union of the effects of f and g. The application of compose above has effect (C : Pre; ϱ3).

Concerning exceptions as first-class values, the first example from section 2.2 becomes:

    let test = λexn. try raise(exn) with x → match x with C → 1 | y → raise(y) in
    test(C)

The type scheme for test is ∀δ, ϱ, ϱ'. exn[C : Pre; ϱ] →[C : δ; ϱ] int[1 : Pre; ϱ'], expressing that the function raises whatever exception it receives as argument, except C. The application test(C) thus has type int[1 : Pre; ϱ1] and effect (C : δ2; ϱ2). Hence no exception escapes. The application test C', where C' is another exception distinct from C, would have effect (C : δ3; C' : Pre; ϱ3), thus showing that C' may escape.

Finally, here is an (anecdotal) example that is ill-typed in ML, but well-typed in our type system due to the refined typing of pattern-matching:

    match 1 with x -> x | e -> raise e

Since the first case of the matching is a catch-all, rule 6 lets us assign the type exn[ϱ] for a fresh ϱ to the variable e bound by the second case, even though the matched value is an integer. Hence the expression is well-typed, and moreover we obtain that it has type int[1 : Pre; ϱ'] and raises no exceptions (its effect is ϕ for any ϕ).
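The test function above runs unchanged (modulo concrete syntax) in OCaml, and its dynamic behavior matches the inferred effects:

    exception C
    exception C'

    let test e = try raise e with
      | C -> 1
      | y -> raise y

    let () =
      assert (test C = 1);    (* no exception escapes *)
      try ignore (test C')
      with C' -> print_endline "C' escapes, as the inferred effect predicts"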
3.5 Type soundness and correctness of the exception analysis

We now establish the correctness of our exception analysis: all uncaught exceptions are predicted by our effect system. This property is closely connected to the type soundness of our system.

Theorem 1 (Subject reduction) Reduction preserves typing: if E0 ⊢ a : τ/ϕ and a → a', then E0 ⊢ a' : τ/ϕ.

The proof of subject reduction is mostly standard and follows [33] closely. Detailed proofs of the statements in this paper can be found in the technical report [14]. A key lemma is the following property of pattern subtraction:

Lemma 2 (Correctness of subtraction) If E0 ⊢ v : τ/ϕ and M(v, p) is undefined (v does not match pattern p) and τ − p ↪ τ', then E0 ⊢ v : τ'/ϕ.

The correctness of our exception analysis (all uncaught exceptions are detected) is a simple corollary of subject reduction:

Theorem 3 (Correctness of exception analysis) Let a be a complete program. Assume E0 ⊢ a : τ/ϕ and a →* raise v. Then, either v = C and ϕ = (C : Pre; ρ) for some C and ρ, or v = D(v') and ϕ = (D(τ'); ρ) and E0 ⊢ v' : τ'/ϕ for some D, v', τ', ρ. In either case, the uncaught exception v is correctly predicted in the effect ϕ.

Type soundness for our non-standard type system follows from the subject reduction property and the following lemma, showing that well-typed expressions either reduce to a value or to an uncaught exception, or loop, but never get "stuck".

Lemma 4 (Progress) If E0 ⊢ a : τ/ϕ, then either a is a value v, or a is an uncaught exception raise v, or there exists a' such that a → a'.

3.6 Principal types and inference of types and exceptions

Just like the ML type system, our type system admits principal types, which can be computed by a simple extension of Milner's algorithm W, thus implementing the exception analysis.

Theorem 5 (Principal unifiers) The set of well-kinded types modulo equations (1) and (2) admits principal unifiers. More precisely, there exists an algorithm mgu that, for any system Q of well-kinded equations between types, either returns a substitution μ that is a principal solution of Q, or fails, meaning that Q has no solution. Moreover, the substitution μ preserves kinds in the following sense: for all types τ, μ(τ) wf, and for all row variables ϱ, μ(ϱ) :: K(ϱ).

In the theorem above, systems of well-kinded equations are sets Q = {τi = τ'i; ρj = ρ'j; πl = π'l} of equations between types, rows, row elements, and presence annotations such that for all i, τi wf and τ'i wf, and for all j there exists a kind κj such that ρj :: κj and ρ'j :: κj. The existence of principal unifiers follows from the fact that our equational theory is syntactic and regular [23]. The algorithm mgu is given in appendix A.

Theorem 6 (Principal types) There exists a type inference algorithm W satisfying the following conditions:

(Correctness) If E is well-kinded and (θ, τ, ϕ) = W(E, a) is defined, then θ(E) ⊢ a : τ/ϕ.

(Completeness) If E is well-kinded and there exist a kind-preserving substitution θ' and types τ', ϕ' such that θ'(E) ⊢ a : τ'/ϕ', then (θ, τ, ϕ) = W(E, a) is defined and there exists a substitution ψ such that τ' = ψ(τ) and ϕ' = ψ(ϕ) and θ'(v) = ψ(θ(v)) for all type, row or presence variables v not used as fresh variables by algorithm W.

The algorithm W is shown in appendix B.

4 Extension to the full Objective Caml language

In this section, we discuss the main issues in extending the analysis presented in section 3 to deal with the whole Objective Caml language [15].

4.1 Datatypes

User-defined datatypes (sum types) can be approximated in several different ways, depending on the desired trade-off between precision and speed of the analysis. We have considered the four approaches listed below (from most precise to least precise).

4.1.1 Full approximation of datatypes

The first approach applies to datatypes the same treatment as for exceptions: we annotate the type by a row approximating the possible values of that type, recording constant constructors with presence annotations and unary constructors with the types of their arguments. Consider the source-level datatype definition

    type t = C1 | ... | Cn | D1 of τ1 | ... | Dm of τm

where the τi are unannotated ML types. The propagation of approximations is captured by the following type schemes assigned to the constructors Ci and Di:

    Ci : ∀ρ̄, ϱ. t[Ci : Pre; ϱ]
    Di : ∀ρ̄, ϱ, ϱ'. τ̃i →[ϱ'] t[Di(τ̃i); ϱ]

where τ̃i is the annotated type obtained from τi by adding distinct fresh row variables, taken from ρ̄, on every type constructor that carries a row annotation. For instance, given the declaration

    type intlist = Nil | Cons of int * intlist

we assign Nil and Cons the type schemes

    Nil  : ∀ϱ. intlist[Nil : Pre; ϱ]
    Cons : ∀ϱ1, ϱ2, ϱ3, ϱ4. (int[ϱ1] × intlist[ϱ2]) →[ϱ3] intlist[Cons(int[ϱ1] × intlist[ϱ2]); ϱ4]

Recursive datatypes such as intlist above naturally lead to recursive type expressions. Consider:

    let tail = λx. match x with Cons(hd, tl) → tl | l → l

During inference, tl and l receive types intlist[ϱ1] and intlist[Cons(int[ϱ2] × intlist[ϱ1]); ϱ3] respectively. If only finite type expressions are allowed, those two types have no unifier and the program is rejected by the analysis. This is not acceptable, so we extend our type system with recursive (infinite, regular) type expressions. On the example above, we obtain μα. intlist[Cons(int[ϱ2] × α); ϱ3]. The extension of our type system with recursive type expressions involves replacing term unification by graph unification in the type inference algorithm, but this causes no technical difficulties.

4.1.2 "Looped" approximations for recursive datatypes

The approximation scheme described above has the undesirable side-effect of recording in the type approximation the whole structure of a data structure given in extension. If the data types involved are recursive, we may end up with very large type approximations. Continuing the intlist example above, consider the expression ℓn = Cons(i1, Cons(i2, ..., Cons(in, Nil)...)). With the type of Cons given above, this expression is given an annotated type that is of depth n and records not only the fact that the list contains the integers i1 ... in (an information that might be useful to analyze exceptions), but also the fact that the list has length n and that its first element is i1, the second i2, etc. The latter piece of information is, on practical examples, useless for analyzing exceptions. Moreover, such large approximations slow down the analysis.

A solution to this problem comes from the following remark: as soon as one of those big data structures given in extension is passed to a sufficiently complex function, its big, unfolded annotated type is going to be unified with a recursive type, forcing all the information in the big type to be folded back into a smaller recursive type. For instance, if we pass the list ℓn to the tail function shown above, the type of the list will be unified into

    τn = μα. intlist[Cons(int[i1 : Pre; ...; in : Pre; ϱ1] × α); Nil : Pre; ϱ2]

The idea, then, is to force this folding into a recursive type when the data structure is created, by giving recursive, prefolded types to the data type constructors. This is easily achieved by unifying, in the type of the constructors, all occurrences of the recursively-defined type in argument position with the occurrence of the recursively-defined type in result position. For instance, in the case of the Cons constructor of type intlist, we start with the type

    (int[ϱ1] × intlist[ϱ2]) →[ϱ3] intlist[Cons(int[ϱ1] × intlist[ϱ2]); ϱ4]

as in the previous section, then unify the argument occurrence intlist[ϱ2] with the result type, then generalize the free variables, obtaining

    Cons : ∀ϱ1, ϱ3, ϱ4. (int[ϱ1] × τ) →[ϱ3] τ    where τ = μα. intlist[Cons(int[ϱ1] × α); ϱ4]

With this type for Cons, the list ℓn is given the reasonably compact type τn shown above.
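To make the "prefolded" scheme concrete, here is the looped type of Cons written as a value of a small OCaml AST with an explicit μ-binder; the AST and its constructor names are ours, for illustration only:

    type aty =
      | Var of string                              (* the μ-bound variable *)
      | Mu of string * aty                         (* μα. τ *)
      | IntT of string                             (* int[ϱ] *)
      | Pair of aty * aty                          (* τ1 × τ2 *)
      | IntList of (string * aty) list * string    (* intlist[D(τ); ...; ϱ] *)
      | Arrow of aty * string * aty                (* τ1 →[ϱ] τ2 *)

    (* Cons : ∀ϱ1, ϱ3, ϱ4. (int[ϱ1] × τ) →[ϱ3] τ
       where τ = μα. intlist[Cons(int[ϱ1] × α); ϱ4]. *)
    let looped_cons : aty =
      let t = Mu ("a", IntList ([("Cons", Pair (IntT "r1", Var "a"))], "r4")) in
      Arrow (Pair (IntT "r1", t), "r3", t)

Both the argument and the result position share the single recursive type t, which is exactly what unifying the argument occurrence of intlist with the result occurrence achieves.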
For instance, the ML datatype definia tion type t = A of int | B of exn | C of t | D of t is turned into type (1 , 2 ) t = A of int[1 ] | B of exn[2 ] | C of (1 , 2 ) t | D of (1 , 2 ) t Two parameters 1 and 2 were added in order to reflect in the type t the possible values of types int and exn contained in that type. The type t itself is not annotated by a row recording which constructors A, B, C and D are present in values of that type. The net effect is to forget the structure of terms of type t, while correctly remembering the integers and exception values contained in the structure. In practice, this solution appears to be slightly less precise and slightly more efficient than full approximations of non-recursive datatypes and looped approximations of recursive datatypes: type expressions are smaller, but in the case of t above, looped approximations can express the fact that a value of type t lack one of the constructors C or D, while this is not captured in the solution based on extra row parameters. On datatypes that are not annotated by a row, we can no longer perform type subtraction during pattern-matching, since we have no approximation on the structure of values of that type. Hence, we simply consider that subtraction is the identity relation on those datatypes. 4.1.4 Datatypes without any approximations The choice between the four datatype analysis strategies described above can be done on a per-datatype basis, depending on the shape of the datatype definition. We have considered several simple heuristics to perform this choice. Our first prototype used full approximations for non-parameterized datatypes, and no approximations for parameterized datatypes. Our current prototype uses full approximations for non-recursive or non-regular datatypes, looped approximations for recursive datatypes, and no approximations for built-in types without interesting structure (arrays and floating-point numbers, for instance). Another factor that we plan to integrate in the heuristic is whether the datatype introduces any exception type, function type, or base type likely to be an exception argument (string and int, essentially); if not, we could favor the "no approximation" approach. 4.2 Tuples and records For maximal speed and minimal precision, we can put no annotations at all on a datatype (neither a row approximation nor extra row parameters). This way, we forget not only the structure of values of that type, but also the exceptions, functions and base values contained in that type. Of course, this forces us to make very pessimistic assumptions on values extracted from a datatype without approximation. For instance, if we extract an integer by pattern-matching on such a datatype, we must give it type int[ ] since it can really be any integer. This is reflected in the types of constructors by putting annotations on all annotated types in the constructor argument. In the intlist example above, if we choose not to annotate intlist at all, we must give its constructors the following types: Nil : intlist 1 Cons : 1 . int[ ] intlist intlist This approach assumes that we have annotations for all types, while the type system from section 3 only has for type int. However, we can allow to annotate other base types such as float and string. For exceptions and other datatypes, since there are finitely many constructors, 9 Tuple types are not approximated specially: each component of the tuple type carries its own annotation. 
For instance, int[1 : Pre; 2 : Pre; ] int[3 : Pre; 4 : Pre; ] stands for the set of four pairs {1; 2} {3; 4}. Pattern subtraction on tuple types is not pointwise subtraction, which would lead to incorrect results. Consider the type int[1 : Pre; ] int[2 : Pre; 3 : Pre; ]. Subtracting pointwise the pattern (1, 2) from this type would lead to type int[1 : ; ] int[2 : ; 3 : Pre; ], which is incorrect since the value (1, 3) is no longer in the set. Therefore, the current implementation perform no subtraction on tuples: we take (1 2 ) - (p1 , p2 ) Y 1 2 . For a more refined behavior, we could perform subtraction on one of the components if all other components are matched against catch-all patterns. For instance, we could take (1 2 ) - (p1 , x2 ) Y 1 2 if 1 - p1 Y 1 . Unlike in SML, records in Caml are declared and matched by name. We analyze them like datatypes, by annotating the name of the record type by a row of a particular form. The row contains exactly one element recording the annotated type of every field. Pattern subtraction for record types behaves as in the case of tuples. To summarize, the extended type algebra for datatypes, tuples and records is as follows: Type expressions: ::= . . . | t approximated type constructor | t non-approximated type constructor | 1 . . . n tuple type Row elements: ::= . . . | {lbl 1 : 1 ; . . . ; lbl n : n } 4.3 Mutable data structures Mutable data structures (references, arrays, records with mutable fields) are trivially handled: it suffices to introduce the standard value restriction on let-generalization [34]. This results in a precise approximation of mutable data. For instance, an array of functions has type (1 2 ) array, where is the union of the latent effects of all functions stored in the array. In contrast, control-flow analyses would lose track of which functions are stored in the array, and thus also of the exceptions they may raise, unless supplemented by a region (aliasing) analysis. 4.4 Objects and classes the generativity of exception declaration in functor bodies, and the impact of the "exception polymorphism" offered by functors (a functor can take one or several exceptions as arguments, and have a different exception behavior depending on whether those arguments are instantiated later with identical or different exceptions). For simplicity, we chose not to analyze functors when they are defined, but instead expand the functor body at each application and re-analyze the -reduced body. Although this transformation increases the size of the analyzed source, the Caml programs we are interested in do not use functors intensively and this simple approach to analyzing functors works well in practice. 4.6 Separate analysis Because our system already uses recursive types, OCamlstyle objects do not add significant complexity to our framework. We just need to extend the type algebra with object types, that is, polymorphic records of methods [21]. The type of each method is annotated by its latent effect. No extension to rows and row elements are needed. Since there are no object patterns in pattern-matching, pattern subtraction needs not be modified. The OCaml class language interferes very little with the exception analysis. No significant modifications to the class type-checker are needed. 4.5 Modules and functors Transparent signature matching precludes "true" separate analysis (where any module can be analyzed separately knowing only the syntactic signatures of the modules it imports). 
We can still do "bottom-up" separate analysis, however: a module can be analyzed separately provided the implementations of its imports have been analyzed already, and their annotated signatures inferred. Since annotated signature for a module may contain free row variables (e.g. if the module defines mutable structures), separately analyzing several clients of that module may result in independent instantiations of those free variables. Those instantiations are recorded in the result of the analysis of each module, and reconciled in a final "linking" pass before displaying the results of the analysis. 4.7 Polymorphic recursion Polymorphic recursion as introduced by Mycroft [17] is not needed to type-check the source OCaml language, but is desirable to enhance the precision of our exception analyzer. With ML-style monomorphic recursion, we obtain false positives on functions that recursively call themselves inside a try. . . with. Consider: let rec f = x. try f(x) with C () | y raise y The latent effect inferred for f is C; because the effect of f(x) is unified with the type of the pattern C at a time where the type of f is not yet generalized. With polymorphic recursion, we can assign f the type scheme , . unit both outside and inside the recursion; it is a fresh instance of that type scheme that gets unified with the type of C, thus not polluting the type scheme of f. Although type inference with polymorphic recursion is undecidable [13], there exists incomplete inference algorithms that work very well in practice. We experimented with Henglein's algorithm [11] and with a home-grown algorithm based on restricted fixpoint iteration and obtained good results. 5 Experimental results Structures are assigned annotated signatures containing annotated types for the value components. Type abbreviations are currently handled by systematic expansion of their definitions3 . For matching a structure S against a signature , there are two possible semantics. The opaque semantics says that the only things known about the restriction (S : ) is what publicizes. In our case, since user-provided signatures contain no annotations, this amounts to forgetting the result of the analysis of S and assume annotation on all value components of the restricted structure. The transparent semantics simply check that S matches , but the restriction (S : ) retains all information known about S. We implemented the transparent semantics, as the opaque semantics results in too much information loss. (The opaque semantics also precludes choosing datatype annotations based on the definition of the datatype.) Similar problems arise with functors. All is known about the parameter of a functor is its syntactic signature. Hence, a naive analysis would assume annotation on all components of the functor argument. For better precision, one could use techniques based on conjunctive types such as [25]. Other issues with functors are still unclear, such as 3 This might cause performance problems in conjunction with OCaml objects, which relies intensively on type abbreviations to make type expressions more manageable [21]. If this turns out to be a problem, we could also handle abbreviations by adding extra row parameters to the type constructors, as described in [4] and in section 4.1.3. In this section, we present some experimental results obtained with our implementation. Currently, our analyzer implements all extensions described in section 4 except objects4 . 
The analyzer is compiled with the OCaml 2.00 native-code compiler and runs on a Pentium II 333 Mhz workstation under Linux. 4 The analysis of objects and classes was prototyped separately and remains to be merged in our main implementation. 10 Test program 1. 2. 3. 4. 5. 6. 7. 8. 9. Huffman compression Knuth-Bendix Docteur (Eliza clone) Lexer generator Nucleic OCaml standard library Analyzer of .h files Our exception analyzer The OCaml bytecode compiler Size (lines) 233 441 556 1169 2919 3082 3088 12235 17439 Analysis time 0.07/0.08 0.14/0.16 0.81/0.83 0.27/0.32 1.90/1.88 2.52/2.52 0.54/0.58 10.3/16.1 12.6/22.9 s s s s s s s s s Analysis speed (lines per sec.) 3300/2900 l/s 3200/2800 l/s 680/670 l/s 4300/3700 l/s 1530/1550 l/s 1200/1200 l/s 5700/5300 l/s 1200/760 l/s 1400/760 l/s OCaml typechecking time 0.08 s 0.14 s 0.10 s 0.20 s 0.62 s 1.89 s 0.27 s 3.86 s 4.00 s Figure 4: Experimental results (without polymorphic recursion/with polymorphic recursion) Analysis speed: Figure 4 gives timings for the analysis of various small to medium-sized OCaml programs. We give timings both without and with polymorphic recursion. For comparison, we also give the time OCaml takes to parse and type-check those programs. (The timings given include parsing and pre-processing as well as analysis time.) The overall performances are quite good, in the order of 10002000 lines of source per second. Programs that contain large data structures given in extension (Nucleic, Docteur) take longer to analyze due to the large size of the rows annotating the types of those data structures. On average, the exception analysis takes twice as much time as OCaml type inference; the ratio ranges between 1 (on simple programs) and 8 (on Docteur, because of the large constant data structures). Polymorphic recursion slows down the analysis somewhat on the largest benchmarks, but the slowdown remains acceptable compared with the increase in precision. Precision of the analysis: We have manually inspected the output of the analyzer on our benchmark programs. Programs 1, 3, 4, 5 and 7 have a relatively simple exception behavior, and our analysis reports exact results for those programs: there are no false positives except run-time errors such as "division by zero" or "array index out of bounds", which require extra analyses (or even general program proof) to show that they cannot occur. For Knuth-Bendix, which has a quite complicated exception structure, 8 exceptions (Failure with 8 different string arguments) appearing in the source are correctly reported as non-escaping; 7 exceptions (one Invalid_argument and 6 Failure) are reported as potentially escaping, and can actually occur in some circumstances. Without polymorphic recursion, the analysis reports two false positives (one Not_found and one Failure), which correspond to recursive functions containing try . . . with around recursive calls. Adding polymorphic recursion as discussed in section 4.7 removes one of those false positives. The other one is still there, because our incomplete inference algorithm for polymorphic recursion fails to give a type polymorphic enough to one of the functions. A more precise algorithm such as Henglein's [11] would probably eliminate the other false positive as well. The larger examples 8 and 9 exhibit another source of false positives: mutable data structures (references and arrays) containing functions. 
As mentioned in section 4.3, the row variables appearing in approximations of mutable data structures are not generalized, hence "collect" all exceptions at their use sites. For instance: let r = ref(x. ...) in 11 let f = y. if cond then !r y else raise C in !r 0 r has type int int where is not generalized. When typing f, the effect of raise C is unified with that of !r y, hence becomes C : Pre; and the application !r 0 appears to raise C. 6 6.1 Related work Exception analyses for ML Several exception analyses for ML are described in the literature. Guzmn and Surez [8] develop a simple type and a a effect system to keep track of escaping exceptions. Their system does not handle exceptions as first-class values, nor exceptions carrying arguments. The first exception analysis proposed by Yi [36] is based on general abstract interpretation techniques, and runs too slowly to be usable in practice. Later, Yi and Ryu [35] developed a more efficient analysis roughly equivalent to a conventional control-flow analysis to approximate the call graph and the values of exceptions, followed by a data-flow analysis to estimate uncaught exceptions. Fhndrich and Aiken [3, 4] have applied their BANE a toolkit for constraint-based program analyses to the problem of analyzing uncaught exceptions in SML. Their system uses a combination of inclusion constraints (as in controlflow analyses) to approximate the control flow, and equality constraints (unification) between annotated types to keep track of exception values. To compare performances between [35], [3] and our analyzer, we used two of our benchmarks for which we have a faithful SML translation: Knuth-Bendix and Nucleic. The times reported below are of the form t1 /t2 , where t1 is the time spent in exception analysis only, and t2 is the total program analysis time, including parsing and type-checking in addition to exception analysis. Test program Knuth-Bendix Nucleic Yi-Ryu 1.2/1.5 s 3.8/7.8 s BANE 1.6/2.2 s 3.3/7.6 s us 0.06/0.14 s 1.45/1.86 s From these figures, our exception analysis seems notably faster. However, there are many external factors that influence the total running times of the analyses (such as the YiRyu and BANE analyses being compiled by SML/NJ while ours is compiled by Objective Caml), so the figures above are not conclusive. The main difference between the analyses of [35, 3] and ours is the approximation of arguments carried by exceptions: they approximate only exception and function values carried by exceptions, but our analysis is the only one that also approximates exception arguments that are strings, integers, or datatypes. As explained in section 2.3, approximating all arguments of exceptions is crucial to obtain precise analysis of many real applications. In theory, our unification-based analysis should be less precise than analyses based on inclusion constraints such as [35, 3]: the bidirectional propagation of information performed by unification causes exception effects to "leak" in types where those exceptions cannot actually occur. It is easy to construct artificial examples of such leaks, e.g. by replacing let-bound identifiers by -bound identifiers. However, those examples do not seem to occur in actual programs. The only leaks we observed in actual programs were related either to deficiencies of our incomplete algorithm for typing polymorphic recursion, or to functions contained inside mutable data structures. On those two cases, [3] obtains more precise results than our analysis. 
6.2 Other related work

Our use of rows with row variables and presence annotations to approximate values of base types and sum types is essentially identical to Rémy's typing of extensible variants [22]. Another application of Rémy's encoding is the soft typing system for Scheme of [32]. There is a natural connection between exception analysis and type inference for extensible variants: using the well-known functional encoding of exceptions (where each subexpression is transformed to return a value of a variant type, either an exception tag or NormalResult(v), where v is the value of the subexpression), estimating uncaught exceptions is equivalent to inferring precise variant types. Pottier [20] outlines an exception analysis thus derived from a type inferencer for ML with subtyping.
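As an illustration, here is a minimal OCaml sketch of that functional encoding (our rendering; the constructor and function names are hypothetical):

    (* Every subexpression returns either NormalResult v or an exception
       tag; inferring a precise type for the Raised branch then amounts to
       estimating uncaught exceptions. *)
    type ('a, 'e) outcome = NormalResult of 'a | Raised of 'e
    type exn_tag = C | D of int          (* hypothetical exception tags *)

    (* Encoding of: try (if b then raise C else 1) with C -> 0 *)
    let encoded_body b : (int, exn_tag) outcome =
      if b then Raised C else NormalResult 1

    let encoded_try b : (int, exn_tag) outcome =
      match encoded_body b with
      | Raised C -> NormalResult 0       (* the handler catches C *)
      | other -> other                   (* normal result, or re-raise *)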
Refinement types [7] also introduce annotations on types to characterize subsets of ML's data types. Our approach is less ambitious than refinement types, in that it does not try to capture "deep" structural invariants of recursive data structures; on the other hand, type inference is much easier.

The principles of effect systems were studied extensively in the early '90s [16, 28], but few practical applications have been developed since. An impressive application is the region analysis of Tofte et al. [30, 29]. Like ours, its precision is improved by typing recursion polymorphically.

Several program analyses based on unification and running in quasi-linear time have been proposed as faster alternatives to more conventional dataflow analyses. Two well-known examples are Henglein's tagging analysis [10] and Steensgaard's aliasing analysis [27]. Baker [1] suggests other examples of unification-based analyses.

7 Conclusions and future work

It is often said that unification-based program analyses are faster, but less precise, than more general constraint-based analyses such as CFA or SBA. For exception analysis, our experience indicates that a combination of unification, let-polymorphism, and polymorphic recursion is in practice almost as precise as analyses based on inclusion constraints. (The only case where our analysis is noticeably less precise than inclusion constraints is when references to functions are used intensively.) The running times of our algorithm seem excellent (although its theoretical complexity is at least as high as that of ML type inference). In turn, this good efficiency allows us to keep more information on exception arguments than the other exception analyses, greatly increasing the precision of the analysis on certain ML programs. Thus, we see an interesting case of "less is more", where an a priori imprecise technology (unification) eventually allows the precision of the analysis to be improved.

Some engineering issues remain to be solved before our analysis can be applied to large ML applications. The main practical issue is displaying the results of the analysis in a readable way. The volume of information contained in annotated type expressions can be overwhelming. The programmer should be able to select different levels of display abstracting some of that information.

Acknowledgements

The inference algorithm for polymorphic recursion used in our implementation was designed in collaboration with Pierre Weis. We thank François Pottier and Didier Rémy for interesting discussions.

References

[1] H. G. Baker. Unify and conquer (garbage, updating, aliasing, ...) in functional languages. In Lisp and Functional Programming 1990. ACM Press, 1990.
[2] M. Fähndrich and A. Aiken. Making set-constraint based program analyses scale. Technical Report 96-917, University of California at Berkeley, Computer Science Division, 1996.
[3] M. Fähndrich and A. Aiken. Program analysis using mixed term and set constraints. In Static Analysis Symposium '97, number 1302 in LNCS, pages 114-126. Springer-Verlag, 1997.
[4] M. Fähndrich, J. S. Foster, A. Aiken, and J. Cu. Tracking down exceptions in Standard ML programs. Technical report, University of California at Berkeley, Computer Science Division, 1998.
[5] M. Fähndrich, J. S. Foster, Z. Su, and A. Aiken. Partial online cycle elimination in inclusion constraint graphs. In Prog. Lang. Design and Impl. 1998, pages 85-96. ACM Press, 1998.
[6] C. Flanagan and M. Felleisen. Componential set-based analysis. In Prog. Lang. Design and Impl. 1997. ACM Press, 1997.
[7] T. Freeman and F. Pfenning. Refinement types for ML. In Prog. Lang. Design and Impl. 1991, pages 268-277. ACM Press, 1991.
[8] J. C. Guzmán and A. Suárez. A type system for exceptions. In Proc. 1994 workshop on ML and its applications, pages 127-135. Research report 2265, INRIA, 1994.
[9] N. Heintze. Set-based analysis of ML programs. In Lisp and Functional Programming '94, pages 306-317. ACM Press, 1994.
[10] F. Henglein. Global tagging optimization by type inference. In Lisp and Functional Programming 1992. ACM Press, 1992.
[11] F. Henglein. Type inference with polymorphic recursion. ACM Trans. Prog. Lang. Syst., 15(2):253-289, 1993.
[12] S. Jagannathan and A. Wright. Polymorphic splitting: An effective polyvariant flow analysis. ACM Trans. Prog. Lang. Syst., 20(1):166-207, 1998.
[13] A. J. Kfoury, J. Tiuryn, and P. Urzyczyn. Type reconstruction in the presence of polymorphic recursion. ACM Trans. Prog. Lang. Syst., 15(2):290-311, 1993.
[14] X. Leroy and F. Pessaux. Type-based analysis of uncaught exceptions. Research report 3541, INRIA, Nov. 1998. Extended version of this paper.
[15] X. Leroy, J. Vouillon, D. Doligez, et al. The Objective Caml system. Software and documentation available on the Web, http://caml.inria.fr/ocaml/, 1996.
[16] J. M. Lucassen and D. K. Gifford. Polymorphic effect systems. In 15th symp. Principles of Progr. Lang., pages 47-57. ACM Press, 1988.
[17] A. Mycroft. Polymorphic type schemes and recursive definitions. In International Symposium on Programming, number 167 in LNCS, pages 217-228. Springer-Verlag, 1984.
[18] A. Ohori. A polymorphic record calculus and its compilation. ACM Trans. Prog. Lang. Syst., 17(6):844-895, 1995.
[19] F. Pottier. A framework for type inference with subtyping. In Int. Conf. on Functional Progr. 1998, pages 228-238. ACM Press, 1998.
[20] F. Pottier. Type inference in the presence of subtyping: from theory to practice. Research report 3483, INRIA, Sept. 1998.
[21] D. Rémy and J. Vouillon. Objective ML: A simple object-oriented extension of ML. In 24th symp. Principles of Progr. Lang., pages 40-53. ACM Press, 1997.
[22] D. Rémy. Records and variants as a natural extension of ML. In 16th symp. Principles of Progr. Lang., pages 77-88. ACM Press, 1989.
[23] D. Rémy. Syntactic theories and the algebra of record terms. Research report 1869, INRIA, 1993.
[24] D. Rémy. Type inference for records in a natural extension of ML. In C. A. Gunter and J. C. Mitchell, editors, Theoretical Aspects of Object-Oriented Programming. MIT Press, 1993.
[25] Z. Shao and A. Appel. Smartest recompilation. In 20th symp. Principles of Progr. Lang., pages 439-450. ACM Press, 1993.
[26] O. Shivers. Control-Flow Analysis of Higher-Order Languages. PhD thesis CMU-CS-91-145, Carnegie Mellon University, May 1991.
[27] B. Steensgaard. Points-to analysis in almost linear time. In 23rd symp. Principles of Progr. Lang., pages 32-41. ACM Press, 1996.
[28] J.-P. Talpin and P. Jouvelot. The type and effect discipline. Inf. and Comp., 111(2):245-296, 1994.
[29] M. Tofte and L. Birkedal. A region inference algorithm. ACM Trans. Prog. Lang. Syst., 1998. To appear.
[30] M. Tofte and J.-P. Talpin. Region-based memory management. Inf. and Comp., 132(2):109-176, 1997.
[31] M. Wand. Complete type inference for simple objects. In Logic in Computer Science 1987, pages 37-44. IEEE Computer Society Press, 1987.
[32] A. K. Wright and R. Cartwright. A practical soft type system for Scheme. ACM Trans. Prog. Lang. Syst., 19(1):87-152, 1997.
[33] A. K. Wright and M. Felleisen. A syntactic approach to type soundness. Inf. and Comp., 115(1):38-94, 1994.
[34] A. K. Wright. Simple imperative polymorphism. Lisp and Symbolic Computation, 8(4):343-356, 1995.
[35] K. Yi and S. Ryu. Towards a cost-effective estimation of uncaught exceptions in SML programs. In Static Analysis Symposium '97, number 1302 in LNCS, pages 98-113. Springer-Verlag, 1997.
[36] K. Yi. An abstract interpretation for estimating uncaught exceptions in Standard ML programs. Sci. Comput. Programming, 31(1):147-173, 1998.

A The unification algorithm

In this appendix, we give the unification algorithm for our type algebra modulo the two equations (1) and (2). We write τ for types, ρ and ϕ for rows, α, δ and π for type, row and presence variables, θ and μ for substitutions, and ∂ for the uniform row in which every element is present. We define the head constructor H(r) of a row element r as follows:

    H(i : π) = i        H(C : π) = C        H(D(τ)) = D

The algorithm handles the left commutativity axiom (equation (1)) as in [24].

    mgu(∅) = id

Unification between types:

    mgu({α = α} ∪ Q) = mgu(Q)
    mgu({α = τ} ∪ Q) = mgu(Q{α ← τ}) ∘ {α ← τ}   if α ∉ FV(τ)
    mgu({τ = α} ∪ Q) = mgu(Q{α ← τ}) ∘ {α ← τ}   if α ∉ FV(τ)
    mgu({int[ρ1] = int[ρ2]} ∪ Q) = mgu({ρ1 = ρ2} ∪ Q)
    mgu({exn[ρ1] = exn[ρ2]} ∪ Q) = mgu({ρ1 = ρ2} ∪ Q)
    mgu({τ1 -ϕ1→ τ1′ = τ2 -ϕ2→ τ2′} ∪ Q) = mgu({τ1 = τ2; ϕ1 = ϕ2; τ1′ = τ2′} ∪ Q)

Unification between rows:

    mgu({δ = δ} ∪ Q) = mgu(Q)
    mgu({δ = ρ} ∪ Q) = mgu(Q{δ ← ρ}) ∘ {δ ← ρ}   if δ ∉ FV(ρ)
    mgu({ρ = δ} ∪ Q) = mgu(Q{δ ← ρ}) ∘ {δ ← ρ}   if δ ∉ FV(ρ)
    mgu({∂ = ∂} ∪ Q) = mgu(Q)
    mgu({(i : π; ρ) = ∂} ∪ Q) = mgu({π = Pre; ρ = ∂} ∪ Q)
    mgu({∂ = (i : π; ρ)} ∪ Q) = mgu({π = Pre; ρ = ∂} ∪ Q)
    mgu({(r1; ρ1) = (r2; ρ2)} ∪ Q) = mgu({r1 = r2; ρ1 = ρ2} ∪ Q)   if H(r1) = H(r2)
    mgu({(r1; ρ1) = (r2; ρ2)} ∪ Q) = mgu({ρ1 = (r2; δ); ρ2 = (r1; δ)} ∪ Q)
        if H(r1) ≠ H(r2), where δ is a fresh row variable that is not free in the
        left-hand side and has kind t(S ∪ {H(r1), H(r2)}), t(S) being the kind of
        (r1; ρ1) and (r2; ρ2)

Unification between row elements:

    mgu({(i : π1) = (i : π2)} ∪ Q) = mgu({π1 = π2} ∪ Q)
    mgu({(C : π1) = (C : π2)} ∪ Q) = mgu({π1 = π2} ∪ Q)
    mgu({D(τ1) = D(τ2)} ∪ Q) = mgu({τ1 = τ2} ∪ Q)

Unification between presence annotations:

    mgu({π = π} ∪ Q) = mgu(Q)
    mgu({π = p} ∪ Q) = mgu(Q{π ← p}) ∘ {π ← p}   if p ≠ π
    mgu({p = π} ∪ Q) = mgu(Q{π ← p}) ∘ {π ← p}   if p ≠ π
    mgu({Pre = Pre} ∪ Q) = mgu(Q)

If none of the cases above is applicable, mgu(Q) is undefined.
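To give a concrete feel for the algorithm, here is a small OCaml sketch (ours, purely illustrative, and not the analyzer's actual implementation) of its simplest layer, the unification of presence annotations:

    type pres =
      | Pre               (* the annotation "present" *)
      | PVar of string    (* a presence variable *)

    (* A substitution over presence variables, as an association list. *)
    type subst = (string * pres) list

    (* Resolve a presence annotation through the substitution, following
       chains of variable bindings. *)
    let rec resolve (s : subst) (p : pres) : pres =
      match p with
      | PVar v ->
          (match List.assoc_opt v s with
           | Some p' -> resolve s p'
           | None -> p)
      | Pre -> Pre

    (* Mirror of the four "presence" cases of mgu: identical annotations
       unify trivially; an unbound variable is bound to the other side. *)
    let unify_pres (s : subst) (p1 : pres) (p2 : pres) : subst =
      match resolve s p1, resolve s p2 with
      | Pre, Pre -> s
      | PVar v, p | p, PVar v ->
          if p = PVar v then s else (v, p) :: s

For example, unify_pres [] (PVar "a") Pre yields the substitution [("a", Pre)]. The full algorithm layers the same mechanism over row elements, rows (with the kinding side conditions above), and types.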
B The type inference algorithm

The result of the algorithm W(E, a) is the triple (τ, ϕ, θ), consisting of a type, an effect row, and a substitution, defined by induction on a as follows:

If a is x (with x ∈ Dom(E)):
    let ϕ0 be a fresh variable of kind EXN(∅)
    take τ = Inst(E(x)) and ϕ = ϕ0 and θ = id.

If a is i:
    let ρ0 be a fresh variable of kind INT({i}) and ϕ0 a fresh variable of kind EXN(∅)
    take τ = int[i : Pre; ρ0] and ϕ = ϕ0 and θ = id.

If a is λx. a1:
    let α0 be a fresh type variable
    let (τ1, ϕ1, θ1) = W(E ⊕ {x : α0}, a1)
    let ϕ0 be a fresh variable of kind EXN(∅)
    take τ = θ1(α0) -ϕ1→ τ1 and ϕ = ϕ0 and θ = θ1.

If a is a1(a2):
    let (τ1, ϕ1, θ1) = W(E, a1)
    let (τ2, ϕ2, θ2) = W(θ1(E), a2)
    let α0 be a fresh type variable
    let μ = mgu{θ2(τ1) = τ2 -ϕ2→ α0; θ2(ϕ1) = ϕ2}
    take τ = μ(α0) and ϕ = μ(ϕ2) and θ = μ ∘ θ2 ∘ θ1.

If a is let x = a1 in a2:
    let (τ1, ϕ1, θ1) = W(E, a1)
    let (τ2, ϕ2, θ2) = W(θ1(E) ⊕ {x : Gen(τ1, θ1(E), ϕ1)}, a2)
    let μ = mgu{θ2(ϕ1) = ϕ2}
    take τ = μ(τ2) and ϕ = μ(ϕ2) and θ = μ ∘ θ2 ∘ θ1.

If a is match a1 with p → a2 | x → a3:
    let (τ1, ϕ1, θ1) = W(E, a1)
    let (E′, τ′, θ′) = Patsubtr(p, τ1)
    let (τ2, ϕ2, θ2) = W(θ′(θ1(E)) ⊕ E′, a2)
    let (τ3, ϕ3, θ3) = W(θ2(θ′(θ1(E))) ⊕ {x : θ2(τ′)}, a3)
    let μ = mgu{θ3(τ2) = τ3; θ3(ϕ2) = ϕ3; θ3(θ2(θ′(ϕ1))) = ϕ3}
    take τ = μ(τ3) and ϕ = μ(ϕ3) and θ = μ ∘ θ3 ∘ θ2 ∘ θ′ ∘ θ1.

If a is C:
    let ρ0 be a fresh variable of kind EXN({C}) and ϕ0 a fresh variable of kind EXN(∅)
    take τ = exn[C : Pre; ρ0] and ϕ = ϕ0 and θ = id.

If a is D(a1):
    let (τ1, ϕ1, θ1) = W(E, a1)
    let τ2 = Inst(TypeArg(D))
    let μ = mgu{τ2 = τ1}
    let ρ0 be a fresh variable of kind EXN({D})
    take τ = exn[D(μ(τ1)); ρ0] and ϕ = μ(ϕ1) and θ = μ ∘ θ1.

If a is try a1 with x → a2:
    let (τ1, ϕ1, θ1) = W(E, a1)
    let (τ2, ϕ2, θ2) = W(θ1(E) ⊕ {x : exn[ϕ1]}, a2)
    let μ = mgu{θ2(τ1) = τ2}
    take τ = μ(τ2) and ϕ = μ(ϕ2) and θ = μ ∘ θ2 ∘ θ1.

The auxiliary function Inst (trivial instantiation):

    Inst(∀ αi, δj, πk. τ) = τ{αi ← αi′, δj ← δj′, πk ← πk′}

where the αi′, δj′, πk′ are fresh variables such that δj′ and δj have the same kind for all j.

The auxiliary function Patsubtr (typing of patterns and pattern subtraction): Patsubtr(p, τ) is the triple (E, τ′, θ) defined by induction on p as follows:

If p is x:
    let α0 be a fresh type variable
    take E = {x : τ} and τ′ = α0 and θ = id.

If p is i:
    let δ0 be a fresh row variable of kind INT({i})
    let μ = mgu{τ = int[i : Pre; δ0]}
    let π0 be a fresh presence variable
    take E = ∅ and τ′ = int[i : π0; μ(δ0)] and θ = μ.

If p is C:
    let δ0 be a fresh row variable of kind EXN({C})
    let μ = mgu{τ = exn[C : Pre; δ0]}
    let π0 be a fresh presence variable
    take E = ∅ and τ′ = exn[C : π0; μ(δ0)] and θ = μ.

If p is D(p1):
    let τ1 = Inst(TypeArg(D))
    let (E1, τ1′, θ1) = Patsubtr(p1, τ1)
    let δ0 be a fresh row variable of kind EXN({D})
    let μ = mgu{τ = exn[D(θ1(τ1)); δ0]}
    take E = μ(E1) and τ′ = exn[D(μ(τ1′)); μ(δ0)] and θ = μ ∘ θ1.
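As a final illustration, here is a hypothetical OCaml rendering of the auxiliary function Inst over a simplified version of the type algebra (kinds are omitted in this sketch, although the definition above requires each fresh row variable to keep the kind of the variable it replaces):

    type typ =
      | TVar of string                  (* type variable *)
      | Arrow of typ * row * typ        (* t1 -phi-> t2, with latent row *)
      | Exn of row                      (* exn[rho] *)
    and row =
      | RVar of string                  (* row variable *)
      | RCons of string * row           (* simplified row element: a tag *)

    (* A type scheme: the quantified variable names and a body. *)
    type scheme = { qvars : string list; body : typ }

    let counter = ref 0
    let fresh v = incr counter; v ^ "_" ^ string_of_int !counter

    (* Inst: replace each quantified variable by a fresh one. *)
    let inst (sch : scheme) : typ =
      let table = List.map (fun v -> (v, fresh v)) sch.qvars in
      let ren v =
        match List.assoc_opt v table with
        | Some v' -> v'
        | None -> v
      in
      let rec rn_typ = function
        | TVar v -> TVar (ren v)
        | Arrow (t1, r, t2) -> Arrow (rn_typ t1, rn_row r, rn_typ t2)
        | Exn r -> Exn (rn_row r)
      and rn_row = function
        | RVar v -> RVar (ren v)
        | RCons (c, r) -> RCons (c, rn_row r)
      in
      rn_typ sch.body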