Fig 3 - uploaded by Neil D. Jones
A set of size-change graphs that safely describes Ω's nonterminating computation.


Source publication
Conference Paper
Full-text available
An algorithm is developed that, given an untyped λ-expression, can certify that its call-by-value evaluation will terminate. It works by an extension of the “size-change principle” earlier applied to first-order programs. The algorithm is sound (and proven so in this paper) but not complete: some λ-expressions may in fact terminate under call-by-va...

Context in source publication

Context 1
... 3. Figure 3 shows a graph set G that is safe for program Ω = (λx.x@x) (λy.y@y). ...
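The cited context's Ω = (λx.x@x) (λy.y@y) is the canonical divergent term: every β-step rewrites Ω back to itself. As a minimal illustration (not the paper's analysis), a direct Python transliteration makes the loop observable, with Python's recursion limit standing in for nontermination:

```python
# Omega = (\x. x x)(\y. y y): each beta-step reproduces the same redex,
# so call-by-value evaluation never terminates. A direct Python
# transliteration exhausts the interpreter's stack instead of looping:
self_app = lambda x: x(x)

def run_omega():
    try:
        self_app(self_app)       # (\x. x x)(\y. y y) in spirit
    except RecursionError:
        return "diverges"        # stack overflow stands in for nontermination
    return "terminated"
```

The graph set G in Figure 3 is what certifies, statically, that this self-application can call itself forever without any argument getting smaller.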

Similar publications

Article
Full-text available
In this paper, we present iterative and non-iterative methods for the solution of nonlinear optimal control problems (NOCPs) and address the sufficient conditions for uniqueness of solution. We also study convergence properties of the given techniques. The approximate solutions are calculated in the form of a convergent series with easily comput...
Article
Full-text available
The Wiener–Kolmogorov principle of minimizing the mean square estimation error is discussed in the framework of prediction theory, from both theoretical and practical points of view. Alternatives for suboptimal solutions, more easily computable and not requiring the explicit knowledge of the covariance function, are proposed. A robust version of t...

Citations

... There is a vast literature on techniques for proving termination, all of which ultimately find their roots in the notion of well-founded metrics introduced by Turing (1949). Jones and Bohr (2004) and Sereni and Jones (2005) embody this idea via the "size-change principle" that they use to verify termination of recursive functions, and which can be rephrased as a contract to enable dynamic termination checking (Nguyen et al., 2019). Proof assistants like Coq (Bertot and Castéran, 2004) and Isabelle (Wenzel, 2016) employ structural t ...
Preprint
Full-text available
Refinement types enrich a language's type system with logical predicates that circumscribe the set of values described by the type, thereby providing software developers a tunable knob with which to inform the type system about what invariants and correctness properties should be checked on their code. In this article, we distill the ideas developed in the substantial literature on refinement types into a unified tutorial that explains the key ingredients of modern refinement type systems. In particular, we show how to implement a refinement type checker via a progression of languages that incrementally add features to the language or type system.
... It consists in following arguments along function calls and checking that, in every potential loop, one of them decreases. First introduced for first-order functional languages, it has then been extended to many other settings: untyped λ-calculus [21], a subset of OCaml [32], Martin-Löf's type theory [38], System F [27]. ...
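The principle this excerpt sketches — follow argument sizes along calls and require that in every potential loop some argument strictly decreases — admits a compact illustration. The following is a simplified sketch under assumed conventions (a size-change graph as a dict mapping (source, target) parameter pairs to '<' or '<='), not the cited algorithm itself:

```python
# Size-change termination (SCT), sketched: build the composition closure of
# the call-site graphs; termination holds if every idempotent graph in the
# closure has a strictly decreasing self-arc p -<- p.

def compose(g1, g2):
    """Compose two size-change graphs: an arc is strict if either leg is."""
    out = {}
    for (a, b), r1 in g1.items():
        for (b2, c), r2 in g2.items():
            if b != b2:
                continue
            r = '<' if '<' in (r1, r2) else '<='
            if out.get((a, c)) != '<':   # keep the stronger relation
                out[(a, c)] = r
    return out

def closure(graphs):
    """All compositions along call paths (worklist fixpoint)."""
    graphs = list(graphs)
    seen = {frozenset(g.items()) for g in graphs}
    work = list(graphs)
    while work:
        g = work.pop()
        for h in list(graphs):
            for c in (compose(g, h), compose(h, g)):
                key = frozenset(c.items())
                if key not in seen:
                    seen.add(key)
                    graphs.append(c)
                    work.append(c)
    return graphs

def sct_terminates(graphs):
    """Every idempotent graph (a potential loop) must strictly decrease."""
    for g in closure(graphs):
        if compose(g, g) == g:           # idempotent: describes a loop
            if not any(a == b and r == '<' for (a, b), r in g.items()):
                return False
    return True
```

For `f(x) → f(x-1)` the single graph `{('x','x'): '<'}` passes, while the size-preserving swap `f(x,y) → f(y,x)` fails: its idempotent compositions carry only non-strict self-arcs.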
Preprint
Dependency pairs are a key concept at the core of modern automated termination provers for first-order term rewriting systems. In this paper, we introduce an extension of this technique for a large class of dependently-typed higher-order rewriting systems. This extends previous resultsby Wahlstedt on the one hand and the first author on the other hand to strong normalization and non-orthogonal rewriting systems. This new criterion is implemented in the type-checker Dedukti.
... The inductive reasoning assumes termination of expressions in the input program, which is verified independently using an existing termination checker. We use the Leon termination checker in our implementation [78], but other termination algorithms for higher-order programs [31,37,68] are also equally applicable. Note that memoization only affects resource usage and not termination, and lazy suspensions are in fact lambdas with unit parameters. ...
... Reachability Relation. We define a relation ⇝ (similar to the calls relation of Sereni, Jones and Bohr [37,68]) that characterizes the environments that may reach an expression during an evaluation. Let Γ, e ⇝ Γ′, e′ iff there exists a semantic rule shown in Fig. 4 of the following form for some (possibly empty) set of antecedents A₁, ..., Aₙ (n ∈ ℕ). ...
Article
Full-text available
We present a new approach for specifying and verifying resource utilization of higher-order functional programs that use lazy evaluation and memoization. In our approach, users can specify the desired resource bound as templates with numerical holes e.g. as steps ≤ ? * size(l) + ? in the contracts of functions. They can also express invariants necessary for establishing the bounds that may depend on the state of memoization. Our approach operates in two phases: first generating an instrumented first-order program that accurately models the higher-order control flow and the effects of memoization on resources using sets, algebraic datatypes and mutual recursion, and then verifying the contracts of the first-order program by producing verification conditions of the form ∃ ∀ using an extended assume/guarantee reasoning. We use our approach to verify precise bounds on resources such as evaluation steps and number of heap-allocated objects on 17 challenging data structures and algorithms. Our benchmarks, comprising of 5K lines of functional Scala code, include lazy mergesort, Okasaki’s real-time queue and deque data structures that rely on aliasing of references to first-class functions; lazy data structures based on numerical representations such as the conqueue data structure of Scala’s data-parallel library, cyclic streams, as well as dynamic programming algorithms such as knapsack and Viterbi. Our evaluations show that when averaged over all benchmarks the actual runtime resource consumption is 80% of the value inferred by our tool when estimating the number of evaluation steps, and is 88% for the number of heap-allocated objects.
... Size-change analysis is a general method for automated termination proofs. In fact, this method has been applied in the termination analysis of higher-order programs [8], logic programs [2], and term rewrite systems [16]. ...
Article
Full-text available
We undertake the study of size-change analysis in the context of Reverse Mathematics. In particular, we prove that the SCT criterion is equivalent to $\Sigma^0_2$-induction over RCA$_0$.
... For proving the termination of rewrite relations on λ-terms, one can try to extend to λ-calculus techniques developed for first-order rewriting (e.g. [LSS92,vdP96,SWS01,JB04,FK12]) or, vice versa, adapt to rewriting techniques developed for λ-calculus (e.g. [JO91,Bla04,BR06]). ...
Article
In this paper, we show how to extend the notion of reducibility introduced by Girard for proving the termination of $\beta$-reduction in the polymorphic $\lambda$-calculus, to prove the termination of various kinds of rewrite relations on $\lambda$-terms, including rewriting modulo some equational theory and rewriting with matching modulo $\beta$$\eta$, by using the notion of computability closure. This provides a powerful termination criterion for various higher-order rewriting frameworks, including Klop's Combinatory Reductions Systems with simple types and Nipkow's Higher-order Rewrite Systems.
... LIQUIDHASKELL allows more precise analysis than catch; thus, by assigning the appropriate types to Error functions ( § 3) it tracks reachable incomplete patterns as a side-effect of verification. Termination Analysis is crucial for LIQUIDHASKELL's soundness [39] and is implemented using a technique inspired by [41]. Various other authors have proposed techniques to verify termination of recursive functions, either using the "size-change principle" [18,32], or by annotating types with size indices and verifying that the arguments of recursive calls have smaller indices [3,17]. To our knowledge, none of the above analyses have been empirically evaluated on large and complex real-world libraries. ...
Article
Full-text available
Haskell has many delightful features. Perhaps the one most beloved by its users is its type system that allows developers to specify and verify a variety of program properties at compile time. However, many properties, typically those that depend on relationships between program values are impossible, or at the very least, cumbersome to encode within the existing type system. Many such properties can be verified using a combination of Refinement Types and external SMT solvers. We describe the refinement type checker liquidHaskell, which we have used to specify and verify a variety of properties of over 10,000 lines of Haskell code from various popular libraries, including containers, hscolour, bytestring, text, vector-algorithms and xmonad. First, we present a high-level overview of liquidHaskell, through a tour of its features. Second, we present a qualitative discussion of the kinds of properties that can be checked -- ranging from generic application independent criteria like totality and termination, to application specific concerns like memory safety and data structure correctness invariants. Finally, we present a quantitative evaluation of the approach, with a view towards measuring the efficiency and programmer effort required for verification, and discuss the limitations of the approach.
... Unlike the above, we use refinements to obtain terminating fixpoints (tfix), which let us prove the vast majority (of sub-expressions) in real world libraries as non-diverging, avoiding the restructuring that would be required by the partiality monad. Termination Analyses Various authors have proposed techniques to verify termination of recursive functions, either using the "size-change principle" [23,34], or by annotating types with size indices and verifying that the arguments of recursive calls have smaller indices [3,20]. Our use of refinements to encode terminating fixpoints is most closely related to [42], but this work also crucially assumes CBV semantics for soundness. ...
Article
Full-text available
SMT-based checking of refinement types for call-by-value languages is a well-studied subject. Unfortunately, the classical translation of refinement types to verification conditions is unsound under lazy evaluation. When checking an expression, such systems implicitly assume that all the free variables in the expression are bound to values. This property is trivially guaranteed by eager, but does not hold under lazy, evaluation. Thus, to be sound and precise, a refinement type system for Haskell and the corresponding verification conditions must take into account which subset of binders actually reduces to values. We present a stratified type system that labels binders as potentially diverging or not, and that (circularly) uses refinement types to verify the labeling. We have implemented our system in LIQUIDHASKELL and present an experimental evaluation of our approach on more than 10,000 lines of widely used Haskell libraries. We show that LIQUIDHASKELL is able to prove 96% of all recursive functions terminating, while requiring a modest 1.7 lines of termination-annotations per 100 lines of code.
... Size-based Termination Analyses have been used to verify termination of recursive functions, either using the "size-change principle" [2,17], or via the type system [15,29] by annotating types with size indices and verifying that the arguments of recursive calls have smaller indices. In work closely related to ours, Xi [37] encoded sizes via refinement types to prove totality of programs. ...
Article
Full-text available
SMT-based verifiers have long been an effective means of ensuring safety properties of programs. While these techniques are well understood, we show that they implicitly require eager semantics; directly applying them to a lazy language is unsound due to the presence of divergent sub-computations. We recover soundness by composing the safety analysis with a termination analysis. Of course, termination is itself a challenging problem, but we show how the safety analysis can be used to ensure termination, thereby bootstrapping soundness for the entire system. Thus, while safety invariants have long been required to prove termination, we show how termination proofs can be used to soundly establish safety. We have implemented our approach in liquidHaskell, a Refinement Type-based verifier for Haskell. We demonstrate its effectiveness via an experimental evaluation using liquidHaskell to verify safety, functional correctness and termination properties of real-world Haskell libraries, totaling over 10,000 lines of code.
... It would be interesting to determine if techniques developed independently for testing with WQOs and the terminator literature can be ported over from one to the other. Finally, the use of homeomorphic embedding is also present in the static analysis world, where it is used to statically detect the termination of higher-order functions [15]. ...
Conference Paper
Full-text available
We describe a library-based approach to constructing termination tests suitable for controlling termination of symbolic methods such as partial evaluation, supercompilation and theorem proving. With our combinators, all termination tests are correct by construction. We show how the library can be designed to embody various optimisations of the termination tests, which the user of the library takes advantage of entirely transparently.
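A concrete instance of the kind of test this library builds — and of the homeomorphic embedding mentioned in the citing excerpt above — can be sketched as follows (hypothetical term representation: a term is a `(symbol, children)` pair):

```python
# Homeomorphic embedding, the classic "whistle" for stopping symbolic
# methods such as supercompilation: t1 embeds into t2 if t1 can be
# obtained from t2 by deleting nodes.

def embeds(t1, t2):
    """True iff t1 is homeomorphically embedded in t2."""
    f1, kids1 = t1
    f2, kids2 = t2
    # Diving: t1 embeds into some child of t2.
    if any(embeds(t1, c) for c in kids2):
        return True
    # Coupling: same head symbol and arity, children embed pairwise.
    return (f1 == f2 and len(kids1) == len(kids2)
            and all(embeds(a, b) for a, b in zip(kids1, kids2)))
```

By Kruskal's tree theorem this relation is a well-quasi-order, so any infinite sequence of terms must eventually contain an embedded pair — which is exactly what makes it a safe stopping criterion: for example, `f(a)` embeds into the later, larger term `f(g(a))`.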