Termination of Untyped Lambda Calculus by Flow Analysis (invited talk)

Neil D. Jones

University of Copenhagen (emeritus)

An algorithm is developed that, given an untyped lambda-expression, can certify that its call-by-value evaluation will terminate. It works by an extension of the ``size-change principle'' earlier applied to first-order programs. The algorithm is sound but not complete (else the halting problem would have been solved): Some lambda-expressions may in fact terminate under call-by-value evaluation, but not be recognised as terminating.

The intensional power of size-change termination is reasonably high: it certifies as terminating all primitive recursive programs (including higher-order primitive recursion) and many interesting and useful general recursive algorithms, including programs with mutual recursion and parameter exchange, and Colson's ``minimum'' algorithm. Further, the approach allows free use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
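For a feel of what the analysis handles, here is a minimal sketch (my example, not one from the talk) of the parameter-exchange pattern mentioned above, written as two mutually recursive functions over non-negative integers:

```ocaml
(* Mutual recursion with parameter exchange: each call passes the other
   parameter first and the old first parameter, decreased, second.  No
   parameter decreases in its own position, yet size-change analysis
   certifies termination: any infinite call sequence would force some
   value to descend forever, which is impossible for non-negative
   integers. *)
let rec f x y = if x = 0 then y else g y (x - 1)
and g x y = if x = 0 then y else f y (x - 1)
```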

(joint work with Nina Bohr)

Slides available as .pdf

Obtaining Formal Semantics by Specialisation

Jørgen Steensgaard-Madsen

Ideally, the semantics of programming-language constructs may be derived by specialisation of definitions, provided the notion of definition has an appropriate semantics. Details of the definitions are eliminated, and a rule-based semantics related to Dijkstra's wp-transformer is obtained. The semantics of definitions is thus of primary concern, rather than that of the various specific constructs.

This possibility was indicated in the Algol 60 report, which -- with some reservation -- pointed out that the semantics of a while-construct could be defined recursively in terms of an if-construct.
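As a minimal sketch (my rendering, not the author's framework), the Algol 60 observation reads as a recursive definition when statements are modelled as store transformers:

```ocaml
(* while b do s  =  if b then (s; while b do s),
   with statements modelled as functions from stores to stores. *)
let rec while_ (cond : 's -> bool) (body : 's -> 's) (store : 's) : 's =
  if cond store then while_ cond body (body store) else store

(* For example, while_ (fun n -> n < 10) (fun n -> n + 1) 0 yields 10. *)
```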

Dijkstra's approach to semantics by wp-transformers can be generalised, and a language framework can be designed, so that semantics can be derived as described. The framework is general enough to cover declarations (including object instantiation) as well as statements and use of operator symbols.

The appropriate notion of define-semantics is essentially a formalisation of the copy-rule semantics, likewise dating back to Algol 60.

The presentation is intended to give an informal overview of concepts and existing tools. The objective is to establish collaboration on the foundational notions.

Slides available as .pdf

Soundness Is Not Sufficient

Fritz Henglein

University of Copenhagen

Static program analysis seeks to determine properties of a program's behavior, immediately subjecting it to Rice's Curse of inherent undecidability. Typically, this is addressed by showing only that a particular analysis is sound.

In this talk we argue that soundness by itself is insufficient as a general quality criterion and propose a number of ``goodness'' criteria that program analyses should be subjected to: fitness for a purpose, declarative specification, unimprovability, transformation invariance, compositional certification, constructive interpretation, adaptive computational performance.

This is to facilitate questions such as: Are you sure there does not exist another analysis that is both more precise and more efficient to compute than yours? Is your analysis more efficient than another because it is less precise or because it is more efficiently implemented? Can you represent and efficiently verify a program property asserted by your analysis, or does one need to reanalyze the program (and trust the soundness of the analysis)? Is your analysis robust under program transformations, or may the results swing wildly? Is the computational performance of the analysis predictable from some easily computed input parameter?

The talk illustrates some of the goodness criteria using examples from flow analysis, type-error analysis, and termination analysis (time permitting), and identifies related open problems.

Slides available as .pdf

Ad Hoc Static Analysis in the Dart Compiler: Store-to-Load Forwarding and Allocation Sinking

Kevin Millikin

Google

The phrase 'ad hoc' means designed or built for only one particular purpose. In non-academic compilers, ad hoc static analyses are used shamelessly. Such analyses are implemented for a particular compiler, working on a particular program representation, and at a particular place in the compilation pipeline. This non-idealized setting can be a curse when it imposes additional constraints on the implementation. It can also be a blessing when it allows the implementation to take shortcuts that would be unpalatable to a program committee, or to freely mix invention and off-the-shelf techniques.

By their nature, ad hoc analyses are not intended to be generalized, if only because the value of doing so is less than the cost of keeping the developers from moving on to some other compiler improvement. However, there can still be both practical and theoretical value in examining such analyses. We describe two such analyses and their implementation in the Dart programming language's optimizing compiler.
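To make the first of these concrete, here is a deliberately simplified sketch of store-to-load forwarding on a toy three-address IR (my illustration; the Dart compiler's actual representation and pass differ, and a realistic pass must also invalidate entries across calls and possibly-aliasing stores):

```ocaml
(* Toy store-to-load forwarding within one basic block: a load of
   obj.field is rewritten into a move of the value most recently stored
   to that field.  Aliasing is deliberately ignored here. *)
type instr =
  | Store of string * string * string  (* Store (obj, field, src): obj.field <- src *)
  | Load  of string * string * string  (* Load (dst, obj, field):  dst <- obj.field *)
  | Move  of string * string           (* Move (dst, src):         dst <- src       *)

module Env = Map.Make (String)

let forward (block : instr list) : instr list =
  let key obj field = obj ^ "." ^ field in
  let step (env, acc) ins =
    match ins with
    | Store (o, f, v) -> (Env.add (key o f) v env, ins :: acc)
    | Load (d, o, f) ->
      (match Env.find_opt (key o f) env with
       | Some v -> (env, Move (d, v) :: acc)  (* load forwarded away *)
       | None -> (env, ins :: acc))
    | Move _ -> (env, ins :: acc)
  in
  let _, acc = List.fold_left step (Env.empty, []) block in
  List.rev acc
```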

(joint work with Vyacheslav Egorov, Srdjan Mitrovic and Florian Schneider)

Ranking Functions for Linear-Constraint Loops

Amir Ben-Amram

Tel-Aviv Academic College

Ranking functions are a tool successfully used in termination analysis, complexity analysis, and program parallelization. Among the different types of ranking functions and approaches to finding them, this talk will concentrate on functions that are found by linear programming techniques: linear and lexicographic-linear ranking functions. The setting is that of a loop that has been pre-abstracted so that it is described by linear constraints over a finite set of numeric variables. I will review results (more or less recent) regarding the existence and complexity of complete solutions to these algorithmic problems.
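As a minimal worked instance (my example, not necessarily one used in the talk), take the loop ``while x >= 0 do x := x - y'' under the invariant y >= 1, pre-abstracted into a single transition described by linear constraints:

```latex
x \ge 0, \qquad y \ge 1, \qquad x' = x - y, \qquad y' = y
```

Here f(x, y) = x is a linear ranking function: the constraints entail f(x, y) >= 0 on enabled states and f(x', y') <= f(x, y) - 1 on every transition, and the existence of such an affine f can be decided by linear programming, for instance via Farkas' lemma.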

(joint work with Samir Genaim)

Slides available as .pdf

Contract-based Programming with Ada 2012 - an Experience Report

Jacob Sparre Andersen

JSA Research & Innovation / AdaHeads K/S

The 2012 version of the Ada programming language standard includes checked "contracts" and "aspects" for subprograms and types. Some of these are by definition checked at compile time, while other checks can be postponed to run time if a static analysis is infeasible (or simply not implemented).

At AdaHeads, we are currently developing a hosted telephone reception system, where the core component is written in Ada 2012. We picked Ada 2012 specifically to be able to use the contracts and aspects to increase our confidence that the software is correct.

Our experience so far is that GNAT-GPL-2013 (the only generally available Ada 2012 compiler) implements static (compile-time) checking of contracts and aspects only where the language standard requires it. This means that, for now, the big static-analysis benefits of using Ada come from the basic type system, which also existed in earlier versions of the standard; the major benefit of switching to Ada 2012 at the moment lies in the improved run-time checks.

Slides available as .pdf

From Natural Deduction to the Sequent Calculus by Passing an Accumulator

Matthias Puech

Aarhus University

In his seminal 1935 paper, Gentzen introduced the two well-known, equivalent calculi for deductive inference: natural deduction and the sequent calculus. The widespread intuition is that the sequent calculus is obtained from natural deduction by "reversing" the elimination steps into left introduction rules. Yet, when introducing a new connective or a new variant of deduction, we must define a new pair of calculi and show that they are equivalent.

We revisit this relationship, turning the intuition into a system, and show how to derive the rules of a sequent calculus-style system from the rules of a natural deduction-style one: with the help of modern functional programming tools, we show that the two stand in the same relationship as a recursive program and its iterative, accumulator-passing equivalent. Moreover, this program transformation can be expressed almost entirely by off-the-shelf program transformations used to compile functional programming languages.
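The analogy in miniature (a generic illustration, not the calculi themselves): a recursive function stands to its accumulator-passing version as, in the talk, natural deduction stands to the sequent calculus.

```ocaml
(* Direct-style recursion: the pending additions live on the call stack. *)
let rec sum = function
  | [] -> 0
  | x :: xs -> x + sum xs

(* The iterative, accumulator-passing equivalent: the pending work is
   threaded through an explicit accumulator instead. *)
let sum_acc xs =
  let rec go acc = function
    | [] -> acc
    | x :: xs -> go (acc + x) xs
  in
  go 0 xs
```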

More precisely, starting from a type-checker for the bidirectional lambda-calculus, we successively apply a CPS transformation and defunctionalization, as showcased by Danvy, then optimization and reforestation, and obtain a type-checker for a sequent calculus with focusing features, Herbelin's lambda-bar-calculus.
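On the same toy function, that recipe can be replayed (a sketch of the general technique, not of the type-checker itself): the CPS transformation makes the pending work an explicit continuation, and defunctionalization then reifies that continuation as first-order data, recovering the accumulator.

```ocaml
(* Step 1: CPS transformation.  The continuation k records what remains
   to be done with the result. *)
let rec sum_cps xs k =
  match xs with
  | [] -> k 0
  | x :: xs' -> sum_cps xs' (fun r -> k (x + r))

(* Step 2: defunctionalization.  Each continuation shape becomes a
   constructor of a first-order datatype, interpreted by [apply]. *)
type cont = Done | Add of int * cont

let rec apply k r =
  match k with
  | Done -> r
  | Add (x, k') -> apply k' (x + r)

let rec sum_defun xs k =
  match xs with
  | [] -> apply k 0
  | x :: xs' -> sum_defun xs' (Add (x, k))

let sum' xs = sum_defun xs Done
```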

Slides available as .pdf

Extracting Configuration Constraints from Systems Software

Thorsten Berger

ITU Copenhagen

We present a static analysis technique and an empirical study of variability constraints in highly configurable systems software. The talk is based on an ongoing project that aims to understand the relationship between variability models (the input to a configurator tool) and the source code (made variable using preprocessor directives and the build system) of highly configurable systems. The talk introduces the field of variability-aware code analyses, motivates our empirical study of variability constraints, describes our technique for extracting such constraints from C code, and discusses preliminary results.

Highly configurable systems offer configuration options for tailoring software to specific needs. Values and combinations of configuration options are restricted by constraints, which may be hidden in the codebase, informally described in the documentation, or formally declared in a variability model. While the current practice of defining and evolving constraints largely requires manual effort, can we support these activities with automated techniques? Is it possible to recover variability constraints from existing code to re-engineer variability models, and to what extent?

To answer these questions, we present a technique to extract variability constraints from C code. We conceive and implement static analyses and study their effectiveness by comparing extracted constraints to those that are declared in variability models. We apply our approach to four large, highly configurable systems. We quantify the extent of recoverable constraints by our technique, and qualitatively investigate variability model constraints that are not recoverable from code. Our results show that the extraction of variability constraints from code is feasible and effective, but also that many domain-specific dependencies between configuration options solely reside in variability models. We believe that our approach, our tooling, and our experimental results can be used by researchers and practitioners working on variability model re-engineering, evolution, and consistency checking techniques.
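As a minimal sketch of one extraction idea (my illustration; the analyses in the project are considerably richer): if a code block guarded by presence condition pc unconditionally breaks the build, for instance via a preprocessor #error directive, then every valid configuration must falsify pc, which yields a constraint over the configuration options.

```ocaml
(* Presence conditions as a tiny propositional AST over feature names. *)
type pc = Var of string | Not of pc | And of pc * pc | Or of pc * pc

(* A block that always fails to build contributes the constraint that
   its presence condition must not hold in any valid configuration. *)
let constraint_of_error_block (pc : pc) : pc = Not pc

(* Example: "#if defined(A) && !defined(B)  #error ..." yields
   Not (And (Var "A", Not (Var "B"))), i.e. "A implies B". *)
let example = constraint_of_error_block (And (Var "A", Not (Var "B")))
```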

(joint work with Sarah Nadi, Christian Kästner and Krzysztof Czarnecki)