DANSAS 2010, Odense, August 19
|09:00 - 09:30||Coffee and breakfast rolls|
|09:30 - 09:40||Welcome|
|09:40 - 10:25||Static Analysis of Parallel Languages (invited talk)|
|10:30 - 11:00||Fruit Break|
|11:00 - 11:25||Static Output Analysis for Java Servlets and JSP|
|11:35 - 12:00||Types for the psi-calculus|
|12:05 - 13:35||Lunch|
|13:35 - 14:00||Abstract Interpretation of Temporal Logic: Abstract Model Checking Revisited|
|14:10 - 14:35||Extending Dylan's Type System for Better Type Inference and Error Detection|
|14:45 - 15:10||Typed and Unambiguous Pattern Matching on Strings using Regular Expressions (Jakob G. Thomsen)|
|15:15 - 15:45||Coffee Break|
|15:45 - 16:10||Tool-supported Refinement-centric Analysis, Design, Development, and Verification|
|16:20 - 16:45||Lattice Automata for Program Analysis (Mads Chr. Olesen)|
|16:50 - 17:00||Wrap-Up|
Static Analysis of Parallel Languages (invited talk)
We present a core calculus with two of X10's key constructs for parallelism, namely async and finish. Our calculus forms a convenient basis for type systems and static analyses for languages with async-finish parallelism, and for tractable proofs of correctness. For example, we give a one-page proof of the deadlock-freedom theorem of Saraswat and Jagadeesan. Our main contribution is a type system that solves the open problem of context-sensitive may-happen-in-parallel analysis for languages with async-finish parallelism. We prove the correctness of our type system and we report experimental results of performing type inference on 13,000 lines of X10 code. Our analysis runs in polynomial time and produces a low number of false positives, which suggests that our analysis is a good basis for other analyses such as race detectors.
(Joint work with Jonathan Lee; presented at PPoPP 2010)
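As an illustration of the async-finish model the abstract refers to, here is a minimal sketch in Python (not X10); all names here are invented for the example. The idea is that a `finish` block does not complete until every `async` spawned inside it has terminated, which is what makes the model amenable to deadlock-freedom arguments and may-happen-in-parallel analysis.

```python
import threading

class Finish:
    """A toy analogue of X10's finish construct: the block does not
    complete until every async spawned inside it has terminated."""
    def __enter__(self):
        self.threads = []
        return self

    def async_(self, fn, *args):
        # async: run fn in parallel with the rest of the block
        t = threading.Thread(target=fn, args=args)
        t.start()
        self.threads.append(t)

    def __exit__(self, *exc):
        # finish: join every spawned activity before continuing
        for t in self.threads:
            t.join()
        return False

results = []
with Finish() as f:
    f.async_(results.append, 1)
    f.async_(results.append, 2)
# Both asyncs are guaranteed to have terminated here.
print(sorted(results))  # → [1, 2]
```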
About the speaker: Jens Palsberg is a Professor of Computer Science at UCLA. His research interests span the areas of compilers, embedded systems, programming languages, software engineering, and information security. Starting in Summer 2010, he is the editor-in-chief of ACM Transactions on Programming Languages and Systems. He is also a member of the editorial board of Information and Computation and a former member of the editorial board of IEEE Transactions on Software Engineering, and he has served as program chair of POPL, SAS, TACAS, and EMSOFT.
Static Output Analysis for Java Servlets and JSP
Java Servlets and JSP constitute a widely popular platform for writing web applications. As with most such frameworks, no guarantees are given for the well-formedness or validity of the output generated by such applications. The programmer is therefore responsible for ensuring correctness, and this task is often carried out by limited testing of output samples, leaving many errors undetected.
The talk presents a program analysis that conservatively approximates and verifies the output from such web applications. The approach builds on a collection of program analysis techniques developed earlier in the JWIG and XACT projects. The formal description of the output language of a web application furthermore enables inter-servlet control flow analysis, which can be used to ensure consistency of, for instance, form fields and parameter names.
(Joint work with Anders Møller and Christian Kirkegaard)
Types for the psi-calculus
The psi-calculus was introduced by Bengtson et al. as a generalization of the many variants of the pi-calculus, and it has been shown that many existing process calculi can be captured as instances of this general framework.
In this paper we describe a simple type system for the psi-calculus. The type system satisfies a subject reduction property, and we introduce a general notion of safety. We show that a number of existing type systems can easily be captured in our setting: we give a type system (the first of its kind) for the calculus of explicit fusions by Wischik and Gardner, another for the distributed pi-calculus of Hennessy and Riely, and finally we show how existing type systems for secrecy and authenticity in the spi calculus can be represented.
Abstract Interpretation of Temporal Logic: Abstract Model Checking Revisited
We present the semantics of the propositional µ-calculus, a logic which subsumes several other temporal logics for reasoning about reactive systems modelled as Kripke structures. The semantics gives an evaluation of a given µ-calculus formula with respect to a given Kripke structure, returning the set of states of the structure where the formula holds. The framework of abstract interpretation is then applied to obtain abstract semantics based on a Galois connection between the powerset of states of the Kripke structure and some abstract domain. As usual, the abstract semantics yields (a description of) an over-approximation, with respect to the subset order, of the set of states where the formula holds.
We review some previous research and an implementation in which we show that this approach yields a practical and arguably simpler solution to the abstract model checking problem for arbitrary temporal formulas. Most previous work in this area uses mixed or modal transition systems. In the second half of the talk we argue that modal transition systems are generalised by the abstract interpretation framework. Given a Galois connection with adjoint functions alpha and gamma we apply a standard construction to obtain a dual connection between the same domains, but with the orderings reversed. That is, the dual abstract semantics evaluates a µ-calculus formula with respect to a given Kripke structure, obtaining an over-approximation with respect to the superset order, which we may also view as an under-approximation with respect to the subset order.
The two semantics together give what has been called "possible" and "necessary" semantics. In the case where the abstract domain is a partition of the states of the Kripke structure, as is the case in abstract model checking, the dual semantics can be re-interpreted as evaluation over a modal transition system with "must" and "may" transitions between the abstract states. However, the abstract interpretation approach also applies to abstractions that are not partitions.
(Joint work with Gourinath Banda)
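The fixpoint evaluation described above can be made concrete with a toy example. The following sketch, whose Kripke structure and names are invented for illustration, evaluates the µ-calculus formula mu Z. p \/ <>Z ("p is reachable") by Kleene iteration from the empty set:

```python
# A minimal least-fixpoint evaluation of mu Z. p \/ <>Z over a tiny
# Kripke structure. The structure and labelling are illustrative only.

states = {0, 1, 2, 3}
trans = {0: {1}, 1: {2}, 2: {2}, 3: {3}}   # successor relation
labels = {2}                                # states where p holds

def pre(S):
    """States with some successor in S (the diamond modality <>S)."""
    return {s for s in states if trans[s] & S}

def lfp(f):
    """Least fixed point of a monotone f by Kleene iteration."""
    S = set()
    while True:
        S2 = f(S)
        if S2 == S:
            return S
        S = S2

reach_p = lfp(lambda Z: labels | pre(Z))
print(sorted(reach_p))  # → [0, 1, 2]  (state 3 cannot reach p)
```

An abstract semantics in the sense of the talk would run the same iteration over an abstract domain connected to the powerset of states by a Galois connection, yielding an over-approximation of `reach_p`.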
Extending Dylan's Type System for Better Type Inference and Error Detection
IT University of Copenhagen
Dynamically and statically typed programming languages have different strengths. Statically typed languages provide early type error detection and optimization opportunities thanks to type inference; dynamically typed ones provide dynamism and enable rapid prototyping. Gradual typing (Jeremy Siek, 2006) integrates the dynamic type (known only at runtime) into the typed lambda calculus. This talk is about integrating static features into Dylan, a dynamically typed programming language in the LISP family with automatic memory management, an object-centered core, and higher-order functions. As in LISP, meta-programming facilities are available. In contrast to common object-oriented programming languages, methods are not defined inside classes but alongside them; both methods and classes are thus first-class citizens, a concept known as generic functions. The method to invoke is selected by taking the runtime types of all arguments into account, a mechanism known as multiple dispatch.
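As a rough illustration of generic functions and multiple dispatch (in Python rather than Dylan; the registry design is invented for this sketch, and real Dylan dispatch also walks the class hierarchy rather than requiring exact types):

```python
# A toy generic function: the method is chosen by the runtime types
# of *all* arguments, as in Dylan's multiple dispatch.

class GenericFunction:
    def __init__(self):
        self.methods = {}   # (type, type, ...) -> callable

    def method(self, *types):
        def register(fn):
            self.methods[types] = fn
            return fn
        return register

    def __call__(self, *args):
        # exact-type dispatch only, for brevity
        fn = self.methods.get(tuple(type(a) for a in args))
        if fn is None:
            raise TypeError("no applicable method")
        return fn(*args)

describe = GenericFunction()

@describe.method(int, str)
def _(n, s):
    return f"int/str: {n}-{s}"

@describe.method(str, int)
def _(s, n):
    return f"str/int: {s}-{n}"

print(describe(1, "a"))   # → int/str: 1-a
print(describe("a", 1))   # → str/int: a-1
```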
I integrated gradual typing and parametric polymorphism into Dylan. Dylan poses special challenges for gradual typing: multiple return values, variable-arity methods, and generic functions. My thesis extends the type system and implements the extension in the mainstream Dylan compiler, together with improved type inference. As a result, more dispatches are resolved at compile time, as witnessed by the Dylan standard library.
A live visualization of the control and data flow was developed as a powerful tool to comprehend the workflow of the compiler and to identify and avoid bugs during compilation. The visualization helps to understand why the compiler cannot track type correctness of certain expressions and why the compiler cannot optimize certain expressions.
Typed and Unambiguous Pattern Matching on Strings using Regular Expressions
Jakob G. Thomsen
We show how to achieve typed and unambiguous declarative pattern matching on strings using regular expressions extended with a simple recording operator.
We give a characterization of ambiguity of regular expressions that leads to a sound and complete static analysis. The analysis is capable of pinpointing all ambiguities in terms of the structure of the regular expression and of reporting shortest ambiguous strings. We also show how pattern matching can be integrated into statically typed programming languages for deconstructing strings and producing typed and structured values.
We validate our approach by giving a full implementation of the approach presented in this paper. The resulting tool, reg-exp-rec, adds typed and unambiguous pattern matching to Java in a standalone and non-intrusive manner. We evaluate the approach using several realistic examples.
(Joint work with Claus Brabrand)
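To make the ambiguity problem concrete, here is a small sketch using Python's re module rather than the reg-exp-rec tool from the talk. Named groups give a form of typed deconstruction, while the `(a*)(a*)` example shows an ambiguous expression: several decompositions match, and the engine's greedy policy silently picks one, so a capture's meaning depends on matching policy rather than on the pattern alone.

```python
import re

# Typed deconstruction via named groups: captured text is converted
# to typed values. This uses plain re, not the reg-exp-rec tool.
date = re.compile(r'(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})')
m = date.fullmatch('2010-08-19')
record = {k: int(v) for k, v in m.groupdict().items()}
print(record)  # → {'year': 2010, 'month': 8, 'day': 19}

# Ambiguity: (a*)(a*) can split 'aa' three ways; re silently picks
# the greedy split ('aa', '') rather than ('a', 'a') or ('', 'aa').
g = re.fullmatch(r'(a*)(a*)', 'aa').groups()
print(g)  # → ('aa', '')
```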
Tool-supported Refinement-centric Analysis, Design, Development, and Verification
IT University of Copenhagen
Developers, first and foremost, like to write code. Consequently, most software systems have little source documentation, and high-level descriptions, particularly those that focus on architecture, features, and requirements, are limited to README files (if we are lucky). Many modeling languages, of which UML is the most egregious example, have been invented over the years, but few have gained traction outside of the realm of large-scale industry projects where pictures matter more than semantics. In this talk I will introduce the EBON tool suite. EBON includes a modeling language with an airtight semantics, textual and graphical notations, first-class support for features, requirements, and architecture descriptions, user-level extensibility, and quality tool support. Source-level documentation witnessed a quality bump with the widespread adoption of structured documentation tools like Javadoc in the late 90s. Our hypothesis is that tools like EBON will catalyze the adoption of high-level specification in much the same fashion. Oh, and of course, consistency checking between all of these artifacts is accomplished with static analysis...
Lattice Automata for Program Analysis
Mads Chr. Olesen
Lattice automata provide a generalisation of, among others, the monotone framework, timed automata and boolean programs. Having a common formalism allows efficient tools to be developed for all applicable areas, as well as technology transfer from one area to another. One such example is how the Counter-Example Guided Abstraction Refinement method from model checking can be used to refine an abstraction if it is too coarse to prove the property in question.
The different areas of application will be briefly outlined, as well as how efficient tools can be implemented.
Focus will be on how lattice automata can be used for static analysis à la the monotone framework, and how the CEGAR approach can be used to refine those results if they are not precise enough.
(Joint work with René Rydhof Hansen, Kim Guldstrand Larsen, Jiri Srba, Andreas Dalsgaard and Petur Olsen)
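A toy instance of the monotone-framework style of analysis mentioned above: a worklist iteration to a fixed point over a sign lattice. The flow graph, lattice, and transfer functions are invented for illustration, not taken from the talk.

```python
# A minimal monotone-framework instance: worklist iteration to a
# fixed point over the flat sign lattice (bot < '-', '0', '+' < top).

BOT, NEG, ZERO, POS, TOP = 'bot', '-', '0', '+', 'top'

def join(a, b):
    """Least upper bound in the flat sign lattice."""
    if a == BOT: return b
    if b == BOT: return a
    return a if a == b else TOP

def plus_one(s):
    """Transfer function for `x = x + 1` on signs."""
    return {BOT: BOT, NEG: TOP, ZERO: POS, POS: POS, TOP: TOP}[s]

# Nodes: 0: x = 1;  1: loop head (join);  2: x = x + 1 (back edge to 1)
preds = {1: [0, 2], 2: [1]}
transfer = {0: lambda s: POS, 1: lambda s: s, 2: plus_one}

value = {n: BOT for n in (0, 1, 2)}
worklist = [0, 1, 2]
while worklist:
    n = worklist.pop()
    inp = BOT
    for p in preds.get(n, []):
        inp = join(inp, value[p])
    out = transfer[n](inp)
    if out != value[n]:
        value[n] = out
        # re-examine the successors of n
        worklist.extend(s for s, ps in preds.items() if n in ps)
print(value)  # → {0: '+', 1: '+', 2: '+'}
```

If the lattice were too coarse to prove a property of interest, a CEGAR-style refinement would replace it with a more precise one and rerun the same iteration.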