DANSAS 2009, Odense, August 20
|09:00 - 09:30||Coffee and breakfast rolls|
|09:30 - 09:40||Welcome|
|09:40 - 10:30||Context-bounded Model Checking of Concurrent Software (invited talk)|
|10:30 - 11:00||Fruit Break|
|11:00 - 11:30||Semantic Patch Inference|
|12:00 - 13:15||Lunch|
|13:15 - 13:45||Static Analysis for Dynamically Changing Hardware|
Ulrik Pagh Schultz
|13:45 - 14:15||Modular WCET Analysis of ARM-processors|
Mads Chr. Olesen
|14:15 - 14:45||Coffee Break|
|14:45 - 15:15||Automatic Complexity Analysis Revisited|
|15:15 - 15:45||Automated Termination Analysis of Programs using Term Rewriting|
|15:45 - 16:00||Wrap-Up, Evaluation, and Farewell|
Context-bounded Model Checking of Concurrent Software
Technical University of Dortmund / Fraunhofer Institute for Software and Systems Engineering
The reachability problem for concurrent boolean programs is undecidable. However, Shaz Qadeer and I proved (TACAS 2005) that the context-bounded reachability problem for concurrent pushdown systems is decidable, even in the presence of dynamic thread creation. Such systems can model arbitrary concurrent systems with unbounded recursion computing over bounded data domains. Context-bounded reachability asks whether a state is reachable within a bounded number of context switches. The theoretical challenge for decidability is that (even a bounded number of) context switches could take place in infinitely many configurations of the system.
The decidability result has given rise to a new model checking strategy for concurrent programs called context-bounded model checking (CBMC). With this strategy, it is possible to find all assertion failures in concurrent boolean programs that can manifest within k or fewer thread interleavings, for any bound k. Thus, the strategy is sound up to the bound k.
In my talk I will give the practical motivation for and explain the theory behind CBMC. I will then survey some of the many later and current developments that have followed from the CBMC decidability result.
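The effect of the context bound on the search space can be illustrated with a toy Python sketch (this is not the decidability construction itself, merely the restriction it imposes): enumerate schedules for threads of fixed step counts, discarding any schedule that exceeds the context-switch budget. The thread sizes and bound below are invented for the example.

```python
def interleavings(remaining, k, last=None):
    """Yield schedules (lists of thread ids) that run every thread
    to completion using at most k context switches.
    remaining: steps left per thread; last: thread that ran last."""
    if all(n == 0 for n in remaining):
        yield []
        return
    for t, n in enumerate(remaining):
        if n == 0:
            continue
        switch = last is not None and t != last
        if switch and k == 0:
            continue  # context-switch budget exhausted
        rest = list(remaining)
        rest[t] -= 1
        for tail in interleavings(rest, k - (1 if switch else 0), t):
            yield [t] + tail

# Two threads of two steps each: 6 interleavings in total,
# but only 2 survive a bound of one context switch.
print(len(list(interleavings([2, 2], 99))))  # 6
print(len(list(interleavings([2, 2], 1))))   # 2
```

Even this tiny example shows why bounding helps: the unbounded count grows combinatorially with thread length, while the bounded count stays small.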
Semantic Patch Inference
Department of Computer Science, University of Copenhagen
A key issue in maintaining Linux device drivers is the need to update drivers in response to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes.
We present a tool, spdiff, that identifies common changes made in a set of pairs of files and their updated versions, and extracts a semantic patch performing those changes. Library developers can use our tool to extract a semantic patch based on the result of manually updating a few typical driver files, and then apply this semantic patch to other drivers. Driver developers can use it to extract an abstract representation of the set of changes that others have made.
(Joint work with David Lo, Siau Cheng Khoo, and Julia L. Lawall)
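The inference idea behind the abstract above can be caricatured in a few lines of Python: diff each (original, updated) file pair, abstract concrete identifiers into a metavariable, and keep only the edits common to all pairs. This is a toy sketch under invented driver-like code lines, not spdiff's actual algorithm, which works on semantic patches rather than raw line diffs.

```python
import difflib
import re

def abstract(line):
    """Replace single-letter identifiers with metavariable E."""
    return re.sub(r"\b[a-z]\b", "E", line)

def line_edits(before, after):
    """Set of (old, new) line pairs changed between two versions."""
    edits = set()
    sm = difflib.SequenceMatcher(None, before, after)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":
            edits.update(zip(before[i1:i2], after[j1:j2]))
    return edits

def common_edits(pairs):
    """Edits (after abstraction) performed in every file pair."""
    sets = [line_edits([abstract(l) for l in b], [abstract(l) for l in a])
            for b, a in pairs]
    return set.intersection(*sets)

pairs = [  # two invented driver files and their updated versions
    (["lock(d);", "use(d);", "unlock(d);"],
     ["lock(d);", "use(d, flags);", "unlock(d);"]),
    (["init();", "use(p);", "done();"],
     ["init();", "use(p, flags);", "done();"]),
]
for old, new in common_edits(pairs):
    print("-", old)   # - use(E);
    print("+", new)   # + use(E, flags);
```

The abstraction step is what makes the two textually different edits (`use(d)` vs. `use(p)`) coincide, mirroring the role of metavariables in semantic patches.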
Department of Computer Science, Aarhus University
(Joint work with Simon Holm Jensen and Peter Thiemann)
Static Analysis for Dynamically Changing Hardware
Ulrik Pagh Schultz
Mærsk Mc-Kinney Møller Institute, University of Southern Denmark
A reconfigurable robot is a robot that can change shape. Reconfigurable robots are typically built from multiple modules that, in the case of self-reconfiguration, can manipulate each other to change the shape of the robot. Changing the physical shape of a robot allows it to be adapted to various tasks, for example by changing from a car configuration (best suited for flat terrain) to a snake configuration suitable for other kinds of terrain. Programming reconfigurable robots is, however, complicated by the need to adapt the behavior of each of the individual modules to the overall physical shape of the robot. Moreover, programming modular robots requires careful coordination of the actions of the individual modules to achieve an overall behavior, and in practice involves many difficulties: real-world modules typically have very limited computational resources, may have partial failures in their neighbor-to-neighbor communication abilities, and may spuriously fail during operation.
Recent work in the Modular Robotics Lab at the University of Southern Denmark includes the development of a virtual machine and domain-specific language for programming modular, self-reconfigurable robots. This approach allows the programmer to declaratively control how code is dynamically deployed and activated throughout the structure of the robot, and provides a high-level means of specifying the actions of and interactions between individual modules. A novel feature of this language is that the set of actions used to perform self-reconfiguration constitutes a reversible subset of the language, allowing self-reconfiguration sequences (e.g., changing shape from a car to a snake) to be automatically reversed (thus changing the robot back to the initial shape). The current implementation of the language, however, provides very few static guarantees to the programmer.
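The reversibility property can be sketched in a few lines of Python with invented action names (the actual language and its action set are not shown in the abstract): if every reconfiguration action has a known inverse, undoing a recorded sequence is purely mechanical.

```python
# Hypothetical module actions and their inverses.
INVERSE = {"extend": "retract", "retract": "extend",
           "rotate_cw": "rotate_ccw", "rotate_ccw": "rotate_cw",
           "connect": "disconnect", "disconnect": "connect"}

def reverse_sequence(actions):
    """Undo a reconfiguration: invert each action, in reverse order."""
    return [INVERSE[a] for a in reversed(actions)]

car_to_snake = ["disconnect", "rotate_cw", "extend", "connect"]
print(reverse_sequence(car_to_snake))
# ['disconnect', 'retract', 'rotate_ccw', 'connect']
```

Restricting the reversible subset to actions with well-defined inverses is what makes this table-driven scheme possible.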
This talk will provide an introduction to reconfigurable robots and survey our current work-in-progress on high-level programming languages for reconfigurable robots. I will highlight concrete challenges that could be solved using static analysis as well as provide a general discussion on what issues are primarily of interest for the modular robotics community.
Modular WCET Analysis of ARM-processors
Mads Chr. Olesen
Department of Computer Science, Aalborg University
The ability to determine safe and sharp worst-case execution time (WCET) for processes is very important when scheduling (hard) real-time systems, as it influences the reliability and efficiency of the resulting systems. We present METAMOC, a flexible WCET analysis method based on model checking and static analysis that determines safe and sharp WCETs for processes running on hardware platforms featuring caching and pipelining.
In METAMOC, the application binary is disassembled and the control flow reconstructed. Based on this, a timed automaton model of the binary is constructed and annotated with information extracted from a value analysis of the binary, which computes a safe over-approximation of the memory access addresses. Our value analysis is based on loop-unrolling constant propagation, implemented in the framework of Weighted Pushdown Systems, and as such is able to perform a precise interprocedural analysis.
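The flavor of such a value analysis can be shown with a minimal constant-propagation sketch over invented ARM-like instruction tuples (a toy for straight-line code only; METAMOC's analysis is interprocedural and handles loops via unrolling). Registers hold either a known constant or an "unknown" top value, and load addresses are over-approximated accordingly.

```python
TOP = "unknown"  # abstract top value: could be anything

def const_prop(instrs):
    """Forward constant propagation over straight-line toy code.
    instrs: ('mov', dst, imm) | ('add', dst, src) |
            ('ldr', dst, base_reg, offset).
    Returns the (over-approximated) address of each load."""
    env, addrs = {}, []
    for op in instrs:
        if op[0] == "mov":
            env[op[1]] = op[2]
        elif op[0] == "add":
            a, b = env.get(op[1], TOP), env.get(op[2], TOP)
            env[op[1]] = a + b if TOP not in (a, b) else TOP
        elif op[0] == "ldr":
            base = env.get(op[2], TOP)
            addrs.append(base + op[3] if base != TOP else TOP)
            env[op[1]] = TOP  # loaded value is unknown
    return addrs

prog = [("mov", "r0", 0x1000), ("add", "r0", "r1"),
        ("mov", "r2", 0x2000), ("ldr", "r3", "r2", 8),
        ("ldr", "r4", "r0", 0)]
print(const_prop(prog))  # [8200, 'unknown']
```

A load whose address resolves to "unknown" must conservatively be assumed to touch any cache line, which is exactly why sharpening the value analysis sharpens the WCET.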
The model of the binary is then combined with a model of the processor's pipeline and caches, and model checked using UPPAAL in order to determine the WCET. Experiments on the Mälardalen WCET benchmark programs show that the principles used in the method work very well, and that taking caching, in particular, into account is worth the effort.
(Joint work with Martin Toft, Andreas E. Dalsgaard, Kim Guldstrand Larsen, and René Rydhof Hansen)
Automatic Complexity Analysis Revisited
Department of Communication, Business and Information Technologies, Roskilde University
The aim of automatic complexity analysis is to construct a complexity measure for a program directly from its source code. The complexity measure is an expression that gives an upper bound on the execution time of the program as a function of the size of its input. Ben Wegbreit [Wegbreit 1975] gave the first major presentation of techniques for performing automatic complexity analysis. His approach was based on program transformations, and he was also able to analyze average and minimum time consumption. In previous work [Rosendahl 1989] we presented an automatic complexity analysis for a first-order functional language based on abstract interpretation. Over the last decade, a number of researchers have presented techniques for performing automatic complexity analysis for a number of different languages.
We reexamine the automatic complexity analysis technique and show that it may be used as a common framework for analyzing programs in different languages. We also show that it may be extended to other types of complexity measures, such as heap space usage and maximum stack space usage.
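The core transformation idea, turning a program into a companion program that computes its own cost in terms of input size, can be shown on a naive list reversal (a toy hand-derived example; the cited analyses construct such cost functions automatically).

```python
def rev(xs):
    """Naive list reversal; makes one recursive call per element."""
    return [] if not xs else rev(xs[1:]) + [xs[0]]

def rev_cost(n):
    """Step-counting companion of rev over the input size n:
    the recurrence T(n) = T(n-1) + 1, T(0) = 0, i.e. T(n) = n."""
    return 0 if n == 0 else rev_cost(n - 1) + 1

print(rev([1, 2, 3]))  # [3, 2, 1]
print(rev_cost(10))    # 10
```

The analysis then works on `rev_cost` rather than `rev`, e.g. solving the recurrence to the closed form n, which is the complexity measure for the original program (counting recursive calls, not list-append cost).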
Automated Termination Analysis of Programs using Term Rewriting
Department of Mathematics and Computer Science, University of Southern Denmark
The question of whether a given program terminates for all its inputs is one of the fundamental problems in program verification. Thus it has been researched quite thoroughly in the past, and many techniques and tools have been developed, most notably in the term rewriting and logic programming communities.
However, until very recently, hardly any of these techniques could be used for real programming languages. Instead of starting from scratch and developing completely new techniques, we want to take advantage of existing powerful tools for the automated termination analysis of term rewrite systems (TRSs).
In this talk, we describe recent results which permit the application of existing techniques from term rewriting in order to prove termination of programs.
We discuss two possible approaches:
- Translate programs into TRSs and use existing tools to verify termination of the resulting TRSs.
- Adapt TRS-techniques to the respective programming languages in order to analyze programs directly.
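The first approach can be caricatured in Python on a Haskell-like length function, `len [] = 0; len (x:xs) = 1 + len xs`, rendered as the TRS rules `len(nil) -> zero` and `len(cons(x, xs)) -> s(len(xs))`. The toy rewriter below normalizes terms under a fuel bound; a real prover such as AProVE instead establishes termination symbolically (e.g., via a decreasing measure), which this sketch does not attempt.

```python
def rewrite(term):
    """One rewrite step for the toy 'len' TRS; None = no rule applies.
    Terms are nested tuples/strings, e.g. ('len', ('cons', 'a', 'nil'))."""
    if term == ("len", "nil"):
        return "zero"
    if isinstance(term, tuple) and term[0] == "len" \
            and isinstance(term[1], tuple) and term[1][0] == "cons":
        _, (_, _head, tail) = term
        return ("s", ("len", tail))
    if isinstance(term, tuple):  # otherwise rewrite some subterm
        for i, sub in enumerate(term):
            r = rewrite(sub)
            if r is not None:
                return term[:i] + (r,) + term[i + 1:]
    return None

def normalize(term, fuel=100):
    """Rewrite to normal form; the fuel bound guards the demo loop."""
    while (nxt := rewrite(term)) is not None:
        term = nxt
        fuel -= 1
        assert fuel > 0, "suspected non-termination"
    return term

two = ("len", ("cons", "a", ("cons", "b", "nil")))
print(normalize(two))  # ('s', ('s', 'zero'))
```

Termination of this TRS is witnessed by term size strictly decreasing in the first argument of `len` at every recursive unfolding, which is the kind of argument the TRS tools find automatically.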
We present such approaches for the functional language Haskell and the logic language Prolog. Our results have been implemented in the termination provers AProVE and Polytool. Our resulting termination analyzers are currently the most powerful ones for both Haskell and Prolog.
(Joint work with Danny De Schreye, Jürgen Giesl, Manh Thang Nguyen, Alexander Serebrenik, Stephan Swiderski, and René Thiemann)