Figure 5 - uploaded by Jan Gustafsson
Code with several infeasible paths (main, A, B, C, D, E, F)


Source publication
Conference Paper
Full-text available
Static worst-case execution time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for statically deriving safe and tight WCET bounds is information on the possible program flow through the program. Such flow information...

Context in source publication

Context 1
... example code in Figure 5 contains several types of infeasible paths. The program contains eight scopes: main, foo1, foo2 (the two calls to foo) and their corresponding loop scopes, foo1_L and foo2_L, and bar and its two nested loops, bar_L and bar_L_L, as shown in Figure 6. ...
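A minimal C sketch of a program with the scope structure described above (illustrative only; the actual code in Figure 5 is not reproduced here, and all names and bodies are made up): foo is called twice from main, giving the two call scopes and their loop scopes, and bar contributes the two nested loop scopes.

/* Hypothetical sketch, not the code from Figure 5: main, two calls to foo
   (scopes foo1, foo2) with their loop scopes, and bar with two nested loops. */
int foo(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {   /* scope foo1_L or foo2_L, per call site */
        if (n < 10)
            sum += i;               /* infeasible for the call with n >= 10 */
        else
            sum += 2 * i;           /* infeasible for the call with n < 10 */
    }
    return sum;
}

int bar(int n) {
    int acc = 0;
    for (int i = 0; i < n; i++) {        /* scope bar_L */
        for (int j = 0; j < i; j++)      /* scope bar_L_L */
            acc += j;
    }
    return acc;
}

int main(void) {
    int r = foo(5);    /* call scope foo1 */
    r += foo(20);      /* call scope foo2 */
    r += bar(8);
    return r;
}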

Similar publications

Article
Full-text available
Real-time understanding of surrounding environment is an essential yet challenging task for autonomous driving system. The system must not only deliver accurate result but also low latency performance. In this paper, we focus on the task of fast-and-accurate semantic segmentation. An efficient and powerful deep neural network termed as Driving Segm...
Article
Full-text available
The author proposes to develop special models built on Mealy finite-state machines in order to display information about technological processes at railway stations. The input alphabet of such machines is represented by the real signals (from the floor equipment, the related information systems and the dispatch office personnel). Th...
Article
Full-text available
Guaranteed response time is one of the important issues encountered in designing a real-time system. This problem has been studied with a new view by the AI community, which so far has proposed different paradigms. Anytime algorithms, Approximate processing, Design-to-time Scheduling and Progressive Reasoning are the most popular. All of them rely...
Article
Full-text available
Low-level communication protocols and their timing behavior are essential to developing wireless sensor networks (WSNs) able to provide the support and operating guarantees required by many current real-time applications. Nevertheless, this aspect still remains an issue in the state-of-the-art. In this paper we provide a detailed analysis of a rece...
Article
Full-text available
Structures with an inverse model represent one of the successful solutions for the real-time control of nonlinear processes. The use of these structures imposes solving some specific problems, like determination of the static characteristic of the process, construction of the inverse model or robust control law design. The paper proposes a structure and t...

Citations

... The analysis took 215 seconds and gave a worst-case bound of 5.7 · 10⁶ cycles, which is 40 times higher than the actual WCET. The WCET bound generated by aiT could be improved somewhat by specifying more detailed flow facts [31], but this often requires extensive work by the user and will still never reach the tight WCET provided by the proposed method. Note, however, that aiT can be applied to generic programs while ours is specialized for programs that realize Algorithm 1. Figure 5 exemplifies how a classical measurement-based approach would sample the parameter space if as many measurements (i.e., solved QPs) are allowed as in the proposed method. ...
Preprint
We propose the first method that determines the exact worst-case execution time (WCET) for implicit linear model predictive control (MPC). Such WCET bounds are imperative when MPC is used in real time to control safety-critical systems. The proposed method applies when the quadratic programming solver in the MPC controller belongs to a family of well-established active-set solvers. For such solvers, we leverage a previously proposed complexity certification framework to generate a finite set of archetypal optimization problems; we prove that these archetypal problems form an execution-time equivalent cover of all possible problems; that is, that they capture the execution time for solving any possible optimization problem that can be encountered online. Hence, by solving just these archetypal problems on the hardware on which the MPC is to be deployed, and by recording the execution times, we obtain the exact WCET. In addition to providing formal proofs of the method's efficacy, we validate the method on an MPC example where an inverted pendulum on a cart is stabilized. The experiments highlight the following advantages compared with classical WCET methods: (i) in contrast to classical static methods, our method gives the exact WCET; (ii) in contrast to classical measurement-based methods, our method guarantees a correct WCET estimate and requires fewer measurements on the hardware.
... Proposals that compute WCET by analyzing the program AST bottom-up [42,43] are efficient, since they do not analyze the same program fragment twice, but they usually lead to large overapproximation due to their inability to distinguish among different execution contexts, e.g., call sites of a function. Approaches based on the implicit path enumeration technique (IPET) can account for execution contexts by allowing a more fine-grained encoding of execution paths in an integer linear programming (ILP) formula, but the automatically generated [44] or user-supplied constraints may be too weak and include infeasible paths. Instead, symbolic execution can precisely enumerate the execution paths. ...
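For orientation, the generic IPET formulation referred to here can be written as the following integer linear program (a standard textbook form, not taken from the cited works):

\[
\mathrm{WCET} = \max \sum_{i} c_i \, x_i
\quad \text{s.t.} \quad
x_i = \sum_{e \in \mathrm{in}(i)} f_e = \sum_{e \in \mathrm{out}(i)} f_e,
\qquad
x_{\ell} \le B_{\ell} \, x_{h(\ell)},
\]

where c_i and x_i are the execution time and execution count of basic block i, f_e are edge execution frequencies, and B_ℓ bounds the iterations of loop ℓ relative to the count x_{h(ℓ)} of its entry edge. Infeasible-path information is encoded as additional linear constraints, e.g., x_A + x_B ≤ 1 per entry of the enclosing scope when blocks A and B can never execute in the same run.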
Preprint
We propose a proof-producing symbolic execution for verification of machine-level programs. The analysis is based on a set of core inference rules that are designed to give control over the tradeoff between preservation of precision and the introduction of overapproximation to make the application to real-world code useful and tractable. We integrate our symbolic execution in a binary analysis platform that features a low-level intermediate language enabling the application of analyses to many different processor architectures. The overall framework is implemented in the theorem prover HOL4 to obtain highly trustworthy verification results. We demonstrate our approach by establishing sound execution time bounds for a control loop program implemented for an ARM Cortex-M0 processor.
... Assuming that such information is provided, we developed a technique that is capable of determining the upper bound of any loop. This technique is the result of combining static program analysis, to determine the variables' values at each program point [44], with abstract execution, to automatically derive loop bounds [45]. Figure 5 shows the result of our technique for Example 1 at every program point until the first iteration of the loop concludes. ...
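To make the division of labour concrete, here is a small C sketch (illustrative only; it is not the "Example 1" referenced above, and the names are made up) of the kind of loop such a combined analysis targets: value analysis first narrows the range of the controlling variable, and abstract execution then derives the iteration bound from that range.

/* Illustrative only: a loop whose bound depends on a variable value.
   Value analysis establishes n in [1, 16]; abstract execution then
   derives an upper bound of 16 iterations for the loop below. */
int sum_first(int n) {
    if (n < 1)  n = 1;       /* value analysis: n >= 1 from here on */
    if (n > 16) n = 16;      /* value analysis: n <= 16 from here on */
    int s = 0;
    for (int i = 0; i < n; i++)   /* abstract execution: at most 16 iterations */
        s += i;
    return s;
}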
Article
Full-text available
Optimizing software to become (more) energy efficient is an important concern for the software industry. Although several techniques have been proposed to measure energy consumption within software engineering, little work has specifically addressed Software Product Lines (SPLs). SPLs are a widely used software development approach, where the core concept is to study the systematic development of products that can be deployed in a variable way, e.g., to include different features for different clients. The traditional approach for measuring energy consumption in SPLs is to generate and individually measure all products, which, given their large number, is impractical. We present a technique, implemented in a tool, to statically estimate the worst-case energy consumption for SPLs. The goal is to reason about energy consumption in all products of a SPL, without having to individually analyze each product. Our technique combines static analysis and worst-case prediction with energy consumption analysis, in order to analyze products in a feature-sensitive manner: a feature that is used in several products is analyzed only once, while the energy consumption is estimated once per product. This paper describes not only our previous work on worst-case prediction, for comprehensibility, but also a significant extension of such work. This extension has been realized along two different axes: first, we incorporated a simulated annealing algorithm into our methodology to improve our worst-case energy consumption estimation. Second, we evaluated our new approach on four real-world SPLs, containing a total of 99 software products. Our new results show that our technique is able to estimate the worst-case energy consumption with a mean error percentage of 17.3% and standard deviation of 11.2%.
... Many techniques address worst-case execution time (WCET) analysis with a strong focus on real-time systems [138,139,144,154]. In order to manage the analysis, the typical approach is to limit the analysis of loops to finite bounds while estimating the worst-case execution time for the system. ...
Thesis
Full-text available
Differential testing is an important part of software quality assurance, with the goal of generating test inputs that expose differences in the behavior of the software. Such differences can occur between two execution paths (1) in different program versions, but also (2) within the same program. In the first case, different program versions are examined with the same input, while in the second case the same program is analyzed with different inputs. Regression analysis, side-channel analysis, maximizing the execution cost of a program, and robustness analysis of neural networks are typical examples of differential software analyses. A particular challenge lies in the efficient analysis of multiple program paths (also across multiple program variants). Existing approaches are usually not (specifically) designed to precisely provoke differing behavior, or they are limited to a part of the search space. This work introduces the concept of hybrid differential software testing (HyDiff): a hybrid analysis technique for generating inputs that reveal semantic differences in software. HyDiff consists of two components running in parallel: (1) a search-based approach that generates inputs efficiently, and (2) a systematic analysis that can also reach complex program behavior. The search-based component uses fuzzing guided by differential heuristics. The systematic analysis is based on dynamic symbolic execution, which can incorporate concrete inputs into the analysis. HyDiff is evaluated in several experiments carried out for specific applications in the area of differential testing. The results show that HyDiff generates test inputs effectively and performs significantly better than its individual components.
... Given a good understanding of C_Lo and C_Hi based purely on measurements, the next stage is to recognise that engineers may wish to feed the data into hybrid analysis. A key issue with the results of hybrid analysis is the potential pessimism caused by infeasible paths [14]. An infeasible path is defined as a path containing groups of basic blocks that cannot be executed after another group of basic blocks has executed. ...
... Previous works to determine infeasible paths by static analysis, e.g. [14], place significant restrictions on developers such as the use of bespoke compilers. The two conditions for infeasible paths are as follows: ...
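A minimal C illustration of such an infeasible path (a generic example, not taken from [14]): the two guards are mutually exclusive, so no single execution can pass through both guarded blocks, and any path that includes both is infeasible.

/* Generic illustration of an infeasible path: the guards on x are
   mutually exclusive, so a path executing both block A and block B
   can never occur at run time. */
int classify(int x) {
    int r = 0;
    if (x > 100)
        r += 1;    /* block A: executed only when x > 100 */
    if (x < 0)
        r -= 1;    /* block B: executed only when x < 0 */
    return r;      /* any path through both A and B is infeasible */
}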
... In the example, the constant SAMPLES is an upper bound for the nodes inside the loop's body. This constant is necessary for the loop's related flow constraint. Such a loop bound must have a statically known value, which is ideally provided by the loop-bound analysis [65,104]. If this analysis is not able to find an upper bound, the user of the static analyzer is required to determine an upper bound with the help of application-specific knowledge. ...
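A minimal sketch of the situation described (SAMPLES is the identifier used in the excerpt; the surrounding code and the constraint notation are assumptions, not taken from the cited work): the statically known constant bounds the loop, and the analyzer turns it into a flow constraint relating the body's execution count to the loop entry count.

#define SAMPLES 64   /* statically known bound used by the loop-bound analysis */

int average(const int *buf) {
    int sum = 0;
    /* Flow constraint derived from the bound (generic notation):
       x_body <= SAMPLES * x_entry */
    for (int i = 0; i < SAMPLES; i++)
        sum += buf[i];
    return sum / SAMPLES;
}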
Thesis
Full-text available
The increasing number of embedded systems spawns applications with critical constraints in both execution time and energy consumption. For their reliable operation, these energy-constrained real-time systems require bounds of their tasks' execution time as well as energy consumption in order to guarantee the completion within given resource budgets. While traditional worst-case program-code-analysis tools perform well in determining the worst-case execution time of tasks, they are not directly applicable to the problem of energy consumption: For the determination of energy-consumption bounds, it is insufficient to only consider real-time scheduling priorities, because any task can temporarily activate devices (e.g., transceivers) and thereby contribute to the whole system's power demand. This power demand, in turn, influences the energy consumption of all tasks in the system, irrespective of real-time priorities. Additional to the missing approaches for energy-consumption bounds, static worst-case analyzers, in general, come with the fundamental problem of unknown accuracies in the reported worst-case bounds. Since the actual worst case is not available from any benchmark program, it is impossible to compare the analyzer's reported bound and assess its pessimism, which inherently prevents comprehensive evaluations and validations of analysis techniques. This thesis addresses these problems by first presenting an approach to determining energy-consumption bounds while accounting for temporarily active devices, the fixed-priority real-time scheduling, synchronous task activations, and asynchronous interrupts. Since the analysis approach context-sensitively includes all possible interferences, it eventually determines the worst-case response energy consumption (WCRE) of tasks. The approach initially decomposes the target system under consideration of scheduling- and energy-relevant events. While relying on the decomposed representation, the approach then explores all possible system-wide program paths. Knowledge of these explored paths eventually allows determining WCRE bounds with means of sound problem formulations. To address the problem of the bounds' analysis pessimism, this thesis introduces a novel approach to assessing the accuracy of analyzers based on automatically generated benchmarks. This benchmark-generation algorithm combines small program patterns in a way that the worst case is available together with the woven benchmark. Knowledge of the generated benchmark's actual worst case then serves as a baseline for comprehensive evaluations and validations. The worst-case analysis approaches and their validation are the foundation to enable safe schedules. To complete the necessary components for reliably operating energy-constrained real-time systems, this thesis presents an operating-system kernel that utilizes worst-case resource bounds for time and energy. The kernel's scheduling approach dynamically reacts to scenarios where one resource becomes more critical than the other. This approach is aware of said analysis pessimism and effectively makes use of it while still guaranteeing the execution of critical tasks within statically determined time and energy bounds.
... Providing too large bounds leads to a large overestimation, and too small bounds may yield an unsafe estimate. As a result, providing safe and tight bounds has become a research field on its own with a wide range of different approaches, e.g., using abstract execution [27], refinement invariants [25] and pattern matching [30]. (2) Existing approaches predominantly implement their analyses at machine code level, where the high-level information from the original program is hard to extract. ...
Article
Full-text available
Estimating the Worst-Case Execution Time (WCET) of an application is an essential task in the context of developing real-time or safety-critical software, but it is also a complex and error-prone process. Conventional approaches require at least some manual inputs from the user, such as loop bounds and infeasible path information, which are hard to obtain and can lead to unsafe results if they are incorrect. This is aggravated by the lack of a comprehensive explanation of the WCET estimate, i.e., a specific trace showing how WCET was reached. It is therefore hard to spot incorrect inputs and hard to improve the worst-case timing of the application. Meanwhile, modern processors have reached a complexity that refutes analysis and puts more and more burden on the practitioner. In this article we show how all of these issues can be significantly mitigated or even solved, if we use processors that are amenable to WCET analysis. We define and identify such processors, and then we propose an automated tool set which estimates a precise WCET without unsafe manual inputs, and also reconstructs a maximum-detail view of the WCET path that can be examined in a debugger environment. Our approach is based on Model Checking, which however is known to scale badly with growing application size. We address this issue by shifting the analysis to source code level, where source code transformations can be applied that retain the timing behavior, but reduce the complexity. Our experiments show that fast and precise estimates can be achieved with Model Checking, that its scalability can even exceed current approaches, and that new opportunities arise in the context of "timing debugging".
... However, the price that has been paid ever since is that of a harder analysis and overestimation. Semantic properties, such as type and range information of variables, are obfuscated or "compiled away", and need to be reconstructed to obtain precise estimates [4,13,18]. Today, despite its advantages, source-level WCET analysis is rarely applied due to this mapping problem. However, when it is applied, timing annotations are generated by tools that reverse-engineer the transformations of a specific compiler version [2,17], or even require compiler extensions [24]. ...
... with line and disc yielding the line number and the discriminator of a debug location, respectively, l_min(v) and l_max(v) being these locations as defined in Sec. 2.3, and D_i(l) being the discriminator label from Eq. (13). ...
Conference Paper
In this paper we discuss the problem of relating machine instructions to source-level constructs, and how it has been addressed in the domains of Virtual Prototyping (VP) and Worst-Case Execution Time (WCET) analysis. It has been handled in different ways, although the goals and requirements of both domains are not far from one another. This paper shows that there exists a mutual benefit in exchanging solutions between the two research domains, by demonstrating the applicability and utility of VP methods for WCET analysis, and highlighting their shortcomings. After an evaluation of existing methods, we carefully rework and combine them into a sound and generic mapping algorithm for source-level WCET analysis. As a result, we obtain WCET estimates that outperform classic binary analyzers especially under moderate compiler optimization. Our approach is based on hierarchical flow matching, control-dependency- and dominator-homomorphic maps, and dominator lumping to soundly fill the gaps in the mapping. WCET estimation is performed using Model Checking, which maximally exploits the information available in the source, and highlights remaining weaknesses in the mapping methods. Last but not least, we discuss further chances of synergy between both research communities which could enable support for more complex microarchitectures with caches, pipelines and speculative execution in both source-level WCET analysis and VP.
... To generalize such a semantic solution to nested loops, one comes across the very hard problem of computing a semantic summary of the functionality of the inner loop nest, to be used in the analysis of the outer loop. Despite big strides in program analysis techniques [10,19], this type of semantic summary computation remains limited to classes of loops whose invariants (summaries) are within decidable theories, and even then, mostly proof-driven rather than summarizing full functionality. ...
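A small C illustration of the simplest case of such a summary (a generic example, not from the cited works): the inner loop admits a closed-form summary, so the outer loop can be analyzed or parallelized against the summary instead of the inner loop body.

/* Generic example of an inner-loop summary: for each i, the inner loop
   computes inner == i*(i-1)/2, a closed form that can replace the inner
   loop when reasoning about (or parallelizing) the outer loop. */
long triangular_sums(int n) {
    long total = 0;
    for (int i = 0; i < n; i++) {
        long inner = 0;
        for (int j = 0; j < i; j++)   /* summary: inner = i*(i-1)/2 */
            inner += j;
        total += inner;
    }
    return total;
}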
Preprint
We propose a methodology for automatic generation of divide-and-conquer parallel implementations of sequential nested loops. We focus on a class of loops that traverse read-only multidimensional collections (lists or arrays) and compute a function over these collections. Our approach is modular in that the inner loop nest is abstracted away to produce a simpler loop nest for parallelization. Then, the summarized version of the loop nest is parallelized. The main challenge addressed by this paper is that, to perform the code transformations necessary in each step, the loop nest may have to be augmented (automatically) with extra computation to make the abstraction and/or parallelization tasks possible. We present theoretical results to justify the correctness of our modular approach, and algorithmic solutions for automation. Experimental results demonstrate that our approach can parallelize highly non-trivial loop nests efficiently.
... The static analysis approach includes three main classes: structure-based, path-based, and techniques using implicit path enumeration (IPET). Both the path-based approach [9] and the IPET [10] are limited in considering the OS in the execution process of tasks. The structure-based approach analyzes the time by traversing the syntax tree of tasks while considering the OS [11]. ...
... For the path-based approach [9], the execution time is determined by analyzing the paths in the task. For the IPET [10], the control flow and the basic-block execution times are combined into constraints to analyze the execution time. The above two approaches are limited in considering the OS functions in the execution process of a task. ...
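For reference, the structure-based (tree-based) analysis mentioned above combines node times bottom-up with rules of roughly this textbook form (generic timing-schema rules, not taken from [11]):

\[
T(S_1; S_2) = T(S_1) + T(S_2), \qquad
T(\texttt{if}) = T(\mathit{cond}) + \max\big(T(\mathit{then}), T(\mathit{else})\big), \qquad
T(\texttt{loop}) = T(\mathit{cond}) + n \cdot \big(T(\mathit{body}) + T(\mathit{cond})\big),
\]

where n is the loop bound. The IPET instead expresses the same information as the linear constraints sketched earlier.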
Article
Full-text available
Currently, more and more application-specific operating systems (ASOS) are applied in the domain of real-time embedded systems (RTES). With the development of microkernel technique, the ASOS is usually customized based on a microkernel using the configurable policy. Evaluating the timing requirements of a RTES based on the ASOS is helpful to guide the designer towards the choice of the most appropriate configuration. Modeling and analyzing the time requirements for such system in the early design stage are essential to avoid redesigning or recoding the system at a later stage. However, the existing works are insufficient to support the modeling for both the specific domain of microkernel-based RTES and the variability of the configurable policy, as well as a general analysis for the various configurations. To solve these problems, this paper presents a modeling and timing analysis framework (MTAF) for the microkernel-based RTES. Our main contributions are twofold: (1) proposing a domain-specific language (DSL) for the timing analysis modeling of the microkernel-based RTES; then, we define and implement this DSL as a UML profile. (2) proposing a static timing analysis approach for the RTES design modeled by the DSL, where a timing analysis tree and uniform execution rules are defined to analyze the variability in a general way. In the case study, we take the scheduling policy as an example to show the use of our framework on a real-life robot controller system.