Article

A General-Purpose Algorithm for Analyzing Concurrent Programs


Abstract

Developing and verifying concurrent programs raises several difficult questions: how processes are synchronized, which parts of a program may run in parallel, and how errors in the synchronization structure can be detected. A static analysis algorithm that addresses these questions is presented here. Though the research focuses on Ada, the results can be applied to other concurrent programming languages such as CSP.
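The kind of analysis the abstract describes, enumerating the reachable "concurrency states" of a set of tasks, can be illustrated with a toy version: each task is a small automaton over its synchronization points, a matching entry-call/accept pair (a rendezvous) advances two tasks at once, and a worklist search enumerates the reachable combined states. The task definitions and names below are hypothetical; this is a sketch of the general idea, not Taylor's actual algorithm.

```python
from itertools import product
from collections import deque

# Each task is a map: local_state -> list of (action, next_state).
# Actions "call X" and "accept X" must pair up (rendezvous); "tau" is local.
tasks = {
    "client": {0: [("call log", 1)], 1: [("tau", 2)], 2: []},
    "server": {0: [("accept log", 1)], 1: [("tau", 0)]},
}

def successors(state):
    """Yield concurrency states reachable in one step from `state`."""
    names = list(tasks)
    # Local (tau) moves by a single task.
    for i, n in enumerate(names):
        for act, nxt in tasks[n][state[i]]:
            if act == "tau":
                yield state[:i] + (nxt,) + state[i + 1:]
    # Rendezvous: a matching call/accept pair moves both tasks at once.
    for i, j in product(range(len(names)), repeat=2):
        if i == j:
            continue
        for a1, n1 in tasks[names[i]][state[i]]:
            for a2, n2 in tasks[names[j]][state[j]]:
                if a1.startswith("call ") and a2 == "accept " + a1[5:]:
                    s = list(state)
                    s[i], s[j] = n1, n2
                    yield tuple(s)

def reachable():
    """Worklist exploration of the concurrency-state graph."""
    start = tuple(0 for _ in tasks)
    seen, todo = {start}, deque([start])
    while todo:
        s = todo.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

print(sorted(reachable()))
```

A real analysis would additionally flag error states, e.g. a combined state in which every task waits at a rendezvous that no partner can ever complete.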


... These concepts were later generalized to discrete state transition systems in [CC84] by the same authors. Static analysis of the synchronization structure of concurrent programs was first considered by Taylor [Tay83a]. He presents an algorithm which can approximate which parts of a synchronized program may run in parallel to each other, similar to what we have done in Algorithm 5. ...
... Again, we can only present fragments of this field of research due to the amount of literature published on it. To precisely capture the effects of parallel programs, like in the work of Taylor [Tay83a], a Parallel Execution Graph (PEG) is needed. As an example, this is demonstrated for the case of Message Passing Interface (MPI)-Analysis in [GKS+11]. ...
... In addition, we can also use them to prune the PEG as we have done in Section 5.5.5, since a task which is waiting for synchronization cannot progress until a partner has arrived to complete the rendez-vous. This idea has already been used in [Tay83a] and similar to there, it can be used on top of the timing information to further prune the PEG. ...
Thesis
Full-text available
During the design of safety-critical real-time systems, developers must be able to verify that a system shows a timely reaction to external events. To achieve this, the Worst-Case Execution Time (WCET) of each task in such a system must be determined. The WCET is used in the schedulability analysis in order to verify that all tasks will meet their deadlines and to verify the overall timing of the system. Unfortunately, the execution time of a task depends on the task’s input values, the initial system state, the preemptions due to tasks executing on the same core and on the interference due to tasks executing in parallel on other cores. These dependencies render it close to impossible to cover every feasible timing behavior in measurements. It is preferable to create a static analysis which determines the WCET based on a safe mathematical model. The static WCET analysis tools which are currently available are restricted to a single task running uninterruptedly on a single-core system. There are also extensions of these tools which can capture the effects of multi-tasking, i.e., preemptions by higher-priority tasks, on the WCET for certain well-defined scenarios. These tools are nowadays already used to verify industrial real-time software, e.g., in the automotive and avionics domain. Up to now, there are no mature tools which can handle the case of parallel tasks on a multi-core platform, where the tasks potentially interfere with each other. This dissertation presents multiple approaches towards a WCET analysis for different types of multi-core systems. They are based upon previous work on the modeling of hardware and program behavior but extend it to the treatment of shared resources like shared caches and shared buses. We present multiple methods of integrating shared bus analysis into the classical WCET analysis framework and show that time-triggered bus arbitration policies can be efficiently analyzed with high precision. 
In order to get precise WCET estimations for the case of shared caches, we present an efficient analysis of interactions in parallel systems which utilizes timing information to cut down the search space. All of the analyses were implemented in a research C compiler. Extensive evaluations on real-time benchmarks show that they are up to 11.96 times more precise than previous approaches. Finally, we present two compiler optimizations which are tailored towards the optimization of the WCET of tasks in multi-core systems, namely an evolutionary optimization of shared resource schedules and an instruction scheduling which uses WCET analysis results to optimally place shared resource requests of individual tasks. Experiments show that the two combined optimizations are able to achieve an average WCET reduction of 33%.
... Parallel program analysis Static analysis of the synchronization structure of concurrent programs was first considered by [17] where the analysis of the "concurrency state" of the system and the notion of a parallel execution graph was first established. We build our work on this, though the analysis in [17] worked at a far more coarse-grained level. ...
... Parallel program analysis Static analysis of the synchronization structure of concurrent programs was first considered by [17] where the analysis of the "concurrency state" of the system and the notion of a parallel execution graph was first established. We build our work on this, though the analysis in [17] worked at a far more coarse-grained level. A reference approach to bit-vector-based abstract interpretation on programs with explicit fork-join parallelism is given in [9]. ...
... In addition we can also use them to prune the PEG as we have done in Section 4, since a task which is waiting for synchronization cannot progress until a partner has arrived to complete the rendez-vous. This idea has already been used in [17] and similar to there, it can be used on top of the timing information to further prune the PEG. ...
Conference Paper
Full-text available
In the verification of safety-critical real-time systems, the problem of determining the worst-case execution time (WCET) of a task is of utmost importance. Safe formal methods have been established for solving the single-task, single-core WCET problem. The de-facto standard approach uses abstract interpretation to derive basic block execution times and a combinatorial path analysis which derives the longest path through the program. WCET analyses for multi-core computers have extended this methodology by assuming that shared resources are partitioned in either time or space and that therefore each core can still be analyzed separately. For real-world multi-cores this assumption is often not true, making the classic WCET analysis approach either inapplicable or highly pessimistic. To overcome this, we present a new technique to explore the interleavings of a parallel task system as well as an exclusion criterion to prove that certain interleavings can never occur. We show how this technique can be integrated into existing WCET analysis approaches and finally provide results for the application of this new analysis type to a collection of real-time benchmarks, where average WCET reductions of 32% were observed.
... An analysis algorithm that addresses problems such as synchronization (SYN) in processes and error detection in the SYN structure was proposed by R.N. Taylor [13]. The author addressed a method for constructing a tool to ensure that the system will never enter an infinite wait and will also ensure the absence of any undesirable parallelism. ...
... The author addressed a method for constructing a tool to ensure that the system will never enter an infinite wait and will also ensure the absence of any undesirable parallelism. Only static analysis was used in this work [13], with no properties of an actual Ada program, whereas later works [14,15] provided a run-time analysis and also dealt with an actual Ada program. For solving the problems involved in the deterministic execution of a concurrent Ada program, Richard Carver and K. C. Tai [14,15] described a language-based approach. ...
Article
Concurrent programs are replacing sequential programs as they exploit the true capabilities of multicore architectures. The extensive use of multicore systems and multithreaded paradigms warrants more attention to the testing of concurrent programs. Testing concurrent programs is not a new field: it has been more than 40 years since the first problems related to testing concurrent programs were addressed by researchers. The field covers various domains, which include concurrency problems, testing approaches, techniques, graphical representations, tools, and subject systems. This paper aims at providing an overview of research in the domain of testing concurrent programs by classifying it into eight categories: (a) reachability testing, (b) structural testing, (c) model-based testing, (d) mutation-based testing, (e) slicing-based testing, (f) formal methods, (g) random testing, and (h) search-based testing. The survey is focused on the techniques applied, methodologies followed, and tools used in these aforementioned approaches. Furthermore, gaps are also identified in the different approaches. The paper concludes with a consolidation of various testing parameters along with future directions.
... The control-flow graphs of individual processes are modified to highlight the synchronization structure, abstracting away other details. Subsequently, the complete state-transition graph of the execution, known as the reachability graph, is constructed, thereby modeling the concurrent program as the set of all possible execution sequences [5,6]. Traditional reachability analysis suffers from combinatorial explosion, i.e., the number of states generated for analysis increases exponentially with the number of concurrent threads of execution. ...
... The practical utility of the apportioning technique can be seen from the following observations: The complexity (number of states generated) of traditional reachability analysis [5,6] is O(p^T), where T is the number of threads and p is the number of interactions for any thread. Extending such techniques to concurrent object-oriented programs by performing additional analysis for each class results in a complexity of O(c(p_l)^m + p^T), where c is the number of classes, m the number of methods in each class, and p_l is the number of LAP in any method. ...
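A quick back-of-the-envelope calculation (with arbitrary illustrative numbers) shows why the apportioned bound can be dramatically smaller than the traditional one: the global graph only has to cover the few analysis points classified as global.

```python
# Illustrative comparison of the two complexity bounds quoted above.
# Values are arbitrary; p_g is the (smaller) number of global points per
# thread left after apportioning classifies most points as local.
p, T = 10, 5            # interactions per thread, number of threads
c, m, p_l = 8, 4, 3     # classes, methods per class, local points per method
p_g = 2                 # global points per thread after apportioning

traditional = p ** T                     # O(p^T) states
apportioned = c * p_l ** m + p_g ** T    # O(c*(p_l)^m + p_g^T) states
print(traditional, apportioned)
```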
Article
Reachability analysis is an important and well-known tool for static analysis of critical properties in concurrent programs, such as freedom from deadlocks. Direct application of traditional reachability analysis to concurrent object-oriented programs has many problems, such as incomplete analysis for reusable classes (not safe) and increased computational complexity (not efficient). Apportioning is a technique that overcomes these limitations and enables safe and efficient reachability analysis of concurrent object-oriented programs. Apportioning is based upon a simple but powerful idea of classification of program analysis points as local (having influence within a class) and global (having possible influence outside a class). Given a program and a classification of its analysis points, reachability graphs are generated for: (i) an abstract version of each class in the program having only local analysis points and (ii) an abstract version of the whole program having only global analysis points. The error to be checked is decomposed into a number of sub-properties, which are checked in the appropriate reachability graphs. In this paper we present the development of ARA, an apportioning based tool for analysis of concurrent Java programs. Some of the main features of ARA are: varying the classification of analysis points, distributing the generation of reachability graphs over several machines, and the use of efficient data structures, to further reduce the time required for reachability analysis. We also present our experience with using ARA for the analysis of several programs.
... Concurrent programs often prove more challenging to verify than sequential ones, as the state space explodes easily, unless processes follow very closely what the others are doing or have completely decorrelated executions. Verification of such programs can be traced back to the work of Taylor [12], and has been the subject of a variety of approaches, which reflect the numerous possible modelisations of distributed systems. Looking for an error trace is typically Pspace-hard when processes are finite-state systems, i.e., the cost of exploring an exponential number of configurations. ...
Preprint
Full-text available
We study the verification of distributed systems where processes are finite automata with access to a shared pool of locks. We consider objectives that are boolean combinations of local regular constraints. We show that the problem, PSPACE-complete in general, falls in NP with the right assumptions on the system. We use restrictions on the number of locks a process can access and the order in which locks can be released. We provide tight complexity bounds, as well as a subcase of interest that can be solved in PTIME.
... The second approach consists of automatically extracting a model out of a software application by statically analyzing its code and abstracting away details, applying traditional model checking to analyze this abstract model, and then mapping abstract counterexamples (if any) back to the code. The investigation of this abstraction-based second approach can be traced back to early attempts to analyze concurrent programs written in concurrent programming languages such as Ada (e.g., [44,100,104,142]). Other relevant work includes static analyses geared towards analyzing communication patterns in concurrent programs (e.g., [43,46,147]). ...
Chapter
Model checking and testing have a lot in common. Over the last two decades, significant progress has been made on how to broaden the scope of model checking from finite-state abstractions to actual software implementations. One way to do this consists of adapting model checking into a form of systematic testing that is applicable to industrial-size software. This chapter presents an overview of this strand of software model checking. © Springer International Publishing AG, part of Springer Nature 2018. All rights reserved.
... Bristow et al. [16] build an inter-process precedence graph to indicate the synchronisation-imposed execution ordering among processes. Taylor [17] models a concurrency graph based on a reduced flow graph representation of every task. Recently, as parallel platforms become increasingly prevalent, a number of studies have appeared, introducing a variety of advanced techniques for discovering MHP statements in a program. ...
... CRA techniques were originally proposed to remedy the problem of traditional reachability analysis techniques [2,28,33] which compose the global system representation in a single step. Yeh [36] described several case studies which suggested similar performance between a technique of compositional reachability analysis and that of constraint expressions [3]. ...
... The operations (i.e. send/receive) use rendezvous-like synchronisation mechanisms similar to the ones found in Ada [62,63]. Thus, when a communication operation is to take place, the first of the two processes to become ready for the rendezvous (let us say the sender) must wait until its partner (the receiver) is also ready. ...
Article
Full-text available
Various approaches have been proposed to the problem of assembling X-machines (also known as Eilenberg X-machines) into a communication system. In this report, these approaches are presented and unified within the stand-alone X-machine notation. The models are analysed highlighting those aspects that seem to be more relevant for specifying distributed (testable) systems. From the testing perspective, it has been proved that the Holcombe-Ipate testing approach (SMXT), developed originally for stream X-machines, can be applied to some of these communicating systems. For one of the approaches, the CXM-system, the formalism needs to be modified if the testing method will be used and these modifications are discussed. Another of these models, the CSXMS, is surveyed and all its variations are studied in order to provide the necessary conditions for testing it. A different model, the CSXMS-c, that allows a synchronous mechanism for message-passing, is also analysed. The results of this show the correct implementation of the construct and the passing of messages. A methodology for building communicating X-machines from stand-alone X-machines is also included in this report. This methodology, the MSS, is approached here by means of a modified version of the multiple-stream X machines (M-SXM). These systems, the CM-SXMS, are defined in terms of a graph, where the vertices model the components, and the edges correspond to streams that are shared between them. It seems that the CSXMS-c, CSXMS and CM-SXMS can respectively model the distributed computing models of synchronous, semi-synchronous and asynchronous message-passing. 
Therefore, if the SXMT can be extended and applied to all of the communicating X-machine systems, then it could be possible to test distributed algorithms with different message-passing structures, but this will require future work.

Keywords: X-machines, communicating X-machines, communicating stream X-machines systems (CSXMS), CSXMS testing-variant, simple CSXMS, CSXMS with channels, modular specification of systems using communicating X-machines, communicating multiple-stream X-machines systems, formal specification, distributed systems, testing.
... Taylor [21] has developed an algorithm for statically analyzing the synchronous communication in a concurrent program. Taylor's algorithm matches all possible synchronous communications for the programming language Ada [22]. The following is a discussion of Taylor's technique as modified (by us) to deal with communicating sequential processes (CSP) [23]. ...
... Because of the need to strike a balance between precision and efficiency, most flow analysis techniques are approximate methods. The use of flow graphs was first presented by Taylor [34]. The program flow graph, annotated with synchronization constraints, was used to generate a state-transition graph representing the concurrency history. ...
Article
Full-text available
We present a flow analysis technique for detecting unreachable states and actions in concurrent systems. It is an enhancement of the approach by Cheung and Kramer. Each process of a concurrent system is modeled as a finite state machine, whose states represent process execution states and whose transitions are labeled by actions. We construct dependency sets incrementally and eliminate spurious paths by checking the execution sequences of actions. We prove mathematically that our algorithm can detect more unreachability faults than the well-known Reif/Smolka and Cheung/Kramer algorithms. The algorithm is easy to manage and its complexity is still polynomial to the system size. Case studies on two commonly used communication protocols show that the technique is effective.
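The abstract's setting, each process a finite state machine whose transitions are labeled by actions, can be illustrated with a minimal unreachability check over the synchronous product of two machines. This is a simplified sketch (hypothetical machines, shared-action synchronization only), not the Cheung/Kramer algorithm or its enhancement.

```python
from collections import deque

# Each process is an FSM: {state: [(action, next_state)]}, initial state 0.
# Actions in SHARED must be taken jointly; all others are local moves.
SHARED = {"req", "ack", "nak"}
fsm_a = {0: [("req", 1)], 1: [("ack", 0)]}
fsm_b = {0: [("req", 1)], 1: [("done", 2), ("nak", 3)], 2: [("ack", 0)], 3: []}

def reachable_product(a, b):
    """Breadth-first search over the synchronous product of a and b."""
    seen, todo = {(0, 0)}, deque([(0, 0)])
    while todo:
        sa, sb = todo.popleft()
        moves = []
        for act, na in a[sa]:
            if act in SHARED:  # joint move: b must offer the same action
                moves += [(na, nb) for bact, nb in b[sb] if bact == act]
            else:              # a's local move
                moves.append((na, sb))
        for act, nb in b[sb]:
            if act not in SHARED:  # b's local move
                moves.append((sa, nb))
        for s in moves:
            if s not in seen:
                seen.add(s)
                todo.append(s)
    return seen

states = reachable_product(fsm_a, fsm_b)
unreachable_b = set(fsm_b) - {sb for _, sb in states}
print(unreachable_b)  # fsm_b's state 3 is unreachable: fsm_a never offers "nak"
```

An unreachable local state like this corresponds to an action that can never execute, which is exactly the class of fault the described technique reports.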
Article
Video game systems are known for their complexity, concurrency, and non-determinism, which makes them prone to challenging tacit bugs. Video game development is costly and the corresponding verification process is tiresome. Testing the nondeterministic and concurrent behaviors of video game systems is not only crucial but also challenging, especially when the game state space is huge. Accordingly, typical software testing approaches are neither suitable nor effective for finding related bugs. Novel automated approaches to support video game testing are needed. This problem has caught researchers' attention recently. Approaches found in the literature have tried to address two subproblems: modeling and uncovering bugs. Colored Petri nets are known to support modeling and verifying concurrent and nondeterministic systems. Search approaches have been used in the literature to check the availability of faulty states through exploring state spaces. However, these approaches tend to lack adaptability to test different video game systems due to the limitations of the defined fitness functions, in addition to difficulties in searching huge state spaces due to exhaustive and unguided search. Automated approaches that guide and direct the process of bug finding are therefore needed. Thus, in this study we address this problem by presenting a solution for automated software testing using the collaborative work of two genetic algorithm (i.e. co-evolutionary) agents, where our approach is applied to colored Petri net representations of the software workflow. The results of our experiments have shown the potential of the proposed approach in effectively finding bugs automatically.
Article
A deadlock-preventive strategy in a consumable resource (CR) single-processor environment, within the context of operating systems, is proposed, aimed at preventing indefinite postponement/starvation of running processes. The crux of the approach is to dynamically enforce the priorities of the running processes so as to rectify an undesirable situation amenable to deadlock. When the running processes sharing the processor belong to different priority classes, a priority preemption action is taken. When they belong to the same class, the corresponding processor-sharing rates are varied. This means that the response functional variations are "enforced" as desired. This is suggested to be done indirectly via "enforcing" the change in some functional parameters relevant to running processes, known as "policy functions". Policy functions are interrelated with the response functions and priorities of running processes. The aim of the scheduling approach is to guarantee that the running processes are "speed-consistent/compatible", mostly during critical intervals of time. During these intervals, consumable resources may be exchanged among executing processes; that is, requested, granted, then released. Priorities of running processes are varied by varying the corresponding policy functions. Policy function parameters are tuned such that the resultant time responses lie within predetermined performance bounds. The overall scheduling approach is easy and straightforward, and inflicts negligible overhead losses on the system.
Article
A test oracle for a concurrent program is a method for checking whether an observed behavior of the program is consistent with the program's specification. Abstract specification models for message-passing concurrent programs are often expressed as, or can be translated into, a labeled transition system (LTS). Stateful techniques for generating test oracles from LTS specification models are often limited by the state explosion problem. In this paper, we present a stateless technique for generating global and local test oracles from LTS specification models. A global test oracle uses tests generated from a global LTS model of the complete system to verify a global implementation relation between the model of the system and its implementation. Global test oracles, however, may require too many test sequences to be executed by the implementation. A local test oracle verifies local implementation relations between individual component models and their implementation threads. Local tests are executed against individual threads, without testing the system as a whole. Verifying the local implementation relations implies that a corresponding global implementation relation holds between the complete system model and its implementation. Empirical results indicate that using local test oracles can significantly reduce the number of executed test sequences.
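In its simplest form, an oracle of the kind described checks an observed behavior by replaying its event sequence over the LTS model's transitions. The sketch below uses a deterministic, hypothetical model; the paper's global/local implementation relations and nondeterministic models are considerably richer.

```python
# A toy LTS: state -> {event: next_state}. The oracle replays an observed
# trace and reports whether the model can exhibit it.
lts = {
    "idle":  {"open": "ready"},
    "ready": {"send": "ready", "close": "idle"},
}

def oracle(trace, start="idle"):
    """Return True iff `trace` is a path in the LTS from `start`."""
    state = start
    for event in trace:
        nxt = lts.get(state, {}).get(event)
        if nxt is None:
            return False  # observed behavior not allowed by the model
        state = nxt
    return True

print(oracle(["open", "send", "send", "close"]))  # consistent trace
print(oracle(["open", "close", "send"]))          # "send" after close: rejected
```

The stateless flavor of the paper avoids materializing the full global LTS; here the "global" model is small enough to write down directly.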
Article
In the verification of safety-critical real-time systems, the problem of determining the worst-case execution time (WCET) of a task is of utmost importance. Safe formal methods have been established for solving the single-task, single-core WCET problem. The de facto standard approach uses abstract interpretation to derive basic block execution times and a combinatorial path analysis which derives the longest path through the program. WCET analyses for multi-core computers have extended this methodology by assuming that shared resources are partitioned in either time or space and that therefore each core can still be analyzed separately. For real-world multi-cores this assumption is often not true, making the classic WCET analysis approach either inapplicable or very imprecise. To overcome this, we present a technique to explore the interleavings of a parallel task system as well as an exclusion criterion to prove that certain interleavings can never occur. We show how this technique can be integrated into existing WCET analysis approaches and finally show that the average WCET of a collection of real-time benchmarks could be reduced by a factor of up to 11.96 using this new analysis type.
Article
A large body of data-flow analyses exists for analyzing and optimizing sequential code. Unfortunately, much of it cannot be directly applied on parallel code, for reasons of correctness. This paper presents a technique to automatically, aggressively, yet safely apply sequentially-sound data-flow transformations, without change , on shared-memory programs. The technique is founded on the notion of program references being "siloed" on certain control-flow paths. Intuitively, siloed references are free of interference from other threads within the confines of such paths. Data-flow transformations can, in general, be unblocked on siloed references. The solution has been implemented in a widely used compiler. Results on benchmarks from SPLASH-2 show that performance improvements of up to 41% are possible, with an average improvement of 6% across all the tested programs over all thread counts.
Article
Over the past few years, a number of research investigations have been initiated for static analysis of concurrent and distributed software. In this paper we report on experiments with various optimization techniques for reachability-based deadlock detection in Ada programs using Petri net models. Our experimental results show that various optimization techniques are mutually beneficial with respect to the effectiveness of the analysis.
Article
This paper relates experience with building and using a programmable sequencing analyzer based on data flow analysis algorithms. An earlier paper described both the motivation for and the specification of Cecil, a powerful language for defining constraints on the sequencing of events and gave an algorithm for mapping the sequencing specifications defined by Cecil to data flow analysis algorithms. In this paper, we sketch the architecture of Cesar, a system for carrying out the analysis of Cecil sequencing constraints, describe the problems arising in the analysis of real-world programs, and indicate how we resolved these problems. Finally, we describe our experience in using Cesar, citing speed and efficiency characteristics of the current implementation, and suggesting the error-detection features and powers of Cesar.
Conference Paper
Precise dynamic race detectors report an error if and only if more than one thread concurrently exhibits a conflict on a memory access. They insert instrumentation at compile time to perform runtime checks on all memory accesses to ensure that all races are captured and no spurious warnings are generated. However, a dynamic race check for a particular memory access statement is guaranteed to be redundant if the statement can be statically identified as thread-interference-free. Despite significant recent advances in dynamic detection techniques, redundant checks remain a critical factor leading to the prohibitive overhead of dynamic race detection for multithreaded programs. In this paper, we present a new framework that eliminates redundant race checks and boosts dynamic race detection by performing static optimizations on top of a series of thread interference analysis phases. Our framework is implemented on top of LLVM 3.5.0 and evaluated with an industrial dynamic race detector, TSAN, which is available as part of the LLVM tool chain. 11 benchmarks from SPLASH2 are used to evaluate the effectiveness of our approach in accelerating TSAN by eliminating redundant interference-free checks. The experimental results demonstrate that our new approach achieves from 1.4x to 4.0x (2.4x on average) speedup over the original TSAN under a 4-thread setting, and from 1.3x to 4.6x (2.6x on average) speedup under a 16-thread setting.
Article
The Arcturus system demonstrates several important principles that will characterize advanced Ada programming support environments. These include conceptual simplicity, tight coupling of tools, and effective command and editing concepts. Arcturus supports interactive program development and permits the combined use of interpretive and compiled execution. Arcturus is not complete however, as practical, mature environments for Ada must also support the development, analysis, testing, and debugging of concurrent programs. These issues are currently being explored. Arcturus, therefore is a platform for experimental exploration of key programming environment issues. This paper focuses primarily on the current system, describing and illustrating some of its components, while issues less fully developed are more briefly described.
Article
One important issue in parallel program debugging is the efficient detection of access anomalies caused by uncoordinated accesses to shared variables. On-the-fly detection of access anomalies has two advantages over static analysis or post-mortem trace analysis. First, it reports only actual anomalies during execution. Second, it produces shorter traces for post-mortem analysis purposes if an anomaly is detected, since generating further trace information after the detection of an anomaly is of dubious value. Existing methods for on-the-fly access anomaly detection suffer from performance penalties since the execution of the program being debugged has to be interrupted on every access to shared variables. In this paper, we propose an efficient cache-based access anomaly detection scheme that piggybacks on the overhead already paid by the underlying cache coherence protocol.
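The core check behind on-the-fly anomaly detection can be sketched in a few lines: record accesses to each shared variable as they occur within a parallel region, and flag the moment two accesses from different threads conflict (at least one being a write). This simplified sketch models unordered accesses only; it is not the paper's cache-based scheme, which piggybacks the same check on coherence traffic.

```python
from collections import defaultdict

# Per-variable access history for one parallel region: var -> [(thread, kind)]
accesses = defaultdict(list)

def record(thread, var, kind):
    """Record an access (`kind` is 'r' or 'w'); return an anomaly or None."""
    for t, k in accesses[var]:
        # Conflict: different threads, at least one write, no ordering assumed.
        if t != thread and (k == "w" or kind == "w"):
            return f"anomaly on {var}: {t}:{k} vs {thread}:{kind}"
    accesses[var].append((thread, kind))
    return None

print(record(1, "x", "r"))  # no anomaly yet
print(record(2, "x", "r"))  # read/read is fine
print(record(2, "x", "w"))  # conflicts with thread 1's read
```

A real detector resets or intersects these histories at synchronization points, so that properly ordered accesses are not reported.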
Article
Understanding synchronization is important for a parallel programming tool that uses dependence analysis as the basis for advising programmers on the correctness of parallel constructs. This paper discusses static analysis methods that can be applied to parallel programs with event variable synchronization. The objective is to be able to predict potential data races in a parallel program. The focus is on how dependencies and synchronization statements inside loops can be used to analyze complete programs with parallel loop and parallel case style parallelism.
Article
OpenSHMEM Analyzer (OSA) is a compiler-based tool that provides static analysis for OpenSHMEM programs. It was developed with the intention of providing feedback to users about semantic errors due to incorrect use of the OpenSHMEM API in their programs, thus making development of OpenSHMEM applications an easier task for beginners as well as experienced programmers. In this paper we discuss the improvements to the OSA tool to perform parallel analysis to detect the collective synchronization structure of a program. Synchronization is a critical aspect of all programming models, and in OpenSHMEM it is the responsibility of the programmer to introduce synchronization calls to ensure the completion of communication among processing elements (PEs), to prevent use of old/incorrect data, avoid deadlocks, and ensure data-race-free execution, keeping in mind the semantics of the OpenSHMEM library specification. Our analysis yields three tangible outputs: a detailed control-flow graph (CFG) marking all the OpenSHMEM calls used, a system dependence graph, and a barrier tree. The barrier tree represents the synchronization structure of the program in a simple manner that enables visualization of the program's synchronization, keeping in mind the concurrent nature of SPMD applications that use OpenSHMEM library calls. This provides a graphical representation of the synchronization calls in the order in which they appear at execution time and how the different PEs in OpenSHMEM may encounter them based upon the different execution paths available in the program. Our results include the summarization of the analysis conducted within the middle-end of a compiler and the improvements we have made to the existing analysis to make it aware of the parallelism in the OpenSHMEM program.
Article
The storage of vector clocks among various processes in asynchronous communication is studied. In view of the shortcomings of the static storage technique for vector clocks, an approach for dynamically allocating vector clock space is proposed, based on the current event-monitoring method and the monitoring characteristics of distributed program debugging. The experimental results show that the approach can decrease the burden of program debugging and markedly improve the efficiency of distributed program debugging.
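For context, a minimal statically sized vector clock works as follows. This is an illustrative sketch of the standard construction, not the paper's dynamic allocation scheme.

```python
# A minimal vector clock sketch: each process keeps one integer per
# known process, ticks its own slot on local events, and merges
# componentwise maxima on message receipt.
class VectorClock:
    def __init__(self, pid, n):
        self.pid, self.v = pid, [0] * n

    def tick(self):                       # local event
        self.v[self.pid] += 1

    def send(self):                       # timestamp attached to a message
        self.tick()
        return list(self.v)

    def recv(self, stamp):                # merge on receive, then tick
        self.v = [max(a, b) for a, b in zip(self.v, stamp)]
        self.tick()

def happened_before(u, v):
    """u -> v iff u <= v componentwise and u != v."""
    return all(a <= b for a, b in zip(u, v)) and u != v

p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
m = p0.send()          # p0's clock: [1, 0]
p1.recv(m)             # p1's clock: [1, 1]
print(happened_before(m, p1.v))  # True: the send precedes the receive
```

The static-storage drawback the paper addresses is visible here: the vector length `n` must be fixed up front for all processes.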
Article
Master the essentials of concurrent programming, including testing and debugging. This textbook examines languages and libraries for multithreaded programming. Readers learn how to create threads in Java and C++, and develop essential concurrent programming and problem-solving skills. Moreover, the textbook sets itself apart from other comparable works by helping readers to become proficient in key testing and debugging techniques. Among the topics covered, readers are introduced to the relevant aspects of Java, the POSIX Pthreads library, and the Windows Win32 Applications Programming Interface. The authors have developed and fine-tuned this book through the concurrent programming courses they have taught for the past twenty years. The material, which emphasizes practical tools and techniques to solve concurrent programming problems, includes original results from the authors' research. Chapters include: Introduction to concurrent programming; The critical section problem; Semaphores and locks; Monitors; Message-passing; Message-passing in distributed programs; Testing and debugging concurrent programs. As an aid to both students and instructors, class libraries have been implemented to provide working examples of all the material that is covered. These libraries and the testing techniques they support can be used to assess student-written programs. Each chapter includes exercises that build skills in program writing and help ensure that readers have mastered the chapter's key concepts. The source code for all the listings in the text and for the synchronization libraries is also provided, as well as startup files and test cases for the exercises. This textbook is designed for upper-level undergraduates and graduate students in computer science. With its abundance of practical material and inclusion of working code, coupled with an emphasis on testing and debugging, it is also a highly useful reference for practicing programmers.
Article
The Arcturus system demonstrates several important principles that will characterize advanced Ada programming support environments. These include conceptual simplicity, tight coupling of tools, and effective command and editing concepts. Arcturus supports interactive program development and permits the combined use of interpretive and compiled execution. Arcturus is not complete however, as practical, mature environments for Ada must also support the development, analysis, testing, and debugging of concurrent programs. These issues are currently being explored. Arcturus, therefore is a platform for experimental exploration of key programming environment issues. This paper focuses primarily on the current system, describing and illustrating some of its components, while issues less fully developed are more briefly described.
Article
Concurrent programs exhibit nondeterministic behavior in that multiple executions with the same input might produce different sequences of synchronization events and different results. This is because different executions of a concurrent program with the same input may exhibit different interleavings. Thus, one of the major issues in the testing of concurrent programs is how to explore different interleavings or exhaust all the possible interleavings of the target programs. However, terminating concurrent programs that have cyclic state spaces, due to iterative statements such as busy-waiting loops, might have an infinite number of feasible synchronization sequences; that is, there is an infinite number of possible interleavings, which makes it impossible to explore them all for this type of concurrent program. To overcome this problem, we propose a testing scheme called dynamic effective testing that can perform state-cover testing for nondeterministic terminating concurrent programs with an infinite number of synchronization sequences. Dynamic effective testing does not require static analysis of the target concurrent program or the assistance of a model checker, and thus is loosely coupled to the syntax of the target concurrent program. It only needs to analyze sequences of synchronization events produced by the execution of the concurrent program for race detection and state-traversal control. Therefore, the method is easy to port to different programming languages. In addition, only reiterated states discovered in a single SYN-sequence need to be stored. The implementation and experimental results obtained with real code demonstrate that source-code-level dynamic testing can be systematically performed on nondeterministic concurrent programs with infinite synchronization sequences.
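The "store only reiterated states" idea can be pictured as cutting a SYN-sequence off at the first repeated state, so a cyclic suffix need not be explored again. A toy sketch with invented state names, not the paper's scheme:

```python
# Truncate a SYN-sequence at the first reiterated state: the prefix up
# to the repeat is all that needs to be stored, since the cyclic suffix
# revisits states already covered.
def truncate_at_repeat(syn_sequence):
    seen, prefix = set(), []
    for state in syn_sequence:
        if state in seen:
            return prefix          # repeat found: drop the cyclic suffix
        seen.add(state)
        prefix.append(state)
    return prefix                  # no repeat: the whole sequence is new

# A busy-waiting loop revisits s1 before reaching s3:
print(truncate_at_repeat(["s0", "s1", "s2", "s1", "s3"]))  # ['s0', 's1', 's2']
```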
Conference Paper
VeriSoft is a tool for systematically exploring the state spaces of systems composed of several concurrent processes executing arbitrary code written in full-fledged programming languages such as C or C++. The state space of a concurrent system is a directed graph that represents the combined behavior of all concurrent components in the system. By exploring its state space, VeriSoft can automatically detect coordination problems between the processes of a concurrent system. We report in this paper our analysis with VeriSoft of the "Heart-Beat Monitor" (HBM), a telephone switching application developed at Lucent Technologies. The HBM of a telephone switch determines the status of different elements connected to the switch by measuring propagation delays of messages transmitted via these elements. This information plays an important role in the routing of data in the switch, and can significantly impact switch performance. We discuss the steps of our analysis of the HBM using VeriSoft. Because no modeling of the HBM code is necessary with this tool, the total elapsed time before being able to run the first tests was on the order of a few hours, instead of the several days or weeks that would have been needed for the (error-prone) modeling phase required with traditional model checkers or theorem provers. We then present the results of our analysis. Since VeriSoft automatically generates, executes and evaluates thousands of tests per minute and has complete control over nondeterminism, our analysis revealed HBM behavior that is virtually impossible to detect or test in a traditional lab-testing environment. Specifically, we discovered flaws in the existing documentation on this application and unexpected behaviors in the software itself. These results are being used as the basis for the redesign of the HBM software in the next commercial release of the switching software.
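In the same spirit, a tiny hand-rolled state-space search can expose a coordination problem such as deadlock. The model below (two processes acquiring two locks in opposite orders) is a toy sketch, not VeriSoft.

```python
# Explicit-state search over a two-process, two-lock model.  A state is
# (program counters, lock owners); a deadlock is a state with no
# successors where some process has not yet finished.
from collections import deque

# P0 takes a then b, P1 takes b then a; both release afterwards.
P = [
    [("acq", "a"), ("acq", "b"), ("rel", "b"), ("rel", "a")],
    [("acq", "b"), ("acq", "a"), ("rel", "a"), ("rel", "b")],
]

def successors(pcs, owners):
    for i, prog in enumerate(P):
        if pcs[i] == len(prog):
            continue                                   # process finished
        op, lock = prog[pcs[i]]
        if op == "acq" and owners.get(lock) is not None:
            continue                                   # blocked on held lock
        new_owners = dict(owners)
        new_owners[lock] = i if op == "acq" else None
        new_pcs = list(pcs)
        new_pcs[i] += 1
        yield tuple(new_pcs), new_owners

def find_deadlock():
    seen, frontier = set(), deque([((0, 0), {})])
    while frontier:
        pcs, owners = frontier.popleft()
        key = (pcs, tuple(sorted(owners.items())))
        if key in seen:
            continue
        seen.add(key)
        succs = list(successors(pcs, owners))
        if not succs and any(pc < len(P[i]) for i, pc in enumerate(pcs)):
            return pcs, owners                         # blocked, not finished
        frontier.extend(succs)
    return None

print(find_deadlock() is not None)  # True: the classic cyclic-wait deadlock
```

Making both processes acquire the locks in the same order removes the deadlock, which the same search confirms by exhausting the state space without finding a blocked state.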
Article
An in-depth review of key techniques in software error detection. Software error detection is one of the most challenging problems in software engineering. Now, you can learn how to make the most of software testing by selecting test cases to maximize the probability of revealing latent errors. Software Error Detection through Testing and Analysis begins with a thorough discussion of test-case selection and a review of the concepts, notations, and principles used in the book. Next, it covers: code-based test-case selection methods; specification-based test-case selection methods; additional advanced topics in testing; analysis of symbolic traces; static analysis; and program instrumentation. Each chapter begins with a clear introduction and ends with exercises for readers to test their understanding of the material. Plus, appendices provide a logico-mathematical background, glossary, and questions for self-assessment. Assuming a basic background in software quality assurance and an ability to write nontrivial programs, the book is independent of the programming languages and paradigms used to construct the program under test. Software Error Detection through Testing and Analysis is suitable as a professional reference for software testing specialists, software engineers, software developers, and software programmers. It is also appropriate as a textbook for software engineering, software testing, and software quality assurance courses at the advanced undergraduate and graduate levels.
Article
Deadlock detection for concurrent systems via static analysis is in general difficult because of state-space explosion; indeed, it is PSPACE-complete. This paper presents a new method to detect deadlocks. A concurrent system consisting of several processes that communicate using a resource-sharing mechanism is represented by a set of ordinary differential equations of a restricted type. The equations describe the system's state changes, and their solutions, also called state measures, indicate the extent to which each state can be reached in execution. Based on the solutions, resource deadlocks can be detected. By taking into account the computation errors of the numerical solution of the differential equations, the detection can be performed via a MATLAB solver, as shown in the experiments. The complexity of the proposed method is polynomial.
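The ODE encoding itself is not reproduced here; as a classical point of comparison, a resource deadlock can also be detected as a cycle in the wait-for graph (edge i → j meaning process i waits for a resource that j holds). Process names below are hypothetical.

```python
# Cycle detection on a wait-for graph via depth-first search with
# WHITE/GRAY/BLACK coloring: a GRAY node reached again closes a cycle,
# i.e., a circular wait.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True                  # back edge: circular wait
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in list(graph))

# P1 waits for P2, P2 for P3, P3 for P1: a deadlock.
wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
print(has_cycle(wait_for))  # True
```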
Article
We present a generic approach to the static analysis of concurrent programs with procedures. We model programs as communicating pushdown systems. It is known that typical dataflow problems for this model are undecidable, because the emptiness problem for the intersection of context-free languages, which is undecidable, can be reduced to them. In this paper we propose an algebraic framework for defining abstractions (upper approximations) of context-free languages. We consider two classes of abstractions: finite-chain abstractions, whose domains do not contain any infinite chains, and commutative abstractions, corresponding to classes of languages that contain a word if and only if they contain all its permutations. We show how to compute such approximations by combining automata-theoretic techniques with algorithms for solving systems of polynomial inequations in Kleene algebras.
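A commutative abstraction can be pictured on finite language samples: keep only each word's Parikh vector (its letter counts). If the abstractions of two languages are disjoint, the languages themselves provably share no word. This is a toy sketch, not the paper's Kleene-algebra machinery.

```python
# Parikh-image abstraction on finite languages: a word maps to its
# multiset of letters, so all permutations of a word collapse to the
# same abstract element (the "commutative" property).
from collections import Counter

def parikh_image(language):
    return {tuple(sorted(Counter(w).items())) for w in language}

L1 = {"ab", "aabb"}      # a sample of { a^n b^n }
L2 = {"ba", "bbaa"}      # permutations of the same letters
L3 = {"a", "aa"}         # words with no b at all

# The abstraction cannot distinguish L1 from L2:
print(parikh_image(L1) == parikh_image(L2))          # True
# Disjoint abstractions prove L1 and L3 have empty intersection:
print(parikh_image(L1) & parikh_image(L3) == set())  # True
```

This is exactly the trade-off of an upper approximation: it may merge distinct languages, but a negative answer (empty abstract intersection) is always sound.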
Conference Paper
A structured presentation of a proof system for CSP programs is given. The presentation is based on the approach of Apt, Francez and de Roever [AFR]. Its new aspects are the use of static analysis and of proofs from assumptions instead of proof outlines. Also, in contrast to [AFR], total correctness is studied.
Article
Full-text available
The constrained expression approach to analysis of concurrent software systems can be used with a variety of design and programming languages and does not require a complete enumeration of the set of reachable states of the concurrent system. The construction of a toolset automating the main constrained expression analysis techniques and the results of experiments with that toolset are reported. The toolset is capable of carrying out completely automated analyses of a variety of concurrent systems, starting from source code in an Ada-like design language and producing system traces displaying the properties represented by the analyst's queries. The strengths and weaknesses of the toolset and the approach are assessed on both theoretical and empirical grounds.
Article
In order to understand and analyze real-time distributed programs, one must account for interactions between processes. Unfortunately, these interactions can be quite complex due to concurrency and nondeterminism. This paper describes a framework for automated static analysis of distributed programs written in Ada. The analysis is aimed at discovery of a program's potential tasking behavior, that is, behavior in terms of tasking-related issues. Central to the framework is the translation of a program into an abstract grammar system that represents a Petri net graph model.
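The Petri net view of an Ada rendezvous can be sketched with a marking and a firing rule: the rendezvous transition needs one token from the calling task and one from the accepting task. Place names are invented for illustration; the paper's grammar-based translation is not reproduced here.

```python
# A toy Petri net: markings are place -> token-count dicts, and a
# transition fires only when every pre-place holds enough tokens.
def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# The rendezvous consumes a caller token and an acceptor token.
rendezvous = {"pre": {"caller_at_entry": 1, "acceptor_at_accept": 1},
              "post": {"in_rendezvous": 1}}

m0 = {"caller_at_entry": 1, "acceptor_at_accept": 0}
print(enabled(m0, rendezvous["pre"]))   # False: acceptor not ready, caller blocks

m1 = {"caller_at_entry": 1, "acceptor_at_accept": 1}
m2 = fire(m1, rendezvous["pre"], rendezvous["post"])
print(m2["in_rendezvous"])              # 1: rendezvous in progress
```

Reachability questions on such a net (can the rendezvous ever fire? is a marking with all tasks blocked reachable?) are the tasking-behavior questions the framework targets.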
Article
The paper describes a reverse engineering process for producing design-level documents by static analysis of Ada code. The produced documents, which we call concurrent data flow diagrams, describe the task structure of a software system and the data flow between tasks. First, concurrent data flow diagrams are defined and discussed; then the main characteristics and features of the reconstruction process are illustrated. The process has been used to support maintenance and reuse activities on existing real-time software and to check consistency between design and code.
Article
This chapter presents a number of ideas that originated in the evolution of programming from arts and crafts to a science. The chapter describes computer arithmetic in two stages. In the first stage, axioms are given for arithmetic operations on natural numbers, which are valid independently of their computer representation, and choices of supplementary axioms are proposed for characterizing various possible implementations. In the second stage, an axiomatic definition of program execution is introduced. An axiomatic approach is indispensable for achieving program reliability. The usefulness of program proving is advocated in view of the cost of programming errors and program testing. The chapter discusses the definition of formal language. The axioms and rules of inference can be understood as the ultimate definitive specification of the meaning of the language.
Article
Algorithms are presented for detecting errors and anomalies in programs which use synchronization constructs to implement concurrency. The algorithms employ data flow analysis techniques. First used in compiler object code optimization, the techniques have more recently been used in the detection of variable usage errors in single process programs. By adapting these existing algorithms, the same classes of variable usage errors can be detected in concurrent process programs. Important classes of errors unique to concurrent process programs are also described, and algorithms for their detection are presented.
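One such anomaly, a wait that no path is guaranteed to have posted, can be sketched with a small iterative dataflow pass. The CFG and event names below are hypothetical; this is the flavor of the adaptation, not the paper's exact algorithms.

```python
# MUST-POSTED(n): the set of events posted on *every* path from entry
# to n.  A WAIT(e) node where e is not in MUST-POSTED signals a path on
# which the wait may never be satisfied.
def must_posted(cfg, labels, entry):
    all_events = {lab[1] for lab in labels.values() if lab}
    out = {n: set(all_events) for n in cfg}     # optimistic start (top)
    out[entry] = set()
    if labels.get(entry, ("", ""))[0] == "post":
        out[entry] = {labels[entry][1]}
    changed = True
    while changed:
        changed = False
        for n in cfg:
            preds = [p for p in cfg if n in cfg[p]]
            if not preds:
                continue
            inn = set.intersection(*(out[p] for p in preds))
            gen = {labels[n][1]} if labels.get(n, ("", ""))[0] == "post" else set()
            new = inn | gen
            if new != out[n]:
                out[n], changed = new, True
    return out

# Diamond CFG: only one branch posts e before the join's WAIT(e).
cfg = {"entry": ["b1", "b2"], "b1": ["join"], "b2": ["join"], "join": []}
labels = {"b1": ("post", "e"), "join": ("wait", "e")}
out = must_posted(cfg, labels, "entry")
anomaly = "e" not in out["join"]
print(anomaly)  # True: the path through b2 reaches WAIT(e) with no post
```

Posting `e` on both branches makes the intersection at the join contain `e`, and the anomaly disappears.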
Conference Paper
HAL/S is an advanced real-time higher order programming language specifically designed to meet stringent reliability and performance criteria. The HAL/S design emphasizes reliability and reduced maintenance costs through strict compiler checking, facilities for modular/structured programming, and standard, fully annotated and readable, program listings. Currently, HAL/S is being used for the programming of the Space Shuttle's onboard computers and has recently been established by NASA as a standard for flight software. The HAL/S programming system is described. Primary emphasis is given to the language itself; however, the compiler, execution and diagnostic systems, and program statistics packages are presented.
Article
This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a fundamental program structuring method. When combined with a development of Dijkstra's guarded command, these concepts are surprisingly versatile. Their use is illustrated by sample solutions of a variety of familiar programming exercises.
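CSP's synchronous `!` (output) and `?` (input) can be loosely imitated with threads and a bounded queue. This is only an approximation: a `Queue(maxsize=1)` still buffers one item, whereas a real CSP communication is fully synchronous.

```python
# Two communicating sequential processes joined by a channel.  The
# producer's put() roughly plays the role of  consumer ! x  and the
# consumer's get() the role of  producer ? x.
import threading, queue

chan = queue.Queue(maxsize=1)   # near-synchronous channel substitute
results = []

def producer():
    for x in [1, 2, 3]:
        chan.put(x)             # blocks until the consumer keeps up
    chan.put(None)              # end-of-stream marker

def consumer():
    while True:
        x = chan.get()
        if x is None:
            break
        results.append(x * 10)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [10, 20, 30]
```

The end-of-stream marker stands in for CSP's channel termination; guarded alternatives over several channels would need a select-like construct that the standard queue module does not provide.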
Article
Ada is the result of a collective effort to design a common language for programming large scale and real-time systems. This report is the proposed standard document for Ada. Since the early 70's, DoD has been committed to the use of High Order Languages and has developed an extensive set of requirements for a single language which could be adopted for use throughout DoD. In April 1978, the language designed by Jean Ichbiah and his team at Honeywell/Cii-Honeywell Bull, Paris, France, was chosen to be the new DoD language. For the ensuing fourteen months, Ada was subjected to an intense period of test and evaluation by the international computer science community and the design was modified in response to language issues raised during the test and evaluation. It will be added to the list of approved languages for use in DoD systems and promises to have a significant impact on DoD software development. Ada will be reviewed under American National Standards Institute (ANSI) canvass procedures for designation as an American National Standard and is on the agenda for the International Standards Organization (ISO). The DoD is concentrating on the development of compilers and an integrated programming support environment for Ada software. Although developed for use in DoD systems, Ada has generated considerable interest in the international computing community, particularly NATO, as well as in the U.S. commercial computing community. (Author)
Article
A language concept for concurrent processes without common variables is introduced. These processes communicate and synchronize by means of procedure calls and guarded regions. This concept is proposed for real-time applications controlled by microcomputer networks with distributed storage. The paper gives several examples of distributed processes and shows that they include procedures, coroutines, classes, monitors, processes, semaphores, buffers, path expressions, and input/output as special cases.
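The guarded-region idea ("when B do S": wait until condition B holds, then execute S atomically) can be loosely sketched with a condition variable. This is an analogy in a thread library, not Brinch Hansen's language construct.

```python
# A guarded region around a shared buffer: get() executes its body only
# when the guard (items nonempty) holds, re-checking after each wakeup.
import threading

class GuardedBuffer:
    def __init__(self):
        self.items, self.cond = [], threading.Condition()

    def put(self, x):                 # when True do: append and signal
        with self.cond:
            self.items.append(x)
            self.cond.notify_all()

    def get(self):                    # when items != [] do: pop
        with self.cond:
            while not self.items:     # re-evaluate the guard after wakeup
                self.cond.wait()
            return self.items.pop(0)

buf = GuardedBuffer()
threading.Thread(target=lambda: buf.put(42)).start()
print(buf.get())  # 42
```

The `while` loop around `wait()` is what makes this a guard rather than a one-shot signal: the condition is re-tested each time the region is re-entered.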
Article
This paper describes DAVE, a system for analysing Fortran programs. DAVE is capable of detecting the symptoms of a wide variety of errors in programs, as well as assuring the absence of these errors. In addition, DAVE exposes and documents subtle data relations and flows within programs. The central analytic procedure used is a depth-first search. DAVE itself is written in Fortran. Its implementation at the University of Colorado and some early experience are described.
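The flavor of this kind of data flow checking can be shown with a minimal use-before-definition pass over toy three-address statements. This is a sketch in the spirit of DAVE-style analysis, not its Fortran front end or depth-first machinery.

```python
# Straight-line use-before-definition check: walk the statements in
# order, tracking which variables have been defined, and report any
# use of a variable with no prior definition.
def undefined_uses(stmts):
    """stmts: list of (target, used_vars).  Returns (index, var) errors."""
    defined, errors = set(), []
    for i, (target, uses) in enumerate(stmts):
        for v in uses:
            if v not in defined:
                errors.append((i, v))
        defined.add(target)
    return errors

program = [("a", []),        # a = 1
           ("b", ["a"]),     # b = a + 1
           ("c", ["d"])]     # c = d      <- d is never defined
print(undefined_uses(program))  # [(2, 'd')]
```

Extending this to branching control flow is what turns it into a genuine dataflow analysis over a flow graph.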
Article
Symbolic testing and a symbolic evaluation system called DISSECT are described. The principal features of DISSECT are outlined. The results of two classes of experiments in the use of symbolic evaluation are summarized. Several classes of program errors are defined, and the reliability of symbolic testing in finding bugs is related to the classes of errors. The relationship of symbolic evaluation systems like DISSECT to classes of program errors and to other kinds of program testing and program analysis tools is also discussed. Desirable improvements in DISSECT, whose importance was revealed by the experiments, are mentioned.
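The core of symbolic evaluation can be sketched in a few lines: run a program over a store that maps variables to expression strings instead of values, so outputs come back as formulas over the symbolic inputs. A hand-rolled toy, not DISSECT.

```python
# Symbolic execution of straight-line three-address code: inputs stay
# symbolic, and each assignment builds an expression string from the
# current symbolic store.
def sym_exec(stmts, inputs):
    store = {v: v.upper() for v in inputs}     # inputs stay symbolic
    for target, op, l, r in stmts:
        lv = store.get(l, l)                   # operand: variable or literal
        rv = store.get(r, r)
        store[target] = f"({lv} {op} {rv})"
    return store

# t = a + b;  u = t * a
prog = [("t", "+", "a", "b"), ("u", "*", "t", "a")]
print(sym_exec(prog, ["a", "b"])["u"])  # ((A + B) * A)
```

A full system like DISSECT additionally tracks path conditions at branches and simplifies the resulting formulas, which is where most of the engineering effort lies.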
Complexity of Analyzing Concurrent Programs
  • R N Taylor
Requirements for Ada programming support environments: "Stoneman". Department of Defense, Feb. 1980.
  • J N Buxton
  • V Stanning