Figure 1: Example of an MTCD test case specification

Source publication
Article
Full-text available
Requirements-based functional testing of model-based embedded software is a crucial requirement of the ISO 26262 safety standard for passenger cars [1]. Test assessment of requirements-based test cases is a laborious task, and checking test results manually is prone to error. The intent of this paper is as follows: We introduce a method for require...

Context in source publication

Context 1
... test specification is in table form. For example, MS Excel can be used to draft an MTCD test specification. A well-defined MTCD test case is shown in Figure 1. Like other well-defined test specifications, the definition of a test case in MTCD consists of a number of basic attributes: a distinct test ID, a name, the description of the test case or test group, the requirements covered by this test case, the initialization or precondition of the test case, the test case action, and an additional type component to specify the entry type of the actual table row. The components of an MTCD test case are summarized in Table 1. The signal and parameter initialization defined in a ‘test-group’ will be applied to all test cases in this group. A well-defined ‘test-group’ should contain an identification number, a distinct name, a description, and a signal initialization. When defining a test case, it is necessary to enter the Test ID, Name, Requirements, Description, Initialization, Action, and expected ...
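As a rough illustration of the structure described in this excerpt, the following is a minimal Python sketch of a single MTCD-style test case record. The field names follow the attributes listed above (Test ID, Name, Requirements, Description, Initialization, Action, expected result, entry type); the dataclass itself, the signal names, and the concrete values are hypothetical and do not come from the MTCD tooling or its Excel template.

```python
# Minimal sketch of one MTCD-style test case row, assuming the attributes
# described in the context excerpt. All signal names and values below are
# hypothetical illustrations, not part of the actual MTCD specification.
from dataclasses import dataclass


@dataclass
class MtcdTestCase:
    test_id: str                      # distinct test ID
    name: str                         # short test case name
    requirements: list[str]           # requirement IDs covered by this test case
    description: str                  # what the test case checks
    initialization: dict[str, float]  # signal/parameter initialization (precondition)
    action: dict[str, float]          # stimulus applied to the model inputs
    expected: dict[str, float]        # expected output values after the action
    entry_type: str = "testcase"      # row type, e.g. "testgroup" or "testcase"


# Hypothetical example row, loosely modeled on the shape shown in Figure 1.
tc_001 = MtcdTestCase(
    test_id="TC_001",
    name="Heater off below threshold",
    requirements=["REQ_42"],
    description="Heater request stays off while temperature is below the threshold.",
    initialization={"temperature": 15.0, "ignition_on": 1.0},
    action={"temperature": 18.0},
    expected={"heater_request": 0.0},
)
```

A ‘test-group’ row would use the same shape with the entry type set accordingly, and, as stated above, its signal and parameter initialization would apply to all test cases in that group.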

Citations

... Simulation-Based Software Testing, a widely used technique to test Simulink models [8], relies on simulations to detect software failures. Simulation test cases are either manually specified by the user (e.g., [14], [15], [16]) or are automatically generated (e.g., [6], [17], [18], [19], [20], [21], [22], [23]). ...
... Manual test case specification builds on human capabilities (e.g., domain knowledge) to define test cases. However, the manual definition of test cases is a laborious task especially for large-scale industrial projects [22]. Therefore, there is a need for automated support to alleviate some of the manual effort. ...
... Automatically generating failure-revealing test cases is a widely recognized software engineering problem [62], [63]. Providing support for Test Sequence and Test Assessment blocks is a significant problem since these blocks are widely used by practitioners [64], [65] and support standard compliance [66], [22]. There is no approach that solves this problem: HECATE is the first solution addressing the problem. ...
Article
Full-text available
Simulation-based software testing supports engineers in finding faults in Simulink® models. It typically relies on search algorithms that iteratively generate test inputs used to exercise models in simulation to detect design errors. While simulation-based software testing techniques are effective in many practical scenarios, they are typically not fully integrated within the Simulink environment and require additional manual effort. Many techniques require engineers to specify requirements using logical languages that are neither intuitive nor fully supported by Simulink, thereby limiting their adoption in industry. This work presents HECATE, a testing approach for Simulink models using Test Sequence and Test Assessment blocks from Simulink® Test™. Unlike existing testing techniques, HECATE uses information from Simulink models to guide the search-based exploration. Specifically, HECATE relies on information provided by the Test Sequence and Test Assessment blocks to guide the search procedure. Across a benchmark of 18 Simulink models from different domains and industries, our comparison of HECATE with the state-of-the-art testing tool S-Taliro indicates that HECATE is both more effective (more failure-revealing test cases) and efficient (fewer iterations and less computational time) than S-Taliro for ≈94% and ≈83% of benchmark models, respectively. Furthermore, HECATE successfully generated a failure-revealing test case for a representative case study from the automotive domain, demonstrating its practical usefulness.
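To make the generate-simulate-evaluate cycle described in this abstract concrete, the following is a minimal Python sketch of a generic search-based falsification loop: generate a candidate test input, simulate the model, score how close the run comes to violating an assessment, and keep perturbing the most promising input. The simulate() and robustness() functions and the simple hill-climbing strategy are hypothetical placeholders; they are not HECATE's or S-Taliro's actual APIs or algorithms.

```python
# Minimal sketch of a simulation-based, search-driven testing loop, assuming a
# simple hill-climbing strategy. simulate() and robustness() are hypothetical
# stand-ins for running the model and scoring a trace against an assessment.
import random


def simulate(test_input):
    """Placeholder: run the model on the input and return an output trace."""
    raise NotImplementedError


def robustness(trace):
    """Placeholder: score the trace; values <= 0 indicate a violated assessment."""
    raise NotImplementedError


def search_for_failure(initial_input, iterations=100, step=0.1):
    """Iteratively perturb the test input, keeping changes that move the model
    closer to violating its assessment (lower robustness score)."""
    best_input = list(initial_input)
    best_score = robustness(simulate(best_input))
    for _ in range(iterations):
        candidate = [x + random.uniform(-step, step) for x in best_input]
        score = robustness(simulate(candidate))
        if score < best_score:
            best_input, best_score = candidate, score
        if best_score <= 0:  # failure-revealing test case found
            break
    return best_input, best_score
```

In the setting described by the abstract, the scoring step would presumably be derived from the Test Assessment block and the input generation from the Test Sequence block; the sketch only illustrates the general iterative search idea.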
... Simulation-Based Software Testing, a widely used technique to test Simulink models [8], relies on simulations to detect software failures. Simulation test cases are either manually specified by the user (e.g., [14], [15], [16]) or are automatically generated (e.g., [6], [17], [18], [19], [20], [21], [22], [23]). ...
... Manual test case specification builds on human capabilities (e.g., domain knowledge) to define test cases. However, the manual definition of test cases is a laborious task especially for large-scale industrial projects [22]. Therefore, there is a need for automated support to alleviate some of the manual effort. ...
... For this reason, this paper proposes an automated test case generation technique that supports Test Sequence and Test Assessment blocks (together referred to as Test Blocks for brevity in the rest of the paper). ...
Preprint
Full-text available
Simulation-based software testing supports engineers in finding faults in Simulink models. It typically relies on search algorithms that iteratively generate test inputs used to exercise models in simulation to detect design errors. While simulation-based software testing techniques are effective in many practical scenarios, they are typically not fully integrated within the Simulink environment and require additional manual effort. Many techniques require engineers to specify requirements using logical languages that are neither intuitive nor fully supported by Simulink, thereby limiting their adoption in industry. This work presents HECATE, a testing approach for Simulink models using Test Sequence and Test Assessment blocks from Simulink Test. Unlike existing testing techniques, HECATE uses information from Simulink models to guide the search-based exploration. Specifically, HECATE relies on information provided by the Test Sequence and Test Assessment blocks to guide the search procedure. Across a benchmark of 16 Simulink models from different domains and industries, our comparison of HECATE with the state-of-the-art testing tool S-TALIRO indicates that HECATE is both more effective (more failure-revealing test cases) and efficient (fewer iterations and less computational time) than S-TALIRO for ~94% and ~81% of benchmark models, respectively. Furthermore, HECATE successfully generated a failure-revealing test case for a representative case study from the automotive domain, demonstrating its practical usefulness.
... Similarly, temporal logic [15] can be used to test boundary regions based on operational requirements. These methods are useful during early-stage development, such as model-in-the-loop (MIL) testing, and are necessary to ensure all requirements are fulfilled [16]. These methods are based upon expert knowledge of the internal structure of the System Under Test (SUT), or are hand-designed for requirement verification [17]-[19]. ...
Article
Trends in the automotive industry confirm that the demand for testing of embedded systems, especially advanced driver assistance systems (ADAS), will grow dramatically in the near future. This paper proposes a new solution that automates the detection of software defects in embedded systems. The solution consists of a data-driven sampling algorithm to intelligently sample the testing space by sequentially generating test cases. Moreover, it segregates different defects from each other and identifies the signals that trigger each. The results are compared against other automated methods for defect identification and analysis, and it is found that this novel solution is able to identify defects more rapidly. In addition, it correctly separates defects and reliably reproduces each distinct defect.
... [7], [8], [9], [10] focus on high-level test planning, test frameworks, testing workflows, and the role of Hardware-in-the-loop testing, respectively. Where model-based approaches are suggested, they relate to the use of model-based development with fault injection using mutation testing at a function level rather than at the system level [11], [12], or the use of automated testing for ISO 26262 requirements-based functional testing of model-based embedded software [13]. Other publications focus on static testing for structural analysis rather than requirement testing, evaluating performance attributes such as timing, stack overflows, etc. [14], or on monitoring proof obligations through the use of an on-chip, real-time runtime verification monitor [15]. ...
Chapter
ISO 26262-6 provides requirements for the development of safety-related automotive software applications and establishes a set of methods that must be applied in the different software development and validation activities depending on the criticality level (ASIL) allocated to the software components. When adopting ISO 26262-6, organizations must respond to the requirements in the standard and identify how they are going to implement the different methods and controls. In the case of AUTOSAR Classic software developments using the C and C++ programming languages, the industry has previously documented references on how to adapt MISRA coding guidelines to respond to ISO 26262 requirements, but no equivalent proposal has been made and discussed in the context of AUTOSAR Adaptive developments. This paper proposes a tailoring of the AUTOSAR Coding Guidelines for C++, the coding standard typically used in AUTOSAR Adaptive developments, to respond to the requirements in the ISO standard.