A Conceptual Safety Supervisor Definition and
Evaluation Framework for Autonomous Systems
Patrik Feth, Daniel Schneider, Rasmus Adler
Fraunhofer Institute for Experimental Software Engineering
name.surname@iese.fraunhofer.de
Abstract. The verification and validation (V&V) of autonomous sys-
tems is a complex and difficult task, especially when artificial intelligence
is used to achieve autonomy. However, without proper V&V, sufficient
evidence to argue safety is not attainable. We propose in this work the
use of a Safety Supervisor (SSV) to circumvent this issue. However, the
design of an adequate SSV is a challenge in itself. To assist in this task, we
present a conceptual framework and a corresponding metamodel, which
are motivated and justified by existing work in the field. The concep-
tual framework supports the alignment of future research in the field
of runtime safety monitoring. Our vision is for the different parts of the
framework to be filled with exchangeable solutions so that a concrete SSV
can be derived systematically and efficiently, and that new solutions can
be embedded in it and get evaluated against existing approaches. To ex-
emplify our vision, we present an SSV that is based on the ISO 22839
standard for forward collision mitigation.
1 Introduction
Ever since software has been used to control machines, its role in this task has
expanded continuously. In order to fulfill the ever-increasing number of func-
tional and non-functional requirements, software is becoming more and more
complex. Currently we are witnessing that the requirement to act autonomously
is gaining importance. We consider autonomy not as the capability to act with-
out direct operator commands but as the capability to act without a predefined
behavior specification. To fulfill this need in cases where complex environment
perception and complex decision making are necessary, techniques known from
artificial intelligence, such as neural networks, are being introduced as part of
classical control systems. This brings a new class of complexity into these potentially safety-critical systems: what used to be merely hard to analyze can become impossible to analyze. Even
bigger than the complexity problem is the problem of autonomy. While most
established safety engineering techniques consider deviations from the intended functionality, the creation of this intended functionality is now the system's responsibility and can thus become an additional safety threat. For these reasons,
most established V&V techniques, methods, and tools are not applicable for AI-
controlled systems. Still, we need to gain confidence in their safety if they are to
be used in a real environment.
An approach for addressing the complexity problem that is already established
fairly well is runtime verification [21]. Runtime verification is a means for continuously verifying properties during runtime. This is in contrast to verifying them once and for all at development time, which might be infeasible or even impossible in some cases. When a violation of the properties is imminent or has occurred, the system is steered into a safe state. In its essence, runtime verification
is concerned with the correctness, i.e., the correct implementation of a given
specification, of the system. By shifting the verification of properties to runtime,
runtime verification addresses the problem that the increasing complexity of the
system makes high coverage testing and analysis infeasible. Yet these runtime
verification approaches still need a precise specification that is checked at run-
time. However, autonomy realized with the help of AI techniques is explicitly
used to eliminate the need for a precise specification of the system behavior in
every possible situation. Thus, classical runtime verification is not sufficient to
guarantee the safety of autonomous systems, and an additional runtime moni-
toring approach is needed that focuses on safety as the absence of unreasonable
risk. We are using the term Safety Supervisor (SSV) for this class of monitor-
ing approaches. The term Supervisor emphasizes that the SSV has the final say
about the control of the system.
Safety engineering for traditional systems, as codified in ISO 26262 [9], is usually
concerned with functional safety. Functional safety considers malfunctions as de-
viations from a defined intention, usually the operator input. It is the system’s
responsibility to follow this input as closely as possible even in the presence of
random, unavoidable hardware failures. Because of that, a safety analysis, e.g.
a Fault Tree Analysis, may look a lot like a reliability analysis. We argue that
for the new class of autonomously acting systems, systematic achievement of a
safe system behavior is increasingly becoming the focus of core system devel-
opment, e.g. by including a Safety Supervisor in the system architecture. This
is not covered by existing safety standards. The current discussion on the topic of safety of the intended functionality (SOTIF) is a symptom of this development. As the
necessary safety supervisory systems are highly complex and can influence the
system behavior significantly, we see great potential in the use of a Safety Su-
pervisor Definition and Evaluation Framework (SSV DEF).
A definition and evaluation framework on the level of functional abstraction can
be used to support early design decisions for the development of an SSV. From
an engineering perspective, the framework can be used to conduct what-if analy-
ses, comparing different meaningful combinations of available solutions to arrive
at an evidence-based decision about which algorithms to choose for the further
development of a safety monitor. From a research perspective, the framework
can be used to guide and support future research in the field. New solutions can
be embedded in it and can be evaluated against existing approaches. The con-
tribution of this paper is a well-founded conceptual framework aimed at guiding
our future development of the definition and evaluation framework. The SSV
DEF will be instantiated for the automotive domain in our future work, but
the conceptual framework is domain-independent and can also be instantiated
for other domains, such as industrial automation. As a further contribution, we
will give an overview of recent work regarding the elements of our conceptual
framework.
This paper is structured as follows: In section 2, we present our conceptual
framework for the definition of an SSV. The explanations of the different ele-
ments of the framework contain pointers to relevant related work in this area
and thus to design alternatives that can be considered when implementing a
Safety Supervisor. To illustrate the individual elements, we give an example of
a platoon driving system. Simulation results underline the benefit of analyzing
design alternatives early in the design process. Section 3 provides evidence for
the validity of our framework by analyzing existing runtime safety monitoring
approaches. Section 4 concludes the paper.
2 Conceptual Framework
Figure 1 presents our conceptual definition framework by means of a metamodel
for the safety supervision of autonomous systems and thus the main contribu-
tion of this work. The metamodel can be seen as a template that assists in
creating a concrete SSV as an instantiation of this model. The parts that form
the Safety World Model are motivated and justified by the related work analysis
presented later on. In addition to this, we see the necessity for a Risk Reduction
Strategy to decide which behavior shall be triggered if the current situation is
too critical. The Safety Argumentation explains the role of the Safety Super-
visor in guaranteeing system safety. In previous versions of the metamodel, we
focused more on the observability problem, i.e., on mapping internal variables
in the individual models to monitored and controlled variables of the system.
We stepped back from such a model as we see the observability problem as a
problem more closely related to the implementation phase, and we decided to shift our
focus to the functional design phase. The main challenge that we see for this
phase of the development is how to choose the right models and algorithms to
create a functionally effective supervisor. The SSV DEF is intended to assist
in this step. The outcome is a functional specification of the Safety Supervisor
containing evidence regarding effectiveness. How the variables in the algorithms
are mapped to observable variables and how the SSV is implemented is dealt
with in the subsequent development steps. One future goal of our approach is the
development of proper tool support for creating an SSV specification in order
to move from a conceptual framework to a library-like Safety Supervisor Defi-
nition and Evaluation Framework, which will additionally assist in the creation
of evidence that can be used in the safety argumentation. In the following, we will
go through the individual elements of the metamodel and explain their role in
the context of the SSV. For each of the elements that form the SSV, we give
an initial set of design considerations and point to related work in this field.
To develop a concrete example for the instantiation of the metamodel, we use a
forward collision avoidance system for truck platooning.
Fig. 1. SSV Metamodel: A conceptual framework for the definition of a Safety Super-
visor
2.1 Metamodel Elements
System: The Safety Supervisor is part of an autonomously acting system that
uses complex control algorithms in a safe case and is interrupted and overruled
by the SSV in a potentially unsafe case. This is in accordance with the SIM-
PLEX architecture introduced in [25]. We are considering a system specified
on the functional architecture level, for example using Simulink models. In the
SIMPLEX architecture, an SSV should be simple. An SSV that is too simple, however, might be too conservative and thus produce many false alerts. As a solution
to this issue, we consider a layered SSV design that contains very simple behav-
ior in a core layer and more complex behavior in an outer layer. This can be
supported by a definition framework by assigning costs to design alternatives.
Such a layered architecture additionally supports the fail-operational behavior
required for such a safety-critical component as the SSV. Detailed thoughts on
how to model an adaptive system with fail-operational behavior can be found in
[2]. We see potential in including this methodology in our SSV DEF.
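The SIMPLEX-style switching described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the dictionary-based state, and the 10 m gap threshold are all hypothetical.

```python
def supervised_step(state, complex_ctrl, simple_ctrl, situation_is_safe):
    # SIMPLEX-style switch: the complex control algorithm drives the
    # system as long as the situation is judged safe; otherwise the
    # SSV's simple, analyzable controller interrupts and overrules it.
    if situation_is_safe(state):
        return complex_ctrl(state)
    return simple_ctrl(state)

# Hypothetical usage: a braking fallback overrules the platooning
# controller once the gap to the leading truck shrinks below 10 m.
command = supervised_step(
    {"gap_m": 8.0},
    complex_ctrl=lambda s: "hold_speed",
    simple_ctrl=lambda s: "brake",
    situation_is_safe=lambda s: s["gap_m"] > 10.0,
)
```

A layered SSV in the sense discussed above would replace `simple_ctrl` with a chain of fallbacks, from a complex outer layer down to a very simple core layer.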
In our example, we are supervising a control algorithm for platoon driving. In
platoon driving, two or more trucks are driving directly behind each other and
only the leading truck is operated by a human driver. The motivation behind
platooning is to relieve drivers from the burden of driving, to save fuel through optimal driving distances, and, last but not least, ideally to be safer than driving in
manual mode. The latter might be achieved based on the fact that response times
are generally faster for machines and, maybe even more importantly, they are
known factors and not subject to fluctuations as is the case for human response
times. Even for such a rather simple system, let alone for fully autonomous
driving, formal verification to prove that the distance between two trucks is al-
ways more than 0 m is not possible. Also, testing each and every possible truck
combination and every scenario on every possible track does not appear to be
feasible or would require the possible situation space to be limited significantly.
Consequently, monitoring in the form of an SSV makes sense for such a platoon
driving system.
Safety Supervisor: The safety supervisor is the core component of the frame-
work and responsible for performing the actual monitoring. To this end, it utilizes
the other entities of the framework; hence it is the central element of the meta-
model. Based on the information from the Safety World Model (explained later),
the SSV assesses the safety of the current situation. If this results in the decision
to initiate a countermeasure, a suitable one is selected automatically from the
set of Risk Reduction Strategies.
In the platoon driving system, we install a Safety Supervisor that is responsible
for avoiding forward collisions. The ISO 22839 standard for Intelligent transport
systems - Forward vehicle collision mitigation systems - Operation, performance
and verification requirements [10] defines forward collision as a collision between
a vehicle (subject vehicle) and the vehicle in front of this vehicle that is driv-
ing in the same direction (target vehicle). As the platoon driving system shall
only be used on highways and changing lanes is not considered as part of the
functionality, it makes sense to focus on such forward collision accidents for the
Safety Supervisor.
Safety World Model: This element is a container for the information needed
to assess the safety and thus the risk of a situation. The output is a decision
about whether the normal control algorithm is allowed to control the system
further or whether some countermeasures are needed to steer the system into
a safer state. Setting this into the context of the SIMPLEX architecture [25], the
Safety World Model is a special form of Decision Logic focusing on safety. The
Safety World Model is composed of an internal representation of the current en-
vironment – the Situation Description – an understanding of how this situation
may evolve in the future – the Situation Prediction – and an assessment of the
risk of this situation – the Situation Risk Assessment. These elements have been
identified in the related work study. An approach that covers the full spectrum
of elements is presented in [24]. The three elements that build the Safety World
Model are dependent on each other. Only the state of those elements that are
part of the Situation Description can be predicted, and any quantification of the
risk of a situation highly depends on the available knowledge about the current
and possible future situations.
For the platoon driving example the elements of the Safety World Model will
be presented together after the Situation Risk Assessment element, as in the
ISO standard the metrics used for the Situation Risk Assessment are the most
explicitly represented element.
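Understood this way, the Safety World Model is a composition of three exchangeable parts whose dependency chain can be made explicit in code. The following sketch uses hypothetical names and trivial stand-ins; it is not an interface mandated by the paper.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SafetyWorldModel:
    # The three dependent elements: only what the description captures
    # can be predicted, and only predicted situations can be assessed.
    describe: Callable[[Any], Any]        # Situation Description
    predict: Callable[[Any, float], Any]  # Situation Prediction
    assess_risk: Callable[[Any], float]   # Situation Risk Assessment

    def needs_intervention(self, observation: Any, horizon_s: float,
                           risk_threshold: float) -> bool:
        situation = self.describe(observation)
        future = self.predict(situation, horizon_s)
        return self.assess_risk(future) >= risk_threshold

# Hypothetical usage with trivial stand-ins for the three elements:
model = SafetyWorldModel(
    describe=lambda obs: obs["gap_m"],
    predict=lambda gap, t: gap - 5.0 * t,          # closing at 5 m/s
    assess_risk=lambda gap: 1.0 if gap <= 0.0 else 0.0,
)
```

The output of `needs_intervention` corresponds to the decision about whether the normal control algorithm may continue or a countermeasure is needed.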
Situation Description: The highly dynamic environment in which autonomous
systems act needs to be modeled explicitly by a Situation Description. The first
decision for such a model regards which elements of the environment to consider.
For the automotive use case, alternatives include considering all elements in the
same lane, all elements in the same lane and in the adjacent lane, or all elements
within a certain radius. For other domains, other alternatives are possible. After
that decision, it needs to be decided which attributes of the elements to consider,
starting with the size, speed, or acceleration of the elements and proceeding to
more complex attributes, such as the value of an element or its probability of
existence. The final decision regarding which elements and which attributes to
consider must take into account that the Situation Description needs to contain
as much information about the environment as needed by the other elements of
the Safety Supervisor. Related work on the topic can be found in [11] [8] [12] or
[15].
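The two design decisions named above (which elements, which attributes) can be captured in a small data structure. The sketch below mirrors the simple single-lane, single-target description used later for the platooning example; all names are ours.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleState:
    # Attribute selection: distance, velocity, and acceleration suffice
    # for simple collision metrics; richer descriptions could add size,
    # object class, or probability of existence.
    distance_m: float          # gap to the subject vehicle along the lane
    velocity_mps: float
    acceleration_mps2: float

@dataclass
class SituationDescription:
    # Element selection: only the vehicle directly ahead in the same
    # lane; adjacent lanes and static objects are out of scope here.
    subject_velocity_mps: float
    subject_acceleration_mps2: float
    target: Optional[VehicleState]   # None if no vehicle ahead
```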
Situation Prediction: The methods used for Situation Risk Assessment evalu-
ate a situation according to potential future harm. Predictions need to be made
covering the entire period from the point in time where the evaluation is per-
formed to the point in time where the harm might happen. Thus, the Situation
Risk Assessment is inherently based on prediction models. Lefèvre et al. provide a more detailed insight into this dependence in [17]. We demand that these pre-
diction models need to be made explicit. The models can only address elements
and attributes that are represented in the Situation Description and describe
how the attributes will evolve in the future. For the supervised system, the fu-
ture development can be based on the intended behavior of the AI system, if
known. For other elements in the Situation Description, the observed attributes
might influence the predictions. For example, if an element is classified as a
child, the prediction will differ from elements that are classified as trained traf-
fic participants. Potentially, a multitude of prediction models is possible, from
non-probabilistic and simple constant velocity / acceleration models via non-
probabilistic model-based prediction models to probabilistic models that may
be arbitrarily complex. A good trade-off between overcomplicated and oversim-
plified prediction models needs to be found. Overcomplicated models might show
a set of possible future situations that could be too large to handle, while over-
simplified models might not consider important future situations at all. Wiest
et al. propose a framework for probabilistic maneuver prediction in [31]. In this
framework, the prediction models are created with machine learning methods.
They show the application of this approach for the creation of a Situation Predic-
tion for an intersection. We find this approach very promising and see additional
potential in using comparable approaches for Situation Risk Assessment with
the use of data mining techniques from recorded vehicle data.
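The two simplest, non-probabilistic model families mentioned above can be written down directly for one-dimensional motion along the lane (the function names are hypothetical):

```python
def predict_constant_velocity(x0_m, v_mps, t_s):
    # Constant-velocity model: the observed velocity is assumed to
    # stay fixed over the prediction horizon.
    return x0_m + v_mps * t_s

def predict_constant_acceleration(x0_m, v_mps, a_mps2, t_s):
    # Constant-acceleration model: a slightly richer prediction that
    # also keeps the observed acceleration fixed.
    return x0_m + v_mps * t_s + 0.5 * a_mps2 * t_s ** 2
```

Probabilistic or maneuver-based models, as in [31], would instead return a distribution over future situations rather than a single point.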
Situation Risk Assessment: The assessment of the risk of a situation can be
done qualitatively, as in [18], or quantitatively, as in [29]. If done quantitatively,
special safety metrics are needed. These metrics rate a situation regarding its
criticality, i.e., the risk that it may result in a harmful situation. A risk assess-
ment is needed to separate the situation space that the system might encounter
into a safe space, where the complex control algorithm is allowed to operate, and
a potentially unsafe space, where actions by the Safety Supervisor are needed to
keep accidents from happening. The space of possible metrics is limited by the
attributes considered in the Situation Description and the prediction models in
the Situation Prediction. The time to critical collision probability metric used in
[24] requires a probabilistic Situation Prediction while a simple time to collision
metric is not compatible with such a probabilistic model but works with a con-
stant relative velocity model. These strong dependencies among the elements of
the Safety World Model further motivate the use of an SSV DEF.
The ISO 22839 standard uses two time-to-collision metrics and gives two equa-
tions for the calculation of the metrics based on different assumptions. Equation
1 is used to calculate the time to collision, which is defined in the standard as
time that it will take a subject vehicle to collide with the target vehicle assuming
the relative velocity remains constant. Thus, the constant relative velocity pre-
diction is made explicit. Implicit is the prediction that both vehicles stay on the
collision course. Equation 2 is used to calculate the enhanced time to collision,
which is defined in the standard as time that it will take a subject vehicle to
collide with the target vehicle assuming the relative acceleration between the sub-
ject vehicle and the target vehicle remains constant. Again, the collision course
assumption is implicit.
TTC = -x_c / v_r    (1)

ETTC = [-(v_TV - v_SV) - sqrt((v_TV - v_SV)^2 - 2 (a_TV - a_SV) x_c)] / (a_TV - a_SV)    (2)

x_c is defined as the distance between the two vehicles; v_r as the relative velocity (v_TV - v_SV), which is negative while the gap is closing; v_TV as the velocity of the target vehicle, i.e., the leading truck; v_SV as the velocity of the subject vehicle, i.e., the following truck; a_SV and a_TV as the respective accelerations.
These time-to-collision metrics can be calculated using a very simple Situation
Description that consists of one fixed trajectory on which the subject vehicle trav-
els and a potential target vehicle that travels in front of it in the same direction
on the same trajectory with a certain distance, velocity, and acceleration. This
model already allows the calculation of the time-to-collision metric using the
equations 1 and 2. The simplicity of this Situation Description directly shows
the limits of the SSV that we are instantiating for the platoon driving system.
Static objects, vehicles in other lanes, or any other vehicles besides the subject
and target vehicles are not considered in the representation of the environment.
Consequently, no criticality metric refers to these elements and they are not con-
sidered in the risk assessment of the current situation.
By applying the ISO 22839 standard, we are using two prediction models for
the Situation Prediction. The first is a constant-relative-velocity model used for
the calculation in equation 1 and the second is a constant-relative-acceleration
model used for the calculation in equation 2. In both prediction models, it is
assumed that the vehicles keep on traveling along the same trajectory.
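Under these two prediction models, the metrics of equations 1 and 2 can be computed as follows. This is a sketch: the function names are ours, and returning infinity for situations in which no collision is reached under the model is our convention, not part of the standard.

```python
import math

def time_to_collision(x_c, v_sv, v_tv):
    # Eq. (1): TTC under the constant-relative-velocity prediction.
    v_r = v_tv - v_sv            # relative velocity; negative when closing
    if v_r >= 0:
        return math.inf          # gap is not closing under this model
    return -x_c / v_r

def enhanced_time_to_collision(x_c, v_sv, v_tv, a_sv, a_tv):
    # Eq. (2): enhanced TTC under the constant-relative-acceleration
    # prediction.
    v_r = v_tv - v_sv
    a_r = a_tv - a_sv
    if abs(a_r) < 1e-9:          # degenerate case: reduces to Eq. (1)
        return time_to_collision(x_c, v_sv, v_tv)
    disc = v_r * v_r - 2.0 * a_r * x_c
    if disc < 0:
        return math.inf          # the vehicles never meet under this model
    t = (-v_r - math.sqrt(disc)) / a_r
    return t if t >= 0 else math.inf
```

For example, a following truck at 25 m/s, 20 m behind a leader at 20 m/s, has a TTC of 4 s; if the leader additionally brakes at 2 m/s^2, the enhanced TTC drops to roughly 2.6 s.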
Risk Reduction Strategy: The Safety Supervisor uses the Safety World Model
to determine whether to become active. Once the decision to become active has
been made, the SSV needs to select a behavior that will lead to a less critical
situation. The knowledge needed to select the right behavior strategy is encap-
sulated in the Risk Reduction Strategy element. Also, different solutions are
possible in this field. The strategies can be derived by solving an optimization
problem regarding the Safety World Model and considering the control capa-
bilities given to the Safety Supervisor. Thoughts on this can be found in [4].
Alternatively, the set of strategies can be fixed as proposed in [27]. Considering
the selection of an adequate behavior as an optimization problem might be a
promising solution, but adds complexity to the SSV. Following the idea of a
layered Safety Supervisor presented above, such complex behavior can be con-
sidered on an outer layer that is only used if resources are available, while in
other cases a simpler Risk Reduction Strategy is used.
In the ISO 22839 standard, strategies are recommended based on the value of
the time-to-collision metrics. For rather high values, the standard recommends a
driver warning while for low values, the system shall actively perform a braking
maneuver. The definition of the exact thresholds is left to the producer of the
system. However, an SSV DEF could also assist in this step.
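A fixed-strategy selection in this spirit can be sketched as a simple threshold function. The threshold values below are purely illustrative; as noted, the standard leaves the exact values to the producer of the system.

```python
def select_countermeasure(ttc_s, warn_threshold_s=4.0, brake_threshold_s=1.5):
    # Fixed set of Risk Reduction Strategies keyed on the TTC metric:
    # warn the driver at moderately low TTC, brake actively at very
    # low TTC. Both thresholds are hypothetical example values.
    if ttc_s <= brake_threshold_s:
        return "emergency_braking"
    if ttc_s <= warn_threshold_s:
        return "driver_warning"
    return "no_intervention"
```

An optimization-based strategy, as in [4], would replace this lookup with a search over the control capabilities granted to the SSV.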
Safety Argumentation: It is hard to design a compelling safety argumenta-
tion for an autonomous system, in particular when AI algorithms are involved.
Actions towards this goal can be found in [26]. It is an interesting and highly
important, but still open question which role a Safety Supervisor can play in an
overall safety concept for autonomous systems. Especially in the domain of au-
tonomous vehicles, we can see that modern cars already contain systems such as
collision avoidance systems that override the input of the human driver to avoid
or mitigate the consequences of a collision in very critical situations. These sys-
tems act as a Safety Supervisor for the human driver and we expect high reuse of
such systems for the supervision of autonomous vehicles. Nevertheless, the func-
tionality of such existing avoidance and mitigation systems needs to be placed
into the context of a compelling safety argumentation for autonomous systems.
Related to the element of Safety Argumentation is the production of evidence.
After specifying the behavior of an SSV, we need to gain trust in the correct
implementation of this specification but also in its effectiveness for making the
system safe. Testing in the context of autonomous systems has been considered a
big challenge in the literature [13] and is one of the reasons why we use monitor-
ing at runtime in the first place. Thus, great care is necessary to assure that the
developed supervisor components can be tested and analyzed. Still, execution in
a controlled environment is necessary before the release. As argued in [30], this
can only be done efficiently with the right methodological and tool support. We
see early multi-domain simulation in the form of virtual validation as one of the
key enablers for this [6].
For the platoon driving system, we derived the high-level safety goal "Driving performed by the system is acceptably safe". From this safety goal, three sub-goals are derived: "System is not performing situation-specific unsafe behavior", "The driver is performing the driving activity if the system is not capable of doing it sufficiently safely", and "System is not producing a situation of unreasonable risk". The first goal
leads to a description of what safe driving means. Thoughts on this part of an
overall safety concept for autonomous vehicles can be found in our earlier work
[1]. The second goal refers to a safe operator-in-the-loop concept and is a cru-
cial part for systems up to automation level three of the SAE standard [22]. A
methodology for deriving safe operator-in-the-loop concepts is currently being
developed by the authors in parallel to this work. The last sub-goal is attached
to the SSV. At the early functional abstraction level, evidence for the fulfillment
of these goals needs to be created with the help of simulation. More thoughts on
this and thus on the evaluation part of the SSV DEF will be presented in the
following subsection.
2.2 Simulation Results
The narrative description of the instantiation of the Safety Supervisor meta-
model given in the previous subsection was translated into an executable Simulink
model. The resulting system with the SSV in place was used for the simulation.
The results of the simulation of a specific scenario can be seen in Figure 2.
Fig. 2. Simulation Results of the Platoon Driving System
The executed scenario is represented by the given acceleration of the leader
truck drawn on the right Y-axis. The platoon driving system shall adequately
adapt the acceleration of the following truck to this. The optimization goal is
to minimize the distance while avoiding forward collisions. Without the Safety
Supervisor in place, the system performs well regarding the first optimization
goal, but regarding freedom from collision, a violation occurs at the end of the
simulated scenario. The SSV, which uses the enhanced time-to-collision metric
as in equation 2, avoids this collision. As a drawback, the distance increases to
an unacceptable value as the SSV destabilizes the control algorithm. Using the
time-to-collision metric as in equation 1 shows good performance regarding both
collision avoidance and minimization of the distance between the trucks.
On the one hand, this simulation result provides evidence that the use of an
SSV can be beneficial for the safety of an autonomously acting system. More
important than this is the fact that it illustrates the need to analyze the de-
sign alternatives of a Safety Supervisor component as early in the development
process as possible. Great care needs to be taken to maintain both safe and
adequate behavior of the overall system. Different design options exist for the
definition of a Safety Supervisor, as has been shown by pointing to related work
for the elements of the metamodel. It cannot be expected that any of these ap-
proaches will be superior in all cases, but careful selection needs to take place.
Such a selection can be made on a more stable basis if evidence created by an
evaluation framework exists. In order to perform an appropriate evaluation and
produce sufficient evidence, more powerful simulation tools and more reasoning
about the safety argumentation are needed. As an important part of a future
SSV DEF we see an evaluation platform capable of delivering results regard-
ing the effectiveness of a particular design decision. As we are focusing on the
functional, i.e., the algorithmic level, we can abstract from details such as sen-
sor effects. This favors simulation solutions such as Pelops for automotive [7]
or FERAL as a more general solution [14] over solutions such as V-REP [20],
which focuses on a detailed physical simulation of the system. The question re-
mains what to simulate in such a tool. For autonomous vehicles, this question is
currently being investigated in the Pegasus project [19]. As part of this project,
different OEMs are cooperating to build a database with relevant driving situ-
ations that shall be successfully executed by an autonomous vehicle to increase
trust in its safe behavior. As part of our future work, an appropriate simulator
has to be chosen and needs to be integrated with a meaningful set of scenarios
in the SSV definition framework.
3 Related Work
An analysis of related work on supervisors that are concerned with safety led
to the observation that these approaches share common aspects that are rep-
resented in the metamodel presented above. Safety Supervisor approaches can
be found, for example, in [18] [29] [24] [32] [10]. Of these approaches, only [18]
considers safety monitoring at runtime in general. The other approaches are in-
stantiations of the concept for the automotive domain, i.e., active safety systems
for collision avoidance. All approaches need models to assess the current safety
of the system. We refer to this collection of models as the Safety World Model.
How this Safety World Model is built differs among the solutions, but patterns
can be found that helped us to derive our conceptual framework. We claim that
a complete Safety World Model needs models for Situation Description,Situa-
tion Prediction, and Situation Risk Assessment. This is also in accordance with
Situation Awareness Theory [5]. In this theory, it takes three steps to create situ-
ation awareness: Perception, Comprehension, and Projection. The Safety World
Model in the metamodel is our equivalent of Situation Awareness with a focus
on safety. Thus, the three elements of the Safety World Model map to the three
steps: Situation Description enables Perception, Situation Prediction allows Pro-
jection, and Situation Risk Assessment is our special form of Comprehension for
safety. Furthermore, we will demonstrate how the three elements are represented
in the related work listed above.
In [18], the authors explicitly model the possible state space that the system can
encounter with the values of observable variables. They present a methodology
for deriving properties that clearly classify the situation space into safe states,
warning states, and catastrophic states. In their work, the Situation Descrip-
tion is done via the variables and value ranges. Situation Predictions are paths
leading from one state to another. These predictions are not probabilistic
but are concerned with the reachability of critical states. In addition, the risk
of situations or states is not quantified but qualitatively assessed by assigning
it to either the safe, the warning, or the catastrophic class of states. In [29], the
authors propose a Safety Decision Unit to safeguard a truck platooning scenario.
In this decision unit, they quantify the risk of the current situation using dif-
ferent metrics such as the Break Threat Number or the worst-case impact speed
(Situation Risk Assessment). Implicitly, they limit the analysis of the current situation to the truck driving directly in front of the respective following vehicle in the platoon (Situation Description). The prediction made from this situation is a worst-case
prediction where at any point in time it is assumed that the leading truck may
initiate maximum braking. Under the prerequisite that the leading truck commu-
nicates its environmental perception, they propose a method for a more precise
calculation of the probability of a braking maneuver by the leading truck. This
can help to reduce the false positives created by the supervisory component by
lowering the criticality of certain situations (Situation Prediction). The Safety
Supervisor approach presented in [24] and its implementation presented in [32]
motivated parts of the Safety World Model presented above. In their work, the
authors describe the current situation by assigning probabilities for maneuvers to
all vehicles in the driving scene. Based on these maneuvers, they then determine
probabilities for trajectories and quantify the situation’s risk with an extension of
the time-to-collision metric called Time-to-Critical-Collision Probability. Thus,
the three elements of the Safety World Model are explicitly represented in this
solution. Another example was given when we instantiated the metamodel with
a forward collision avoidance system for a truck platooning system following the
ISO 22839 standard [10] in the previous section.
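To make the metrics named above concrete, the following sketch gives the standard textbook forms of a constant-velocity time-to-collision (the base metric that the probabilistic Time-to-Critical-Collision Probability of [24] and [32] extends) and of a Brake Threat Number in the spirit of [29]; these are illustrative formulations, not the exact ones used in the cited papers:

```python
import math


def time_to_collision(gap_m: float, v_follow_mps: float, v_lead_mps: float) -> float:
    """Constant-velocity time-to-collision between follower and leader.

    Returns infinity when the follower is not closing in on the leader.
    """
    closing = v_follow_mps - v_lead_mps
    return gap_m / closing if closing > 0.0 else math.inf


def brake_threat_number(gap_m: float, v_follow_mps: float, v_lead_mps: float,
                        a_max_mps2: float = 9.0) -> float:
    """Ratio of the deceleration required to just avoid a collision (lead
    vehicle assumed to hold its speed) to the maximum available deceleration.
    A value >= 1 means braking alone can no longer prevent the collision.
    """
    closing = v_follow_mps - v_lead_mps
    if closing <= 0.0:
        return 0.0
    a_required = closing ** 2 / (2.0 * gap_m)
    return a_required / a_max_mps2
```

Both metrics take a Situation Description (gap and speeds) and an implicit Situation Prediction (constant velocities, respectively braking-only avoidance) as input and produce a Situation Risk Assessment as output, which is exactly the pattern captured by the Safety World Model.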
Thus, we can see patterns in what is needed to determine the risk of a current
situation, i.e., the elements of which a Safety World Model consists. Additionally,
a Safety Supervisor needs a Risk Reduction Strategy, which is also part of the
presented safety monitoring approaches. In their work [16], Kurd et al. present
GSN-based argumentation on how dynamic risk management assures the safety
of a system. We see our Safety Supervisor as a component realizing dynamic risk
management, and we further see the need to create proper safety argumentation
that sets the SSV in relation to the supervised system and argues how the safety
of the overall system is achieved. As for the safety criticality of the supervisor component itself, we see addressing it as a necessary and natural part of the development, which shall be supported by an SSV DEF through the production of evidence.
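In its simplest conceivable form, a Risk Reduction Strategy of the kind discussed above reduces to a thresholded mapping from assessed risk to an intervention; the thresholds and action names below are purely illustrative assumptions, not taken from any of the cited approaches:

```python
def risk_reduction_strategy(risk: float, warn_at: float = 0.3,
                            brake_at: float = 0.7) -> str:
    """Map an assessed situation risk in [0, 1] to a supervisor action.

    The thresholds and action names are illustrative; a concrete SSV would
    derive them from the hazard analysis of the supervised system.
    """
    if risk >= brake_at:
        return "emergency_brake"
    if risk >= warn_at:
        return "warn_and_degrade"
    return "nominal"
```

This mirrors the qualitative safe/warning/catastrophic classification of [18]: the quantified risk produced by the Safety World Model is partitioned into classes, each of which triggers a different degree of intervention.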
4 Conclusion
We see that monitoring at runtime will become necessary to create trust in the
safety of autonomously acting systems. In this work, we have presented a concep-
tual framework by means of a metamodel to define a component for safety mon-
itoring – a Safety Supervisor (SSV) – as an instantiation of the metamodel. The
central part of the metamodel – the Safety World Model – has been developed by
deriving patterns in an analysis of related work on existing Safety Supervisor so-
lutions. The Safety Supervisor Definition and Evaluation Framework (SSV DEF)
shall assist in conducting what-if analyses in the design of a concrete supervisor
component. To validate design alternatives in a cost-efficient way, we focus on the functional behavior of the supervisor and on the development of a functional specification of the SSV, in order to validate its conceptual feasibility and to evaluate design alternatives through simulation. Furthermore,
the conceptual framework can assist in guiding future research in the field and
putting existing work into context. We demonstrated the complexity of defining an SSV by highlighting an initial set of design alternatives for the components that form the SSV. We exemplified the instantiation of the metamodel by defining a supervisor for a platoon driving system and used simulation
results to demonstrate the methodological need to assess the design alternatives
early in the SSV development process. To pave the way for enriching the frame-
work with predefined functionality, we introduced related work in the specific
areas of interest. Following this path will lead from a conceptual framework to
a library-like definition framework for the functionality of a Safety Supervisor
for autonomous systems. We see great potential in such an approach as it allows
considering different design alternatives for a safety monitoring system based on
evidence early in the development process. As the concrete instantiations of the
metamodel elements will naturally be highly domain-specific, we will focus on
the automotive domain and thus on autonomous vehicles in future work. The
conceptual framework presented in this work, however, is domain-independent
and can be implemented for other domains as well.
Another promising direction, staying on the conceptual level, is to consider the
elements of the metamodel not as being static but rather as elements that can
adapt during runtime. Turning the individual models into models-at-runtime
allows the supervisor to adapt to changes in the environment and learn from
experience. Such an open and adaptive SSV can be embedded into existing con-
ceptual frameworks for the safety assurance of open adaptive systems [28].
We also see great potential in the use of an SSV in the development of au-
tonomous vehicles that follow an end-to-end deep learning approach, as demon-
strated in [3]. In the learning process, a well-defined SSV can supervise the
learning and guarantee that, regardless of the precise learning objectives, no
unsafe behavior is learned.
Acknowledgments
The work presented in this paper was created in the context of the Dependability Engineering Innovation for CPS (DEIS) project, which is funded by the European Commission.
References
1. Adler, R., Feth, P., Schneider, D.: Safety engineering for autonomous vehicles.
Workshop on Safety and Security of Intelligent Vehicles (2016)
2. Adler, R., Schaefer, I., Schule, T.: Model-based development of an adaptive vehicle
stability control system. Modellbasierte Entwicklung von eingebetteten Fahrzeug-
funktionen (2008)
3. Bojarski, M., Testa, D.D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J., Zhang, X., Zhao, J., Zieba, K.: End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016)
4. Eidehall, A.: Multi-target threat assessment for automotive applications. IEEE
International Conference on Intelligent Transportation Systems (2011)
5. Endsley, M.R.: Toward a theory of situation awareness in dynamic systems. Human
Factors: The Journal of the Human Factors and Ergonomics Society 37(1) (1995)
6. Feth, P., Bauer, T., Kuhn, T.: Virtual validation of cyber-physical systems. Software Engineering & Management (2015)
7. FKA: Pelops, http://www.fka.de/pdf/pelops whitepaper.pdf
8. Hornung, A., Wurm, K.M., Bennewitz, M., Stachniss, C., Burgard, W.: OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots 34(3) (2013)
9. ISO: 26262: Road vehicles - functional safety (2009)
10. ISO: 22839: Intelligent transport systems - forward vehicle collision mitigation
systems - operation, performance, and verification requirements (2013)
11. Johansson, R., Nilsson, J.: The need for an environment perception block to address all ASIL levels simultaneously. IEEE Intelligent Vehicles Symposium (2016)
12. Jungnickel, R., Kohler, M., Korf, F.: Efficient automotive grid maps using a sensor
ray based refinement process. IEEE Intelligent Vehicles Symposium (2016)
13. Koopman, P., Wagner, M.: Challenges in autonomous vehicle testing and valida-
tion. SAE International Journal of Transportation Safety 4(1), 15–24 (2016)
14. Kuhn, T., Forster, T., Braun, T., Gotzhein, R.: Feral - framework for simulator
coupling on requirements and architecture level. IEEE/ACM International Con-
ference on Formal Methods and Models for Codesign (MEMOCODE) (2013)
15. Kuhnt, F., Pfeiffer, M., Zimmer, P., Zimmerer, D., Gomer, J.M., Kaiser, V.,
Kohlhaas, R., Zollner, M.J.: Robust environment perception for the audi au-
tonomous driving cup. IEEE International Conference on Intelligent Transporta-
tion Systems (2016)
16. Kurd, Z., Kelly, T., McDermid, J., Calinescu, R., Kwiatkowska, M.: Establishing a framework for dynamic risk management in 'intelligent' aero-engine control. International Conference on Computer Safety, Reliability and Security (2009)
17. Lefèvre, S., Vasquez, D., Laugier, C.: A survey on motion prediction and risk assessment for intelligent vehicles. ROBOMECH Journal (2014)
18. Mekki-Mokhtar, A., Blanquart, J.P., Guiochet, J., Powell, D., Roy, M.: Safety
trigger conditions for critical autonomous systems. IEEE Pacific Rim International
Symposium on Dependable Computing (2012)
19. Pegasus: Pegasus research project (2017), http://www.pegasus-projekt.info/en/
20. Rohmer, E., Singh, S.P.N., Freese, M.: V-REP: a versatile and scalable robot simulation framework. IEEE/RSJ International Conference on Intelligent Robots and Systems (2013)
21. Rushby, J.: Runtime certification. Workshop on Runtime Verification (2008)
22. SAE: J3016: Taxonomy and definitions for terms related to driving automation
systems for on-road motor vehicles (2016)
23. Schneider, D., Trapp, M., Papadopoulos, Y., Armengaud, E., Zeller, M., Höfig, K.: WAP: Digital dependability identities. IEEE International Symposium on Software Reliability Engineering (2015)
24. Schreier, M., Willert, V., Adamy, J.: Bayesian, maneuver-based, long-term trajec-
tory prediction and criticality assessment for driver assistance systems. IEEE In-
telligent Vehicles Symposium (2014)
25. Sha, L.: Using simplicity to control complexity. IEEE Software 18(4) (2001)
26. Stolte, T., Bagisch, G., Maurer, M.: Safety goals and functional safety requirements
for actuation systems of automated vehicles. IEEE International Conference on
Intelligent Transportation Systems (2016)
27. Tamke, A., Dang, T., Breuel, G.: A flexible method for criticality assessment in
driver assistance systems. IEEE Intelligent Vehicles Symposium (2011)
28. Trapp, M., Schneider, D.: Safety assurance of open adaptive systems – a survey.
Models@run.time (2014)
29. van Nunen, E., Tzempetzis, D., Koudijs, G., Nijmeijer, H., van den Brand, M.:
Towards a safety mechanism for platooning. IEEE Intelligent Vehicles Symposium
(2016)
30. Wachenfeld, W., Winner, H.: The release of autonomous vehicles. In: Maurer, M.,
Gerdes, C.J., Lenz, B., Winner, H. (eds.) Autonomous Driving. Springer Open
(2015)
31. Wiest, J., Karg, M., Kunz, F., Reuter, S., Kreßel, U., Dietmayer, K.: A probabilistic maneuver prediction framework for self-learning vehicles with application to intersections. IEEE Intelligent Vehicles Symposium (2015)
32. Winner, H., Lotz, F., Bauer, E., Konigorski, U., Schreier, M., Adamy, J., Pfromm,
M., Bruder, R., Lueke, S., Cieler, S.: Proreta 3: comprehensive driver assistance by
safety corridor and cooperative automation. In: Winner, H., Hakuli, S., Lotz, F.,
Singer, C. (eds.) Handbook of Driver Assistance Systems. Springer International
Publishing (2016)