Towards a Taxonomy of Autonomous Systems
Stefan Kugele¹, Ana Petrovska², and Ilias Gerostathopoulos³

¹ Research Institute AImotion Bavaria, Technische Hochschule Ingolstadt, Ingolstadt, Germany
Stefan.Kugele@thi.de
² Department of Informatics, Technical University of Munich, Garching bei München, Germany
ana.petrovska@tum.de
³ Faculty of Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
i.g.gerostathopoulos@vu.nl
Abstract. In this paper, we present a precise and yet concise characterisation of autonomous systems. To the best of our knowledge, there is no similar work, which through a mathematical definition of terms provides a foundation for describing the systems of the future: autonomous software-intensive systems and their architectures. Such systems include robotic taxis as an example of 2D mobility, or even drone/UAV taxis as an example in the field of 3D urban air mobility. The presented terms lead to a four-level taxonomy. We describe the taxonomy levels informally and formally and exemplarily compare them to the degrees of automation as previously proposed by the SAE J3016 automotive standard.

Keywords: Autonomous systems · Taxonomy · Architecture
1 Introduction
The world is changing, and so are systems. Woods [9] describes in his much-
noticed article “Software Architecture in a Changing World” the evolution from
monolithic systems back in the 1980s to intelligent connected systems in the
2020s. We share Woods’s vision for future systems. Today’s connected cyber-
physical systems (CPSs) are not too far away from this vision. The missing link
between the current systems and the autonomous systems that we outline for
the future is twofold: First, systems will be capable of adapting their structure
and behaviour in reaction to changes and uncertainties emerging from their
environment and the systems themselves [4,6,8] – they will be adaptive systems.
Second, they will be able to derive knowledge themselves during their operational
time to infer actions to perform.
Modern CPSs, such as cooperative robotic systems or intelligent transportation systems, are per se distributed. The not-too-distant future will probably bring hitherto unrivalled levels of human-robot interaction. In such scenarios,
machines and humans share the same environment, i.e., operational context [1,4].
Examples for those shared environments are (i) production systems (cf. Industry
4.0) or (ii) intelligent transportation systems with both autonomous and human-
operated mobility. As a result, autonomous behaviour becomes an indispensable
characteristic of such systems.
The lack of shared understanding of the notion of autonomy makes it difficult
for the works across various domains to be compared or even discussed since
the same term is used with different semantics. For example, very often in the
literature, Unmanned Aerial Vehicles (UAVs) are misleadingly referred to as
autonomous, although an end user completely controls their flying operation.
As another example, we take robots operating in a room, which use Adaptive
Monte Carlo Localisation (AMCL) to localise themselves and navigate in the
space. Even though robots localising and navigating independently in the room exhibit some form of autonomy, they cannot be called fully autonomous systems if they often collide or get into deadlocks.
In these situations, human administrators need to intervene in order for the
robots to be able to continue with their operation. The intervention from a user
(i.e., human administrator) directly affects the system’s autonomy.
In response, we present in this paper our first steps towards a unified,
comprehensive, and precise description of autonomous systems. Based on the
level of user interaction and the system's learning capabilities, we distinguish four autonomy levels (A0–A3): non-autonomous, intermittent autonomous, eventually
autonomous, and fully autonomous. Our goal is to offer a precise and concise
terminology that can be used to refer to the different types/levels of autonomous
systems and to present a high-level architecture for each level.
The remainder of this paper is structured as follows. In Sect. 2 we briefly sketch existing efforts to formalise autonomy and explain the formal notation we are using later on. In Sect. 3, we present our taxonomy. Finally, in Sect. 4, we discuss and conclude the paper and outline our further research agenda.
2 Background
2.1 Existing Efforts to Formalise Autonomy
An initial effort in the literature to formally define autonomy was made by Luck
and d'Inverno [5]. In that work, the authors argue that the terms agency and
autonomy are often used interchangeably without considering their relevance and
significance, and in response, they propose a three-tiered principled theory using
the Z specification language. In their three-tiered hierarchy, the authors dis-
tinguish between objects, agents, and autonomous agents. Concretely, in their
definition of autonomy, as a focal point, the authors introduce motivations, i.e., "higher-level non-derivative components related to goals." Namely, according to
their definition, autonomous agents have certain motivations and some potential
of evaluating their own behaviour in terms of their environment and the respec-
tive motivations. The authors further add that the behaviour of the autonomous
agent is strongly determined by and dependent on different internal and environ-
mental factors. Although the authors acknowledge the importance of consider-
ing different internal and environmental (i.e., contextual) factors while defining
autonomy, in their formalisms, the importance of the user in defining autonomy
is entirely omitted. On the contrary, in our paper, we put the strongest emphasis on the user: concretely, on how the involvement of the user in the operation of the system diminishes proportionally to the increase of the system's autonomy. We define levels of a system's autonomy by focusing on the system's function and on how much of the user's logic is "shifted" to the system at the higher levels of autonomy. We further touch on the importance of learning, especially when 1) the systems operate in highly dynamic, uncertain and unknown environments, and 2) the user's control over the system is reduced. To the best of our knowledge,
there is no prior work that defines different levels of autonomy formally.
2.2 Formal Modelling Approach
Within this paper, we use the formal modelling notation Focus introduced by Broy and Stølen [2]. We restrict ourselves to only those concepts necessary for the understanding of this work. In Focus, systems are described by their (i) syntactic and their (ii) semantic interface. The syntactic interface of a system is denoted by (I ▸ O), indicating the sets of input and output channels, I, O ⊆ C, where C denotes the set of all channels. Systems are (hierarchically) (de-)composed by connecting them via channels. A timed stream s of messages m ∈ M, e.g. s = ⟨m1⟩⟨⟩⟨m3 m4⟩ ..., is assigned to each channel c ∈ C. The set of timed streams T(M) over messages M associates with each positive point in time t ∈ ℕ⁺ a sequence of messages from M, formally T(M) = ℕ⁺ → M*. In the case of finite timed streams, T_fin(M) is defined as T_fin(M) = ⋃_{n ∈ ℕ} ([1:n] → M*). In the example given, in the first time slot, m1 is transmitted; in the second time slot, nothing is transmitted (denoted by ⟨⟩); and in the third depicted time slot, two messages m3 m4 are transmitted. Untimed streams over messages M are captured in the set U(M), which is defined as U(M) = (ℕ⁺ → M) ∪ ⋃_{n ∈ ℕ} ([1:n] → M), i.e., each time slot is associated with at most one message and there can be streams of finite length. By C⃗, we denote channel histories given by families of timed streams: C⃗ = (C → T(M)). Thus, every timed history x ∈ C⃗ denotes an evaluation of the channels in C by streams. With #s, we denote the number of arbitrary messages in stream s, and with m#s that of messages m. For timed streams s ∈ T(M), we denote with s↓t ∈ T_fin(M) the finite timed stream until time t. The system's behavioural function (semantic interface) f is given by a mapping of input histories to sets of output histories: f : I⃗ → ℘(O⃗).
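
To make the stream notions above more tangible, the following Python sketch (an illustration added for readability, not part of the Focus formalism itself) models a finite timed stream as a list of per-slot message sequences and shows the counting and prefix operators #s, m#s, and s↓t. All names are illustrative.

from typing import List, Optional, Sequence

# A finite timed stream assigns to each time slot 1..n a (possibly empty)
# sequence of messages; a message is modelled here simply as a string.
TimedStream = List[Sequence[str]]   # index 0 corresponds to time slot 1

def count(s: TimedStream, msg: Optional[str] = None) -> int:
    """#s if msg is None, otherwise m#s: number of (specific) messages in s."""
    return sum(1 for slot in s for m in slot if msg is None or m == msg)

def prefix(s: TimedStream, t: int) -> TimedStream:
    """s↓t: the finite timed stream until (and including) time slot t."""
    return s[:t]

# The example stream from above: m1 in slot 1, nothing in slot 2, m3 m4 in slot 3.
s = [("m1",), (), ("m3", "m4")]
assert count(s) == 3             # #s
assert count(s, "m3") == 1       # m3#s
assert count(prefix(s, 2)) == 1  # only m1 transmitted up to time 2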
3 A Taxonomy for Defining Autonomy
In this section, we first describe how autonomy of a system is related to autonomy
of its functions, then present the main ideas behind our proposed taxonomy, and
finally describe both informally and formally the different levels of autonomy.
3.1 Autonomy as a Property of Individual Functions
CPSs such as modern cars are engineered in a way to deliver thousands of cus-
tomer or user functions. These are functions that are directly controlled by the
user, or at least the user can perceive their effect. Switching on the radio, for
example, results in music being played. This is a customer function. On the other
hand, there are functions, for example, for diagnosis or for offering encryption services, which the customer cannot control directly, whose existence is often entirely unknown, and whose effects are not visible to the user. Considering the above, it is not trivial to classify a complete system as
autonomous or non-autonomous. Instead, autonomy is a property of individual
functions. Let us take a vehicle that drives autonomously. We assume that this
system still offers the functionality to the passengers to choose the radio station
or the playlist themselves. Thus, the CPS operates autonomously in terms of
driving but is still heteronomous in terms of music playback. A similar argu-
mentation applies, for example, to vehicles that are equipped with automation
functions of varying degrees of automation, as considered in the SAE J3016
standard. For this system, as well as for other multi-functional systems, it is not
meaningful to conclude from the autonomy of a single function, the autonomy
or heteronomy of the whole system. Therefore, the commonly used term of an
autonomous vehicle is too imprecise since the term autonomy refers exclusively
to its driving capabilities. Hence, also the SAE proposes not to speak about
“autonomous vehicles” but instead about “level [3, 4, or 5] Automated Driving
System-equipped vehicles” (cf. [7], §7.2).
The only two statements that can be made with certainty are the following:
(1) if all functions of a system are autonomous, then the system can also be called
autonomous, and (2) if no function is autonomous, then certainly the system
is not autonomous. Anything in between cannot be captured with precision.
Single-functional systems are a special case. In such systems, the autonomy or
heteronomy of the single function is propagated to the system. For the sake of
illustrating our taxonomy on a simpler case, we will focus on single-functional
systems in the rest of the paper.
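
Statements (1) and (2) above can be read as a partial decision procedure: only the all-autonomous and the no-autonomous cases allow a verdict about the whole system. The following Python sketch (our illustration; the function and its name are not part of the paper's formalism) makes this explicit.

from typing import Iterable, Optional

def system_autonomy(functions_autonomous: Iterable[bool]) -> Optional[bool]:
    """Statements (1) and (2): True if all functions are autonomous,
    False if none is, and None for anything in between, which cannot
    be captured with precision."""
    flags = list(functions_autonomous)
    if flags and all(flags):
        return True
    if not any(flags):
        return False
    return None

# An "autonomous vehicle" with autonomous driving but user-controlled radio:
print(system_autonomy([True, False]))   # -> None (no system-level verdict)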
3.2 Main Ideas Behind the Taxonomy for Autonomy
Our first main idea is to define autonomy levels of a system by focusing on the
system’s function and specifically by looking at the level of interaction that a user
has with the system. Intuitively, the more user interaction is in place, the less
autonomous the system is. “More user interaction” can mean both more frequent
interaction and more fine-grained interaction. Actually, these two characteristics
very often go hand in hand: consider, for instance, the case of a drone: it can be
controlled with a joystick with frequent and fine-grained user interaction (lower
autonomy); it can also be controlled via a high-level target-setting routine with
less frequent and more coarse-grained user interaction (higher autonomy).
The second main idea behind our taxonomy is to distinguish between systems
that learn and ones that do not learn. By learning, we mean that systems can
observe both their context and user actions and identify behavioural patterns
(e.g. rules or policies) in the observed data (e.g. by training and using a classifier).
Such patterns can be used at run-time to reduce the amount of user interaction
with the system gradually. Hence, the more capable a system is of learning
behavioural patterns, the more autonomous it can become.
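
A minimal sketch of this learning idea is given below (our own illustration, assuming scikit-learn is available; the context features, actions, and the confidence threshold are invented for the example): the system records (context, user action) pairs, fits a classifier, and falls back to the user only when the learned policy is not confident.

# Sketch of learning behavioural patterns from observed user actions.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical observations of a robotic vacuum cleaner:
# context = (floor hardness, obstacle density), action chosen by the user.
contexts     = [[0.9, 0.1], [0.2, 0.7], [0.8, 0.2], [0.1, 0.8]]
user_actions = ["fast", "slow", "fast", "slow"]

policy = DecisionTreeClassifier().fit(contexts, user_actions)

def decide(context, ask_user):
    """Use the learned policy when confident; otherwise ask the user."""
    confidence = policy.predict_proba([context]).max()
    if confidence >= 0.9:                # assumed confidence threshold
        return policy.predict([context])[0]
    return ask_user(context)             # residual user interaction (cf. A1/A2)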
Finally, the third main idea is to define a system as autonomous within
an assumed operational context. The assumed context can be narrow (e.g. a
drone operating in a wind range of 0–4 Beaufort) or very broad (e.g. a drone
operating under any weather conditions). The specification of the context can
also be uncertain or incomplete, i.e., the designers of the system might not be
able to anticipate and list all possible situations that may arise under a specific
context assumption. In any case, the broader the assumed context, the harder it becomes for a system to reach high autonomy.
3.3 Taxonomy Levels
Fig. 1. Taxonomy levels: Non-Autonomous (A0), Intermittent Autonomous (A1), Eventually Autonomous (A2), Fully Autonomous (A3).
The four levels of autonomous systems in our taxonomy are shown in Fig. 1. Figure 2 shows the interaction between the user u, the context c, and the system s, as well as the (very high level) architecture of the system at each level of the taxonomy.
The lowest level, A0, refers to systems
that are not autonomous. For these systems,
user input is needed at all times for controlling their operation. Examples are
using the radio in a car or controlling the movement of a robot via a remote
controller. As can be seen in Fig. 2(a), on this level, the system s (i.e., the system
function sf) is completely controlled by the user and does not assume any input
from the context (although this input might be already taken indirectly into
account by the user). Note that the function sf might internally do something in
the background that does not depend on the user input. A user can control the
movement and trajectory of a drone; however, each drone internally provides
attitude stabilisation that is not dependent on user input but is part of this
system function.
The next level, A1, refers to systems that are intermittent autonomous: they
can operate autonomously in-between two consecutive user inputs. In this case,
the system can receive user input periodically or sporadically. As shown in
Fig. 2(b), part of the logic of the user is shifted to the system as a control
logic cl, which interacts with the system function sf. Input to the control logic
can also be provided by the context. For instance, consider the movement of a
robotic vacuum cleaner: the system perceives its environment through its sen-
sors (obtains context input) and operates autonomously until it gets stuck (e.g.
because of an obstacle or a rough surface); at this point, a user is required to
intervene to restart the robot or point it in the right direction.
Level A2, shown in Fig. 2(c), refers to eventually autonomous systems: here, the user interaction reduces over time until the system reaches a point where
Fig. 2. From user-operation to autonomy: (a) A human user u controls the system s (i.e., the system's function sf). (b) The control logic is divided between the user u′ and the system's control logic cl. (c) The control logic cl of the system could be enhanced with a learning component to better address, e.g., changes in the context c. (d) The control logic cl, with the usually necessary learning component, is entirely performed by the system itself.
it does not require any user interaction (user control). For this to happen, the
system's control logic cl is usually enhanced and equipped with a learning com-
ponent that is able to identify the user interaction patterns associated with
certain system and context states. An example is a robotic vacuum cleaner that
is able to learn how to move under different floor types (e.g. faster or slower)
and avoid crashes that would necessitate user interaction. Clearly, the degree and
sophistication of monitoring and reasoning on context changes and user actions
is much higher than in intermittent autonomous systems.
Finally, level A3 refers to fully autonomous systems, where no user input
is needed (except the provision of initial strategic or goal-setting information),
as it can be seen in Fig. 2(d). Systems on this level of autonomy can observe
and adjust their behaviour to any context by potentially integrating learning in
their control logic cl. Please note that the necessity and the sophistication of the learning are proportionate to 1) the complexity and the broadness of the context, and 2) the specification of the context in the system, as previously explained in Sect. 3.2. For instance, a robotic vacuum cleaner can move in a fully autonomous way when its context is simple and can be fully anticipated (e.g. a prescribed environment that contains only certain floor and obstacle types).
To achieve this, the system needs to be equipped with sensing and run-time
reasoning capabilities to adjust its movement behaviour and remain operational
without human interaction. However, the difficulty for the same system to remain
fully autonomous increases proportionally to the complexity of its context. For
example, the context can be dynamic in ways that could not be anticipated,
resulting in uncertain and incomplete context specifications. Since the user on
this level is entirely out of the loop, this would require new, innovative, and more
sophisticated learning methods in the fully autonomous systems.
We note that one can also imagine relatively simple systems without context
impact that are configured once or not at all by a user and then work without
any user interaction or learning (e.g. an alarm clock); while these systems also
technically fall under A2 or A3, they are less complex and sophisticated.
3.4 Formalisation of Taxonomy Levels
The intuitively described taxonomy levels are specified mathematically in the
following. We denote with u the input stream from the user to the system.
Definition 1 (Non-autonomous, A0). A system is called non-autonomous, iff it solely depends on user inputs: ∀t ∈ ℕ⁺ : u(t) ≠ ⟨⟩.
If user intervention or input is no longer needed at every point in time but still becomes necessary repeatedly, we speak of intermittent autonomy.
Definition 2 (Intermittent Autonomous, A1). A system is called intermittent autonomous, iff user interaction is necessary from time to time (periodic or sporadic), i.e.: ∀t ∈ ℕ⁺ ∃ t′, t′′ ∈ ℕ⁺, t′, t′′ > t, t′ ≠ t′′ : u(t′) ≠ ⟨⟩ ∧ u(t′′) = ⟨⟩.
We emphasised that learning is essential in order to reach even higher levels of autonomy. By learning, the system converges to a point t′ after which no user interaction is needed anymore. Such systems are called eventually autonomous.
Definition 3 (Eventually Autonomous, A2). A system is called eventually autonomous, iff after time t′ ∈ ℕ⁺ no user input or intervention is needed anymore to fulfil the mission goals: ∃t′ ∈ ℕ⁺ : ∀t > t′ : u(t) = ⟨⟩.
In other words, only a finite number n of messages is transmitted up to t′ and no further messages will be transmitted beyond that time: #u↓t′ = n, with n ∈ ℕ. The smaller t′ is, the earlier the point of autonomy is reached. If this is already the case from the beginning, we speak of fully autonomous systems.
Definition 4 (Fully Autonomous, A3). A system is called fully autonomous, iff no user interaction or intervention is necessary at all, i.e., ∀t ∈ ℕ⁺ : u(t) = ⟨⟩.
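
Since Definitions 1–4 quantify over all points in time, they cannot be decided from a finite observation; still, the following Python sketch (our illustration) shows how a finite prefix of the user stream u can be classified approximately by the pattern of empty and non-empty time slots.

from typing import List, Sequence

TimedStream = List[Sequence[str]]   # one (possibly empty) message sequence per slot

def autonomy_level(u: TimedStream) -> str:
    """Approximate A0-A3 classification of a finite prefix of the user stream u.
    Any judgement based on a finite prefix is only an approximation."""
    nonempty = [len(slot) > 0 for slot in u]
    if all(nonempty):
        return "A0: non-autonomous (user input in every slot)"
    if not any(nonempty):
        return "A3: fully autonomous (no user input at all)"
    last = max(i for i, x in enumerate(nonempty) if x)
    if last < len(u) - 1:
        return "A2: eventually autonomous (no user input after slot %d)" % (last + 1)
    return "A1: intermittent autonomous (user input from time to time)"

print(autonomy_level([("m",), (), ("m",), (), (), ()]))   # -> A2 on this prefix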
Eventual and full autonomy make strict demands on the ability to precisely
perceive and analyse the context, and to draw conclusions and learn from it. However, in many respects, it will probably not be possible to achieve them in the foreseeable future unless the operational context is highly restricted. Reasons for this are manifold and include the limited ability to fully perceive and understand the context and to be prepared for all conceivable operational situations. Therefore, let us now consider intermittent autonomy. Assume the case that in every other time step (e.g. every second minute) there is user interaction on an infinite timed stream, see u1 below. This results in an infinite number of interactions. In another case, there could be one interaction every millionth minute, as shown in u2. These two cases are equivalent, or indistinguishable, by definition.

u1 = ⟨m⟩⟨⟩⟨m⟩⟨⟩ ... ⟨m⟩⟨⟩ ...,    u2 = ⟨m⟩⟨⟩^(10^6−1)⟨m⟩⟨⟩^(10^6−1) ... ⟨m⟩⟨⟩^(10^6−1) ...

This is due to Cantor's concept of infinity. Intuitively, however, a system that depends on user input every two minutes acts less autonomously than a system that can operate for almost two years (1.9 years in u2) independently. Therefore, intermittent autonomy extends from "almost" no autonomy towards "almost"
eventual autonomy. The classification in this spectrum can be made more precise if we take a closer look at the frequency of user input. Because of the above discussion on infinity, we only consider prefixes of finite length of (in)finite streams, i.e., u↓t. Let α ∈ (0, 1) be the ratio between the number of time slots without user input and the length of the interval [1; t], i.e., α = ⟨⟩#u↓t / t. The closer α gets to one, the more autonomous the system is.
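
As a small illustration of this ratio (our own sketch, with the convention that α is the fraction of time slots without user input in [1; t]), the two streams u1 and u2 from above can be compared on a finite prefix:

def alpha(u, t):
    """Fraction of the time slots 1..t without user input."""
    return sum(1 for slot in u[:t] if len(slot) == 0) / t

horizon = 10**6   # one million minutes, roughly 1.9 years
u1 = [("m",) if i % 2 == 0 else () for i in range(horizon)]       # input every other slot
u2 = [("m",) if i % 10**6 == 0 else () for i in range(horizon)]   # one input per million slots

print(alpha(u1, horizon))   # 0.5      -> barely beyond non-autonomy
print(alpha(u2, horizon))   # 0.999999 -> close to eventual autonomy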
4 Discussion and Conclusion
Comparison to SAE Levels (L0–L5) [7]. No driving automation (L0) refers to A0, no autonomy. L1/2 (driver assistance, partial driving automation) can be defined with the notion of intermittent autonomy, A1. Conditional driving automation (L3) applies for α ≈ 1 in a limited operational context such as highway autopilots. Finally, high driving automation (L4) and full driving automation (L5) are captured by our level A3, full autonomy. For both, different assumptions w.r.t. the context or the operational design domain need to be made.
Future Extensions. It would be relevant to investigate the relation between the
higher levels of autonomy and self-* properties (cf. [3]) of the systems, e.g.
self-adaptation. In our current understanding, adaptivity is a precondition for a
higher autonomy since it enables the system to deal with various unanticipated
changes and uncertainties; however, a clear distinction and definition of these
two notions is still open. Another open issue refers to the notion of messages
exchanged in intermittent autonomous systems. We have tried to distinguish
between two intermittent autonomous systems based on their frequency of mes-
sage exchange, but the expressiveness of messages is also important. Not every
message has to have the same “information content”. It is a matter for future
research and discussion whether this point can be captured using, e.g. Shannon’s
definition of information content (a limitation of this approach is the assump-
tion of statistical independence and idempotence of messages). To what extent, or when, this is a permissible limitation is an open question.
Conclusion. In this paper, we proposed a taxonomy that supports the formal
specification of different levels of autonomous systems. We have also proposed a
high-level architecture for each level to exemplify the user, context, and system
interaction. Our goal is to propose a terminology that, if broadly accepted, can
be used for more effective communication and comparison of autonomy levels
in software-intensive systems that goes beyond the well-known SAE J3016 for
automated driving.
References
1. Broy, M., Leuxner, C., Sitou, W., Spanfelner, B., Winter, S.: Formalizing the notion
of adaptive system behavior. In: ACM Symposium on Applied Computing (SAC),
pp. 1029–1033. ACM (2009)
2. Broy, M., Stølen, K.: Specification and Development of Interactive Systems: Focus on Streams, Interfaces, and Refinement. Monographs in Computer Science, Springer, New York (2001). https://doi.org/10.1007/978-1-4613-0091-5
3. Kephart, J.O., Chess, D.M.: The vision of autonomic computing. Computer 36(1),
41–50 (2003)
4. de Lemos, R., et al.: Software engineering for self-adaptive systems: a second research roadmap. In: de Lemos, R., Giese, H., Müller, H.A., Shaw, M. (eds.) Software Engineering for Self-Adaptive Systems II. LNCS, vol. 7475, pp. 1–32. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-35813-5_1
5. Luck, M., d’Inverno, M.: A formal framework for agency and autonomy. In: First
International Conference on Multiagent Systems, pp. 254–260. The MIT Press
(1995)
6. Salehie, M., Tahvildari, L.: Self-adaptive software: landscape and research chal-
lenges. ACM Trans. Auton. Adapt. Syst. (TAAS) 4(2), 1–42 (2009)
7. Society of Automotive Engineers: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles, SAE J3016 (2018)
8. Weyns, D.: Software engineering of self-adaptive systems. In: Cha, S., Taylor, R., Kang, K. (eds.) Handbook of Software Engineering, pp. 399–443. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-00262-6_11
9. Woods, E.: Software architecture in a changing world. IEEE Softw. 33(6), 94–97
(2016)