The Concept of [Friendliness] in Robotics: Ethical Challenges

Authors: Maria Isabel Aldinhas Ferreira

Abstract

Socially interactive robots differ from most other technologies in that they are embodied, autonomous, and mobile technologies capable of navigating, sensing, and interacting in social environments in a human-like way. By displaying behaviors that people identify as sentient, such as appearing to recognize people's faces, making eye contact, and responding socially by exhibiting emotions, robots create the illusion of interaction with a living being capable of affective reciprocity. The present paper discusses the ethical issues emerging from this context by analyzing the concept of [friendliness].
Metadata of the book that will be visualized in
SpringerLink
Publisher Name Springer International Publishing
Publisher Location Cham
Series ID 6259
Series Title Intelligent Systems, Control and Automation: Science and Engineering
Book ID 462563_1_En
Book Title Robotics and Well-Being
Book DOI 10.1007/978-3-030-12524-0
Copyright Holder Name Springer Nature Switzerland AG
Copyright Year 2020
Corresponding Editor Family Name Aldinhas Ferreira
Particle
Given Name Maria Isabel
Suffix
Division Faculdade de Letras. Universidade de Lisboa
Organization Centro de Filosofia da Universidade de Lisboa
Address Lisbon, Portugal
Email isabelferreira@letras.ulisboa.pt
Corresponding Editor Family Name Silva Sequeira
Particle
Given Name João
Suffix
Division Institute for Systems and Robotics
Organization Instituto Superior Técnico
Address Lisbon, Portugal
Email joao.silva.sequeira@tecnico.ulisboa.pt
Corresponding Editor Family Name Virk
Particle
Given Name Gurvinder S.
Suffix
Division
Organization Innovative Technology and Science Ltd
Address Cambridge, UK
Email gurvinder.virk@innotecuk.com
Corresponding Editor Family Name Tokhi
Particle
Given Name Osman
Suffix
Division Dept of Electrical and Electronic Eng.
Organization London South Bank University
Address London, UK
Email tokhim@lsbu.ac.uk
Corresponding Editor Family Name Kadar
Particle
Given Name Endre E.
Suffix
Division Department of Psychology
Organization University of Portsmouth
Address Portsmouth, UK
Email kadar_e@yahoo.co.uk
Intelligent Systems, Control and Automation: Science and Engineering
Volume 95

Series editor
Professor S. G. Tzafestas, National Technical University of Athens, Greece
Editorial Advisory Board
Professor P. Antsaklis, University of Notre Dame, IN, USA
Professor P. Borne, Ecole Centrale de Lille, France
Professor R. Carelli, Universidad Nacional de San Juan, Argentina
Professor T. Fukuda, Nagoya University, Japan
Professor N. R. Gans, The University of Texas at Dallas, Richardson, TX, USA
Professor F. Harashima, University of Tokyo, Japan
Professor P. Martinet, Ecole Centrale de Nantes, France
Professor S. Monaco, University La Sapienza, Rome, Italy
Professor R. R. Negenborn, Delft University of Technology, The Netherlands
Professor A. M. Pascoal, Institute for Systems and Robotics, Lisbon, Portugal
Professor G. Schmidt, Technical University of Munich, Germany
Professor T. M. Sobh, University of Bridgeport, CT, USA
Professor C. Tzafestas, National Technical University of Athens, Greece
Professor K. Valavanis, University of Denver, Colorado, USA
More information about this series at http://www.springer.com/series/6259
Maria Isabel Aldinhas Ferreira · João Silva Sequeira · Gurvinder S. Virk ·
Osman Tokhi · Endre E. Kadar
Editors

Robotics and Well-Being
Editors

Maria Isabel Aldinhas Ferreira
Faculdade de Letras da Universidade de Lisboa
Centro de Filosofia da Universidade de Lisboa
Lisbon, Portugal

João Silva Sequeira
Institute for Systems and Robotics
Instituto Superior Técnico
Lisbon, Portugal

Gurvinder S. Virk
Innovative Technology and Science Ltd
Cambridge, UK

Osman Tokhi
Department of Electrical and Electronic Engineering
London South Bank University
London, UK

Endre E. Kadar
Department of Psychology
University of Portsmouth
Portsmouth, UK
ISSN 2213-8986          ISSN 2213-8994 (electronic)
Intelligent Systems, Control and Automation: Science and Engineering
ISBN 978-3-030-12523-3          ISBN 978-3-030-12524-0 (eBook)
https://doi.org/10.1007/978-3-030-12524-0

Library of Congress Control Number: 2019931962
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

In the twenty-first century, economists¹ have been claiming that the metrics for assessing the state of development of societies should be defined not in terms of their GDP,² but in terms of their citizens' individual well-being. The collective state of well-being of a community must reflect what the OECD³ defines as the quality of life of every single individual. This involves more than simple access to material resources, such as jobs, income, and wealth, but comprehends physical and mental health, emotional satisfaction, and self-realization in a harmonious and sustainable environmental context. To achieve this, it is necessary that every individual has equitable and fair access to education and lifelong learning in order to develop and update the knowledge, skills, attitudes, and values that enable people to contribute to and benefit from an inclusive and sustainable future where technology, namely automation and artificial intelligence, will play an important role. This education, this lifelong learning, will allow not only for the development of all types of literacy and skills necessary in the contemporary and future world, but simultaneously for the definition of an ethical consciousness toward technology. The rise of this collective and individual consciousness will be reflected in the attitudes and actions of all stakeholders: researchers, industry, consumers—allowing for a real shift of paradigm where technological revolutions are not associated with a significant degree of suffering by many but really contribute to the well-being of one and of all. This book aims to be a modest contribution to the emergence of that consciousness.

¹ Cf. Stiglitz et al. (2009) Report by the Commission on the Measurement of Economic Performance and Social Progress. Available at: http://ec.europa.eu/eurostat/documents/118025/118123/Fitoussi+Commission+report
² Gross domestic product.
³ For well over a decade, the OECD World Forums on Statistics, Knowledge, and Policy have been pushing forward the boundaries of well-being measurement and policy. These Forums have contributed significantly to an ongoing shift in paradigm that identifies people's well-being and inclusive growth as the ultimate goals in the definition of policies and collective action.
In Chapter "Technological Development and Well-Being", Maria Isabel Aldinhas Ferreira points out that scientific and technological endeavors have always been present throughout the developmental history of humankind, from the most primitive and rudimentary forms of tool making to the present-day sophistication of intelligent autonomous systems. The digital revolution and the 4IT revolution, with all their galaxy of intelligent artifacts, networked virtual connections, and hybrid environments, are, in fact, the result of the accumulated experience and knowledge of all precedent generations. Ferreira claims that the most profound difference between the present technological stage of development and the previous ones is the ontological transformation of the concept of [tool]. Whereas, previously, tools were viewed as body extensions that human beings somehow manipulated or just used, the present ones are strikingly distinct due to their autonomy, their potential total independence from human control. According to the author, the duality present in all intelligent systems and the possible disruption these may cause in society call for the emergence of a collective ethical consciousness where human dignity and well-being play the central role.
In Chapter "The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems", Raja Chatila and John Havens present the mission, goals, and achievements of an initiative that was officially launched in April 2016 as a program of the IEEE, whose tagline is "Advancing technology for humanity". The mission of The IEEE Global Initiative is to ensure that every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity. As the authors point out, technologies have been invented since time immemorial. They are not neutral. They have a purpose and serve different objectives, good or bad. The reflection on the ethical, social, and legal consequences of A/IS has gained worldwide momentum on diverse questions such as the impact on jobs and the economy, the use of personal data, privacy, intrusion, surveillance, transparency, the explicability of algorithmic decisions, accountability, and responsibility for autonomous/learned machine decisions. In some applications where human–machine interaction uses emotion detection and expression, questions on cognitive and affective bonds with robots are raised, as well as the moral impact of specific applications such as sexbots. In medical applications, the border between rehabilitation and augmentation of humans becomes unclear. Anthropomorphism and android robots challenge human identity and human dignity, leading to reflections on the status of robots in human society. Specific applications and usage, such as autonomous weapons systems, are subject to debates in international organizations such as the United Nations. However, as Chatila and Havens note, despite these concerns there is much potential for A/IS to increase individual and collective well-being. To fully benefit from this potential, it is necessary to go beyond prioritizing exponential growth in developing these applications and to develop them in full respect of human values.

According to the authors, The IEEE Global Initiative provides the opportunity to bring together multiple voices in the autonomous and intelligent systems communities to identify and find consensus on the ethical, legal, and social issues related to these systems.
In December 2016, the IEEE Global Initiative produced Version 1 of Ethically Aligned Design (EAD), a document identifying issues and providing recommendations in key areas pertaining to A/IS. Version 2 of Ethically Aligned Design, featuring new sections and recommendations, was released in December 2017, and the third version will be published in early 2019. In addition to Ethically Aligned Design, fourteen standardization projects have been approved by the IEEE Standards Association. The IEEE Global Initiative is also developing the Ethically Aligned Design University Consortium (EADUC), which is set to launch in February 2019. The focus of EADUC is on developing and promoting the issues, recommendations, and themes outlined in Ethically Aligned Design, along with the A/IS ethics-oriented curriculum already being taught by Member Universities.
In Chapter "Humans and Robots: A New Social Order in Perspective?", João S. Sequeira discusses the eventual creation of a new social order with the progressive introduction of robots into the social tissue. The author notes that, as the number of robots interacting with people grows, it seems natural that some adjustments occur within societies. Though the extent of such adjustments is still unclear and unpredictable, the current media frenzy on the effects of technology on societies, with a special emphasis on social robotics, is driving research to account for unexpected scenarios. The adjustments may include changes in the formation of social hierarchies, in which humans must take orders from robots, naturally triggering fears of dominance. The paper adopts a dynamic systems view of social environments, identifying stability with social order. Under relaxed assumptions, societies can be represented by networks of non-smooth systems. The paper's thesis is that by integrating a robot in a social environment in small steps (under realistic expectations), stability is preserved and hence also the social order. Disturbing social hierarchies may indeed lead to a different equilibrium, that is, to a new social order.
Vladimir Estivill-Castro, in Chapter "Game Theory Formulation for Ethical Decision Making", addresses the complexity of decision making in the context of autonomous vehicles and discusses the contribution of Game Theory, a mathematical framework to study conflict and cooperation between rational agents. This interactive decision theory models situations under the formalism of a game. Formally, a game consists of a set of participants named players, a set of strategies (the choices) for each player, and a specification of payoffs (or utilities) for each combination of strategies.
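In standard textbook notation, such a normal-form game can be written compactly as shown below (the symbols are ours, for illustration, and not necessarily those used by Estivill-Castro):

\[
G = \bigl( N,\; \{S_i\}_{i \in N},\; \{u_i\}_{i \in N} \bigr),
\]

where $N = \{1, \dots, n\}$ is the set of players, $S_i$ is the set of strategies available to player $i$, and $u_i : S_1 \times \cdots \times S_n \to \mathbb{R}$ is the payoff (utility) function assigning to each combination of strategies the utility obtained by player $i$.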
In Chapter "Beyond the Doctrine of Double Effect: A Formal Model of True Self-sacrifice", Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh, and Matthew Peveler present the doctrine of double effect (DDE), an ethical principle that can account for human judgment in moral dilemmas: situations in which all available options have large good and bad consequences. The DDE was previously formalized in a computational logic that can be implemented in robots. DDE, as an ethical principle for robots, is attractive for a number of reasons: (1) empirical studies have found that DDE is used by untrained humans; (2) many legal systems use DDE; and, finally, (3) the doctrine is a hybrid of the two major opposing families of ethical theories (consequentialist/utilitarian theories vs. deontological theories).
In spite of all its attractive features, the authors point out that DDE does not fully account for human behavior in many ethically challenging situations. Specifically, standard DDE fails in situations wherein humans have the option of self-sacrifice. In this chapter, the authors present an enhancement of the DDE formalism to handle self-sacrifice.

In Chapter "Mind the Gap: A Theory Is Needed to Bridge the Gap Between the Human Skills and Self-driving Cars", Endre Kadar claims that in designing robots for safe and ethically acceptable interaction with humans, one needs to understand human behavior control, including social interaction skills. A popular research area for mixed control is the development of self-driving cars that are able to safely participate in normal traffic. Vehicular control should be ethical, that is, human-like, to avoid confusing pedestrians, passengers, or other human drivers. The present paper provides insights into the difficulties of designing autonomous and mixed vehicle control by analyzing drivers' performance in curve negotiation. To demonstrate the discrepancy between human and automated control systems, biological and artificial design principles are contrasted.

The paper concludes by discussing the theoretical and ethical consequences of our limited understanding of human performance and by highlighting the gap between the design principles of biological and artificial/robotic performance.
Michael P. Musielewicz, in Chapter "Who Should You Sue When No-One Is Behind the Wheel? Difficulties in Establishing New Norms for Autonomous Vehicles in the European Union", discusses the problem of liability within the present regulatory framework of the European Union. The goal of this essay is to provide a sketch of the problems related to liability and its legal framework as found within the European Union and to examine a solution currently under consideration by officials in the EU, that is, the possibility of legal personhood for autonomous vehicles. The author first concludes that the current regulatory field is lacking, and then contrasts the advantages and disadvantages of such a scheme.

In Chapter "Robotics, Big Data, Ethics and Data Protection: A Matter of Approach", Nicola Fabiano points out that in Europe the protection of personal data is a fundamental right. Within this framework, the relationship among robotics, artificial intelligence (AI), machine learning (ML), data protection, and privacy has recently been receiving particular attention, the most important topics related to data protection and privacy being Big Data, the Internet of Things (IoT), liability, and ethics. The present paper describes the main legal issues related to privacy and data protection, highlighting the relationship among Big Data, robotics, ethics, and data protection, and trying to address them correctly through the principles of the European General Data Protection Regulation (GDPR).
Socially interactive robots differ from most other technologies in that they are embodied, autonomous, and mobile technologies capable of navigating, sensing, and interacting in social environments in a human-like way. By displaying behaviors that people identify as sentient, such as appearing to recognize people's faces, making eye contact, and responding socially by exhibiting emotions, robots create the illusion of interaction with a living being capable of affective reciprocity. In "The Concept of Friendliness in Robotics: Ethical Challenges", Maria Isabel
Aldinhas Ferreira discusses the ethical issues emerging from this context by analyzing the concept of [friendliness].
In Chapter "Ethics, the Only Safeguard Against the Possible Negative Impacts of Autonomous Robots?", Rodolphe Gélin considers the case of companion robots, focusing particularly on assistance to elderly people. The author claims that even if it were possible to implement ethical judgment in a robotic brain, it would probably not be a good solution, as we cannot ask the robot to be morally responsible for what it is doing. The question of responsibility in case of an accident involving a robot is the subject of the third section of this paper.

In Chapter "AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma", Bertram Malle, Stuti Thapa Magar, and Matthias Scheutz point out that even though morally competent artificial agents have yet to emerge in society, we need insights from empirical science to anticipate how people will respond to such agents and to explore how these responses should inform agent design. Three survey studies presented participants with an artificial intelligence (AI) agent, an autonomous drone, or a human drone pilot facing a moral dilemma in a military context: launching a missile strike on a terrorist compound but risking the life of a child, or canceling the strike to protect the child but risking a terrorist attack. Seventy-two percent of respondents were comfortable making moral judgments about the AI in this scenario and fifty-one percent were comfortable making moral judgments about the autonomous drone. These participants applied the same norms to the artificial agents and the human drone pilot (more than 80% said that the agent should launch the missile). However, people ascribed different patterns of blame to humans and machines as a function of the agent's decision on how to solve the dilemma.
Chapter "Putting People and Robots Together in Manufacturing: Are We Ready?" addresses the problem of human–robot collaboration in working contexts. Sarah R. Fletcher, Teegan L. Johnson, and Jon Larreina point out that there is a need to define new ethical and safety standards for putting people and robots together in manufacturing, but that to do this we need empirical data to identify requirements. This chapter provides a summary of the current state, explaining why the success of augmenting human–robot collaboration in manufacturing relies on better consideration of human requirements, and describing current research work in the European A4BLUE project to identify this knowledge. Initial findings confirm that ethical and psychological requirements that may be crucial to industrial human–robot applications are not yet being addressed in safety standards or by the manufacturing sector.

In Chapter "A Survey on the Pain Threshold and Its Use in Robotics Safety Standards", A. Mylaeus, A. Vempati, B. Tranter, R. Siegwart, and P. Beardsley point out that traditional safety standards in robotics have emphasized separation between humans and robots, but physical contact is now becoming part of a robot's normal function. This motivates new requirements, beyond safety standards that deal with the avoidance of contact and the prevention of physical injury, to handle the situation of expected contact combined with the avoidance of pain. This paper reviews the physics and characteristics of human–robot contact and summarizes a
set of key references from the pain literature, relevant for the definition of robotics safety standards.

In Chapter "Lisbon Robotics Cluster: Vision and Goals", Pedro Lima, André Martins, Ana S. Aníbal, and Paulo S. Carvalho present the Lisbon Robotics Cluster (LRC), an initiative of the Lisbon City Council to federate and present under a common brand companies producing robot systems, end users (namely public institutions), existing research centers from several higher education institutions in the Lisbon area, and high schools. In addition to the new brand, the LRC will be the starting point for the formal establishment of a network of strategic partners, including the creation of an incubator of companies, a structure of support and dynamization of the robotics cluster in the municipality, a living laboratory, and a network of hot spots throughout the city—spaces for testing and experimentation of robotics equipment and products, e.g., marine robots, drones and aerial robotics, and mobility equipment, developed by research centers and companies—open to professionals and in some cases to the general public. The LRC intends to leverage research, development, and innovation in the Lisbon area, through the attraction of funding for projects and the identification of problems.
Lisbon, Portugal
July 2018

Maria Isabel Aldinhas Ferreira
João Silva Sequeira
Gurvinder S. Virk
Endre E. Kadar
Osman Tokhi
Acknowledgements

Lisbon University, Portugal, namely the Center of Philosophy and Instituto Superior Técnico, was the host and organizing institution of the International Conference on Robot Ethics and Safety Standards (ICRESS 2017), the event that originated this volume.

The CLAWAR Association, UK, encouraged the realization of ICRESS 2017 and provided support at multiple levels.

Ciência Viva, Portugal, at the Pavilhão do Conhecimento in Lisbon, provided the fantastic venue for ICRESS 2017 and gave full logistics support.

The industry sponsors were SoftBank Robotics, France, through Rodolphe Gélin and Petra Koudelkova-Delimoges, and IDMind, Portugal, through Paulo Alvito. Their willingness and availability for debating fundamental topics from the industry perspective were very important for the success of the event.

Significado Lógico sponsored ICRESS 2017 by providing a splendid Web site.

Lisbon City Hall and Lisbon Tourism, Portugal, sponsored the social programme of the event.
Contents

Technological Development and Well-Being ... 1
Maria Isabel Aldinhas Ferreira

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems ... 11
Raja Chatila and John C. Havens

Humans and Robots: A New Social Order in Perspective? ... 17
João Silva Sequeira

Game Theory Formulation for Ethical Decision Making ... 25
Vladimir Estivill-Castro

Beyond the Doctrine of Double Effect: A Formal Model of True Self-sacrifice ... 39
Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh and Matthew Peveler

Mind the Gap: A Theory Is Needed to Bridge the Gap Between the Human Skills and Self-driving Cars ... 55
E. E. Kadar

Who Should You Sue When No-One Is Behind the Wheel? Difficulties in Establishing New Norms for Autonomous Vehicles in the European Union ... 67
Michael P. Musielewicz

Robotics, Big Data, Ethics and Data Protection: A Matter of Approach ... 79
Nicola Fabiano

The Concept of Friendliness in Robotics: Ethical Challenges ... 89
Maria Isabel Aldinhas Ferreira
Ethics, the Only Safeguard Against the Possible Negative Impacts of Autonomous Robots? ... 99
Rodolphe Gelin

AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma ... 111
Bertram F. Malle, Stuti Thapa Magar and Matthias Scheutz

Putting People and Robots Together in Manufacturing: Are We Ready? ... 135
Sarah R. Fletcher, Teegan L. Johnson and Jon Larreina

A Survey on the Pain Threshold and Its Use in Robotics Safety Standards ... 149
A. Mylaeus, A. Vempati, B. Tranter, R. Siegwart and P. Beardsley

Lisbon Robotics Cluster: Vision and Goals ... 157
Pedro U. Lima, André Martins, Ana S. Aníbal and Paulo S. Carvalho

Index ... 169
Contributors

Maria Isabel Aldinhas Ferreira Faculdade de Letras da Universidade de Lisboa, Centro de Filosofia da Universidade de Lisboa, Lisbon, Portugal; Instituto Superior Técnico, Institute for Systems and Robotics, University of Lisbon, Lisbon, Portugal

Ana S. Aníbal Economy and Innovation Department, Lisbon City Council, Lisbon, Portugal

P. Beardsley Disney Research Zurich, Zürich, Switzerland

Selmer Bringsjord RAIR Lab, Department of Cognitive Science, Rensselaer Polytechnic Institute, New York, USA; RAIR Lab, Department of Computer Science, Rensselaer Polytechnic Institute, New York, USA

Paulo S. Carvalho Economy and Innovation Department, Lisbon City Council, Lisbon, Portugal

Raja Chatila Institute of Intelligent Systems and Robotics, Sorbonne Universite, Paris, France

Vladimir Estivill-Castro School of Information and Communication Technology, Griffith University, Brisbane, QLD, Australia

Nicola Fabiano Studio Legale Fabiano, Rome, Italy

Sarah R. Fletcher Cranfield University, Cranfield, UK

Rodolphe Gelin Innovation, SoftBank Robotics Europe, Paris, France

Rikhiya Ghosh RAIR Lab, Department of Computer Science, Rensselaer Polytechnic Institute, New York, USA

Naveen Sundar Govindarajulu RAIR Lab, Department of Cognitive Science, Rensselaer Polytechnic Institute, New York, USA
John C. Havens Institute of Intelligent Systems and Robotics, Sorbonne Universite, Paris, France

Teegan L. Johnson Cranfield University, Cranfield, UK

E. E. Kadar Department of Psychology, University of Portsmouth, Portsmouth, UK

Jon Larreina IK4-Tekniker, Eibar, Spain

Pedro U. Lima Institute for Systems and Robotics, Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal

Stuti Thapa Magar Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA

Bertram F. Malle Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA

André Martins Economy and Innovation Department, Lisbon City Council, Lisbon, Portugal

Michael P. Musielewicz John Paul II Catholic University of Lublin, Lublin, Poland

A. Mylaeus Autonomous Systems Lab, ETH Switzerland, Zürich, Switzerland

Matthew Peveler RAIR Lab, Department of Computer Science, Rensselaer Polytechnic Institute, New York, USA

Matthias Scheutz Department of Computer Science, Tufts University, Halligan Hall, Medford, MA, USA

João Silva Sequeira Instituto Superior Técnico, Institute for Systems and Robotics, University of Lisbon, Lisbon, Portugal

R. Siegwart Autonomous Systems Lab, ETH Switzerland, Zürich, Switzerland

B. Tranter BSI Consumer and Public Interest Unit UK, London, UK

A. Vempati Autonomous Systems Lab, ETH Zurich, Zürich, Switzerland; Disney Research Zurich, Zürich, Switzerland
Author Query Form

Book ID: 462563_1_En
Chapter No: 6259

Please ensure you fill out your response to the queries raised below and return this form along with your corrections.

Dear Author,
During the process of typesetting your chapter, the following queries have arisen. Please check your typeset proof carefully against the queries listed below and mark the necessary changes either directly on the proof/online grid or in the "Author's response" area provided below.

Query Refs. / Details Required / Author's Response

AQ1: Please suggest whether the phrase "concept of (tool)" can be retained as such.

AQ2: We found discrepancy in the usage of the author name "Endre E. Kadar", "Endre Kadar", "E. E. Kadar"; please confirm how to follow throughout the book.

AQ3: The term "Liability" has been given inconsistently regarding capitalization throughout the front matter. Please suggest which one is to be followed.
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Technological Development and Well-Being
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Aldinhas Ferreira
Particle
Given Name Maria Isabel
Prefix
Suffix
Role
Division
Organization Faculdade de Letras da Universidade de Lisboa, Centro de Filosofia da
Universidade de Lisboa
Address Lisbon, Portugal
Division Instituto Superior Técnico, Institute for Systems and Robotics
Organization University of Lisbon
Address Lisbon, Portugal
Email isabel.ferreira@letras.ulisboa.pt
Abstract The way progress—conducted on the basis of extracting the maximum economic and financial profit—is menacing humanity and the very planet has led to (i) a deeper awareness that all scientific and technological development somehow impacts on the human physical and social environment, (ii) that this development has frequently come with a very high existential cost, and (iii) that an ethical reflection prior to any effective technological deployment is not only advisable but certainly a priority. The complexity of the 4IT revolution imposes the adoption of a shift of paradigm from a production-oriented measurement system to one focused on the well-being of present and future generations.
Keywords
(separated by '-')
4IT revolution - Ethics - Metrics for development - Well-being
Technological Development and Well-Being

Maria Isabel Aldinhas Ferreira

Abstract The way progress—conducted on the basis of extracting the maximum economic and financial profit—is menacing humanity and the very planet has led to (i) a deeper awareness that all scientific and technological development somehow impacts on the human physical and social environment, (ii) that this development has frequently come with a very high existential cost, and (iii) that an ethical reflection prior to any effective technological deployment is not only advisable but certainly a priority. The complexity of the 4IT revolution imposes the adoption of a shift of paradigm from a production-oriented measurement system to one focused on the well-being of present and future generations.

Keywords 4IT revolution · Ethics · Metrics for development · Well-being
M. I. Aldinhas Ferreira (B)
Faculdade de Letras da Universidade de Lisboa, Centro de Filosofia da Universidade de Lisboa, Lisbon, Portugal
e-mail: isabel.ferreira@letras.ulisboa.pt

M. I. Aldinhas Ferreira
Instituto Superior Técnico, Institute for Systems and Robotics, University of Lisbon, Lisbon, Portugal

© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being, Intelligent Systems, Control and Automation: Science and Engineering 95, https://doi.org/10.1007/978-3-030-12524-0_1

1 Introduction

Scientific and technological endeavor has always been present throughout the developmental history of humankind, from the most primitive and rudimentary forms of tool making to the present-day sophistication of intelligent autonomous systems. The capacity to act on the surrounding environment, conceiving and developing tools capable of facilitating human tasks, improving life conditions, eradicating poverty and disease, providing a response toward what is considered an eventual menace or assuring defense against possible threats, is a human endowment. Actualized by different civilizational frameworks and different social tissues, in a permanent dialectics with the social and cultural contexts it has emerged from, this capacity has
evolved exponentially throughout time. The huge technological development that some contemporary societies experience, the digital revolution and the 4IT revolution with all their galaxy of intelligent artifacts, networked virtual connections and hybrid environments, is the result of the accumulated experience and knowledge of all precedent generations.

Though the driver responsible for triggering scientific and technological development throughout time has, ultimately, always been the aim of providing the conditions for the better living of communities,¹ and though all domains of human life have undoubtedly benefited immensely from most scientific and technological development throughout the ages, one cannot help noticing that, even setting aside the destruction and suffering caused by the technological development associated with warfare, that development has frequently caused massive negative impacts that were not, in general, even anticipated. We can easily recall the negative impacts on society caused by the past industrial revolution and the present accumulated negative impacts brought by technological development that led to the global warming and climate change phenomena, the destruction of the natural environment, and negative consequences for humans and other animal species.

¹ Even in the case of warfare.

That trail of destruction, and the evidence that faces us all of how "progress" conducted on the basis of extracting the maximum economic and financial profit is menacing not only the existence of some species but the very planet, have led to (i) a deeper awareness that all scientific and technological development somehow impacts on the human physical and social environment, (ii) that this development has frequently come with a very high existential cost, and (iii) that a reflection prior to any effective technological deployment is not only advisable but certainly needed, so that all stakeholders involved in the process—governance, research and development, industry, end-users—may create, develop, implement and use technology in a responsible way, so that it can achieve its ultimate goal: promoting well-being, enhancing a better life. This well-being is not exclusive to humans but necessarily extends to nature and all the other species, as human beings and their environments constitute a single existential microcosm bound by an essential dialectical relationship [5].

2 The Relevance of the Concept of Well-Being

The concept of Gross Domestic Product, created in 1937 by Kuznets [10], has been used as a measure of raw economic activity and presented, for long, as a primary indicator of the economic health of a country, as well as a gauge of a country's standard of living. After World War II, the GDP became synonymous with the broader welfare and progress of society, leading to economic policies targeting the maximization of its growth rate while disregarding any social and environmental costs. However, Kuznets himself had pointed out that economic progress would not be possible without social progress and, as Landefeld and Villones noted years later [11], his concern about
the exclusion of a broader set of activities from the GDP statistics echoed over the years, notably in Robert F. Kennedy's eloquent critique [9]:

Too much and too long, we seem to have surrendered community excellence and community values in the mere accumulation of material things. Our gross national product, if we should judge America by that, counts air pollution and cigarette advertising, and ambulances to clear our highways of carnage. It counts special locks for our doors ... Yet the gross national product does not allow for the health of our children, the quality of their education, or the joy of their play ... it measures everything, in short, except that which makes life worthwhile. And it tells us everything about America except why we are proud that we are Americans.

During the last two decades, the awareness that macroeconomic statistics, such as GDP, do not provide policy-makers with a sufficiently detailed picture of the living conditions that ordinary people experience has increased exponentially. Though that fact was already evident during the years of strong growth and "good" economic performance that characterized the early part of the 2000s, the financial and economic crisis of the last decade amplified this intuition, as indicators like GDP could not show all the social costs of the crisis.

Fabrice Murtin, Senior Economist at the Household Statistics and Progress Measurement Division of the OECD Statistics Directorate, notes [12] that "the GDP is a good and necessary measure of economic activity, but it is a very poor measure of people's well-being." Murtin pointed out that there are two fundamental reasons for this: the GDP does not reflect the diversity of household situations in a country, as there is no inequality component embedded into it.

On 19–20 November 2007, the European Commission hosted the "Beyond GDP" conference, where over six hundred and fifty attendees recognized that the primary metric for the world measured growth and income but did not incorporate factors like the environment or the individual's physical, mental and emotional health in its calculations. As Hans-Gert Pöttering noted at the time, "well-being is not just growth; it is also health, environment, spirit, and culture."

In February 2008, the President of the French Republic, Nicolas Sarkozy, asked Joseph Stiglitz to create a Commission, subsequently called "The Commission on the Measurement of Economic Performance and Social Progress" (CMEPSP). The Commission's aim was (i) to identify the limits of GDP as an indicator of economic performance and social progress, including the problems with its measurement; (ii) to consider what additional information might be required for the production of more relevant indicators of social progress; (iii) to assess the feasibility of alternative measurement tools; and (iv) to discuss how to present the statistical information in an appropriate way. The Stiglitz report [20] concluded that an increase in GDP does not directly correlate with an increase in citizens' well-being, stating that "The time is ripe for our measurement system to shift emphasis from measuring economic production to measuring people's well-being."

Advocating a shift of paradigm from a "production-oriented" measurement system to one focused on the well-being of present and future generations, the report points out that, in order to define [well-being], a multidimensional definition has to be used. The report identifies the key dimensions that should be considered simultaneously:
i. Material living standards (income, consumption and wealth);
ii. Health;
iii. Education;
iv. Personal activities including work;
v. Political voice and governance;
vi. Social connections and relationships;
vii. Environment (present and future conditions);
viii. Insecurity, of an economic as well as a physical nature.

The OECD Framework for Measuring Well-Being and Progress [14] is based essentially on the recommendations made in 2009 by the Commission on the Measurement of Economic Performance and Social Progress. This framework is built around three distinct domains: quality of life, material conditions and sustainability of well-being over time. Each of these domains includes a number of relevant dimensions (Fig. 1).

Fig. 1 The OECD framework for measuring well-being
When we analyze the concept of [well-being] in general terms, we verify that the key dimensions identified by Stiglitz cover fundamental areas. It is a fact that the well-being of any life form must correspond to the iterative, or hopefully continuous, satisfaction of the basic needs dictated by its internal states along an existential time line. Common to all life forms, and primordial, will be those dimensions concerning fitness and the existence of a species-suitable environment where basic needs such as feeding can be assured and the species' replication guaranteed. In human beings, the concept of [well-being] attains a higher complexity that derives from the complexity of human cognition.²

As in other species, prior to all other dimensions will be those relative to the individual's global fitness and the existence of a favorable environment, which in this case not only provides an answer to basic needs but also the proper setting for the development of their humanity. A deeper analysis of the concept allows recognizing that its constitution according to Stiglitz's key dimensions probably presents a degree of variability throughout space/time, i.e., the relevance of these dimensions depends on the cultural/civilizational frameworks individuals belong to, on their life contexts and also on the nature of their own subjectivity. These variants play a fundamental role not only in the identification of key dimensions but also in their ordering, i.e., the priority assigned to them by each individual.

While the well-being of each individual can be described in terms of a number of separate outcomes, the assessment of conditions for society as a whole requires aggregating these outcomes for broader communities and considering both population averages and inequalities, based on the preferences and value judgments of each community.

The OECD Better Life Initiative [13] aims to develop statistics that can capture aspects of life that matter to people and that, taken together, help to shape the quality of their lives.

Published every two years, it provides a comprehensive picture of well-being in OECD countries and other major economies, by looking at people's material conditions and quality of life in eleven dimensions: income and wealth; jobs and earnings; housing conditions; health status; work-life balance; education and skills; social connections; civic engagement and governance; environmental quality; personal security; and subjective well-being.

The Better Life Index [16] was designed to involve individuals in the discussion on well-being and, through this process, to learn what matters most to them. The Index has attracted over eight million visits from just about every country on the planet and has received over 17 million page views.

This interactive web-based tool enables citizens to compare well-being across countries by giving their own weight to each of the eleven dimensions. The web application allows users to see how countries' average achievements compare, based on the user's own personal priorities in life, and enables users to share their index and choices of weights with other people in their networks, as well as with the OECD.

² Cognition is here understood as the capacity of any natural or artificial system to autonomously interact with the environment it is embedded in [6, 7].
The Index allows users to compare well-being across countries based on the aforementioned 11 topics, with each flower representing a country and each petal a topic (Fig. 2).
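As an illustration of the weighting mechanism just described, the following minimal Python sketch computes a user-weighted composite score from dimension scores. It rests on our own simplifying assumptions (scores already normalized to a 0-10 scale, a plain weighted average as the aggregation rule) and does not reproduce the OECD's published methodology; the dimension names and example weights are ours.

# Minimal sketch (our assumptions, not the OECD's methodology): a user-weighted
# composite of the eleven Better Life dimensions, each scored on a 0-10 scale.
from typing import Dict

DIMENSIONS = [
    "housing", "income", "jobs", "community", "education", "environment",
    "civic engagement", "health", "life satisfaction", "safety",
    "work-life balance",
]

def composite_score(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Return the weighted average of normalized dimension scores (0-10)."""
    total = sum(weights.get(d, 0.0) for d in DIMENSIONS)
    if total <= 0:
        raise ValueError("at least one dimension needs a positive weight")
    return sum(scores.get(d, 0.0) * weights.get(d, 0.0) for d in DIMENSIONS) / total

# Hypothetical user mirroring the Portuguese pattern described below:
# health, safety and life satisfaction weighted most heavily.
user_weights = {d: 1.0 for d in DIMENSIONS}
user_weights.update({"health": 3.0, "safety": 3.0, "life satisfaction": 3.0})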
The graphic in Fig. 3 reflects the opinion of Portuguese citizens regarding their well-being in 2016, according to the dimensions identified in the chart. The longer lines spreading from the center illustrate areas of strength, while the shorter ones indicate weaker areas. As shown below, for the Portuguese users of the Better Life Index, "life satisfaction", "health" and "safety" are the three most important factors responsible for well-being.
Fig. 2 Measuring well-being and progress across countries [15]
Fig. 3 Measuring well-being in Portugal, 2016 [17]
3 The 4IR and Well-Being

3.1 When Tools Become Autonomous Entities

Over the last decade, embodied and non-embodied forms of artificial cognition have been progressively introduced into many domains of human life, determining new behavioral patterns and fostering new habits, new routines, and new life perspectives.

Contemporary societies are becoming more and more hybrid environments. This means environments where the physical is permeated by the digital, where human interaction is mediated by advanced forms of communication, where non-embodied and very soon also embodied forms of artificial intelligence coexist with natural intelligence, and where human labor, in multiple contexts and domains, is being replaced by task performance by artificial autonomous systems.

Artificial intelligence has already touched over 70% of the population on Earth, and automation is spreading in industry, with around 70% of industrial robots currently at work in the automotive, electrical/electronics, and metal and machinery industry segments. Global sales of industrial robots reached a new record of 387,000 units in 2017, an increase of 31% compared to the previous year [8].
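As a back-of-the-envelope check of these figures (our own arithmetic, not a number taken from [8]), a 31% year-on-year increase implies a prior-year baseline of roughly

\[
\frac{387{,}000}{1.31} \approx 295{,}000 \text{ units sold in 2016.}
\]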
Robots and humans are starting to cooperate, learning to share the lived space with each other, both at work and at home, where robots will progressively perform more and more complex tasks without supervision.

The most profound difference when comparing the present technological stage of development with the previous ones is that of the ontological transformation of the concept of [tool]. In fact, all tools were typically viewed somehow as body extensions, i.e., entities human beings manipulated or just used in order to create something, in order to act on the surrounding environment to produce a qualitative change, frequently ameliorating the hardness of work and contributing this way to a better life. Robots and non-embodied intelligent entities, such as chatbots or the algorithms that are already running in some domains of life, are human tools. But what is the feature that makes these tools so different from all the previous ones? The answer is their autonomy, their potential independence from human control. They are objects, as all tools typically were, but simultaneously, as they are endowed with a capacity for agency, they are also subjects.

3.2 Toward a Collective Ethical Consciousness

That duality present in all intelligent systems, and the possible disruption it may cause in society, calls for the emergence of a collective ethical consciousness where human dignity and well-being play the central role. This collective ethical consciousness is already being fostered by governance, through legislative frameworks and recommendations [1, 3, 4]; by research and industry, through the discussion and adoption of ethical guidelines in the design of those systems and by their compliance with
previously established standards [21]; and also by educational guidelines [19] and initiatives targeting the potential present and future end-users [2].

Education has a vital role to play in shaping this consciousness of the impacts, limits and boundaries of embodied and non-embodied intelligent systems, and this debate will allow us to refresh some fundamental values of our humanist tradition: the unquestionable respect for human dignity at every stage of life, in every social context; the essential role played by love and family ties; the inalienable right to work, participating in the building of society; and the right to truth and transparency, equity and fairness, inclusiveness and individuation.

We agree with [18] when he says that AI will allow us to look at ourselves in the mirror. In fact, we are called to take this kind of bird's-eye view, a detached look at ourselves, at our society and how it looks. This will allow for the construction of a fairer society where technological development will play a significant role, promoting well-being and contributing to the construction of a better and more sustainable world.
References

1. European Commission (2018) https://ec.europa.eu/info/law/law-topic/data-protection_en
2. European Commission (2018) https://ec.europa.eu/education/sites/education/files/factsheet-digital-education-action-plan.pdf
3. European Commission (2018) Artificial intelligence for Europe. 25 April
4. European Political Strategy Centre (2018) The age of artificial intelligence: towards a European strategy for human-centric machines. EPSC Strategic Notes, 27 March
5. Ferreira M (2010) On meaning: a biosemiotic approach. Biosemiotics 3(1):107–130. https://doi.org/10.1007/s12304-009-9068-y. ISSN: 1875-1342
6. Ferreira M, Caldas M (2013a) Modelling artificial cognition in biosemiotic terms. Biosemiotics 6(2):245–252
7. Ferreira M, Caldas M (2013b) The concept of Umwelt overlap and its application to multi-autonomous systems. Biosemiotics 6(3):497–514. https://doi.org/10.1007/s12304-013-9185-5
8. IFR (2017) https://ifr.org/ifr-press-releases/news/industrial-robot-sales-increase-worldwide-by-29-percent. Accessed July
9. Kennedy R (2010) Address, University of Kansas, Lawrence, Kansas, March 18, 1968. Available at https://www.bea.gov/scb/pdf/2010/04%20April/0410_gpd-beyond.pdf
10. Kuznets S (1955) Economic growth and income inequality. Am. Econ. Rev. 45(1):1–28. Available at http://gabriel-zucman.eu/files/teaching/Kuznets55.pdf
11. Landefeld J, Moulton B, Platt J, Villones S (2010) GDP and beyond: measuring economic progress and sustainability. Available at https://www.bea.gov/scb/pdf/2010/04%20April/0410_gpd-beyond.pdf
12. Murtin F (2017) Civil law rules on robotics: prioritizing human well-being in the age of artificial intelligence. 11 April (Brussels)
13. OECD (2011) Better life initiative: measuring well-being and progress. Available at www.oecd.org/betterlifeinitiative
14. OECD (2011) The OECD framework for measuring well-being and progress
15. OECD (2013) How's life?: measuring well-being. https://doi.org/10.1787/9789264201392-en
16. OECD (2018) www.oecdbetterlifeindex.org
17. OECD (2018) http://www.oecd.org/statistics/better-life-initiative.htm
18. Penn J (2018) https://jonniepenn.com/
462563_1_En_1_Chapter TYPESET DISK LE CP Disp.:23/2/2019 Pages: 9Layout: T1-Standard
Editor Proof
UNCORRECTED PROOF
Technological Development and Well-Being 9
19. Scheleicher A (2018) How to build a 21st century school system
251
20. Stiglitz J, Sen A, Fitoussi J (2009) Report by the commission on the measurement of economic252
performance and social progress. Available at http://ec.europa.eu/eurostat/ documents/118025/253
118123/Fitoussi+Commission+report254
21. The IEEE Global Initiative (2018) Ethically aligned design I and II. The IEEE global ini-255
tiative for ethical considerations in artificial intelligence and autonomous systems. https://256
ethicsinaction.ieee.org/257
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Chatila
Particle
Given Name Raja
Prefix
Suffix
Role
Division Institute of Intelligent Systems and Robotics
Organization Sorbonne Universite
Address 75005, Paris, France
Email Raja.Chatila@sorbonne-universite.fr
Author Family Name Havens
Particle
Given Name John C.
Prefix
Suffix
Role
Division Institute of Intelligent Systems and Robotics
Organization Sorbonne Universite
Address 75005, Paris, France
Email
Abstract The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) is a program of the
IEEE initiated to address ethical issues raised by the development and dissemination of these systems. It
identified over one hundred and twenty key issues and provided candidate recommendations to address
them. In addition, it has provided the inspiration for fourteen approved standardization projects that are
currently under development with the IEEE Standards Association.
Keywords
(separated by '-')
Ethics - Autonomous systems - Intelligent systems - Value-based design - Standards
The IEEE Global Initiative on Ethics
of Autonomous and Intelligent Systems
Raja Chatila and John C. Havens
Abstract The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) is a program of the IEEE initiated to address ethical issues raised by the development and dissemination of these systems. It identified over one hundred and twenty key issues and provided candidate recommendations to address them. In addition, it has provided the inspiration for fourteen approved standardization projects that are currently under development with the IEEE Standards Association.

Keywords Ethics · Autonomous systems · Intelligent systems · Value-based design · Standards
1 Introduction

1.1 Historical Background and Context
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (hereinafter: "The Global Initiative") was officially launched in April 2016 as a program of the IEEE, the world's largest technical professional organization with more than 420,000 members. IEEE is involved in all technical areas pertaining to computer science, electronics, and electrical engineering. Its tagline is "Advancing technology for humanity."
Technologies have been invented since time immemorial. They are not neutral. They have a purpose and serve different objectives, good or bad. Artificial Intelligence and Robotics are sixty-year-old technologies, but they became subject to unprecedented attention less than a decade ago. Robotics and AI have indeed achieved considerable progress in the past few years, enabled by the exponential increase of computing power, the availability of memory, the miniaturization of sensors, actuators, and energy sources, and by connectivity through the Internet. This allowed for massive
quantities of data (text, images, and sound about any subject, music, etc.) acquired by a multiplicity of devices to become easily available. This fueled the statistical machine learning techniques invented in the 1980s and 1990s and enabled them to show their full efficiency. New research directions followed from these advances, and the rise of autonomous and intelligent systems, mostly but not only based on these techniques, has fostered innovative applications in numerous industry sectors, developed not only by large companies but by smaller firms and organizations as well, and provoked an explosion of new start-up companies.
The pervasiveness of these technologies has brought new perspectives to the way autonomous and intelligent systems (A/IS) are perceived by professionals, by policy makers, by the media, and by the general public. Artificial Intelligence is frequently credited by non-professionals, and by some professionals as well, with unlimited capacities, creating both admiration and fear of a general superintelligence. And "autonomy" is often misunderstood as the ability of the system to make decisions of its own will, to the point that it could become out of (human) control. This confusion is amplified by the fact that A/IS often result from learning methods, such as deep learning, in which the process by which an algorithm reached a given result is opaque. While developers may understandably need to protect intellectual property or may not be able to fully describe all aspects of a deep learning process, the lack of transparency around A/IS development nonetheless increases the general public's lack of understanding and amplifies fear and distrust.
1.2 Ethics of Autonomous and Intelligent Systems

The success of learning technologies has created a strong economic incentive to develop and sell new systems and services. The explosion of market penetration in many sectors, such as health, insurance, transportation, military applications, entertainment, and diverse services, has, however, raised several questions around ownership of data, privacy protection, trustworthiness of autonomous and intelligent systems, and bias in machine learning.
This reflection on the ethical, social, and legal consequences of A/IS has gained worldwide momentum on diverse questions such as the impact on jobs and the economy, the use of personal data, privacy, intrusion, surveillance, transparency, the explicability of algorithmic decisions, and accountability and responsibility for autonomous/learned machine decisions. In some applications where human–machine interaction uses emotion detection and expression, questions about cognitive and affective bonds with robots are raised, as well as the moral impact of specific applications such as sexbots. In medical applications, the border between rehabilitation and augmentation of humans becomes unclear. Anthropomorphism and android robots challenge human identity and human dignity, leading to reflections on the status of robots in human society. Specific applications and usages, such as autonomous weapons systems, are the subject of debates in international organizations such as the United Nations.
However, despite these concerns, there is much potential for A/IS to increase individual and collective well-being. To fully benefit from this potential, we need to go beyond prioritizing exponential growth in developing these applications and develop them in full respect of human values.
2 The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
The mission of the IEEE Global Initiative is to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.
The IEEE Global Initiative provides the opportunity to bring together multiple voices in the autonomous and intelligent systems communities to identify and find consensus on the ethical, legal, and social issues related to these systems. From April 2016 to December 2017, it mobilized over 250 members from around the world, and it contributes to a broader effort at IEEE that fosters open, broad, and inclusive conversation about ethics in technology, known as the IEEE TechEthics™ program.
In December 2016, the IEEE Global Initiative produced Version 1 of Ethically Aligned Design (EAD), a document identifying issues and providing recommendations in key areas pertaining to A/IS. Version 2 of Ethically Aligned Design, featuring new sections and recommendations, was released in December 2017, and the third version will be published in early 2019. In addition to Ethically Aligned Design, fourteen standardization projects have been approved by the IEEE Standards Association. The IEEE Global Initiative is also developing the Ethically Aligned Design University Consortium (EADUC), which is set to launch in February 2019. The focus of EADUC is on developing and promoting the issues, recommendations, and themes outlined in Ethically Aligned Design, along with the A/IS ethics-oriented curriculum already being taught by member universities.
2.1 Ethically Aligned Design

The publicly available EAD document is organized into thirteen sections corresponding to the thirteen committees that drafted it. The drafting process is designed to gain consensus within each group. In addition to the thirteen sections, a glossary provides common definitions of the main concepts in EAD and in the A/IS space at large. Each section of EAD is organized into "Issues," which are topics raising ethical, legal, or societal questions, and "Candidate Recommendations" to address them. Resources are provided for the interested reader.

The thirteen sections of EAD are:
1. General Principles
2. Embedding Values Into Autonomous Intelligent Systems
3. Methodologies to Guide Ethical Research and Design
4. Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)
5. Personal Data and Individual Access Control
6. Reframing Autonomous Weapons Systems
7. Economics/Humanitarian Issues
8. Law
9. Affective Computing
10. Classical Ethics in Information & Communication Technologies
11. Policy
12. Mixed Reality
13. Wellbeing
The founding values of EAD's work are developed in the first section on "General Principles." These principles in Ethically Aligned Design, Version 2, are:

Human Rights: Ensure A/IS do not infringe on internationally recognized human rights
Well-being: Prioritize metrics of well-being in A/IS design and use
Accountability: Ensure that designers and operators of A/IS are responsible and accountable
Transparency: Ensure A/IS operate in a transparent and explainable manner
Extending benefits and minimizing risks of A/IS misuse: Minimize the risks of A/IS misuse, mainly through information and education that sensitizes society, government, lawmakers, media, etc.

We refer the reader to the EAD document for the contents of all thirteen sections.
2.2 Standards

Many discussions revolve around the necessity to regulate A/IS development, with a classical opposition between an "against" camp, on the grounds that regulation would hinder innovation, and a "pro" camp, on the grounds that regulation is necessary to frame new products in accordance with the common good. We will deliberately avoid this discussion here. The approach adopted by the IEEE Global Initiative was to propose standardization projects which, if adopted by industry after their development as approved standards, would enable organizations to easily comply with the ethically aligned requirements and guidelines they provide. The fourteen working groups of the approved standardization projects of the so-called P7000 series that stemmed from the IEEE Global Initiative are all currently in development and open to any new members who would like to join. The following are examples of these standardization projects:
P7000: Model Process for Addressing Ethical Concerns During System Design.
This standard will establish a process model by which engineers and technologists can address ethical considerations throughout the various stages of system initiation, analysis, and design. Expected process requirements include a management and engineering view of new IT product development, computer ethics and IT system design, value-sensitive design, and stakeholder involvement in ethical IT system design.
P7001: Transparency of Autonomous Systems.
A key concern over autonomous systems is that their operation must be transparent to a wide range of stakeholders, for different reasons. For users, transparency is important because it builds trust in the system by providing a simple way for the user to understand what the system is doing and why. For validation and certification of an autonomous system, transparency is important because it exposes the system's processes for scrutiny.
P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems.
This standard will establish a practical and technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems. The standard will include (but is not limited to): clear procedures for measuring, testing, and certifying a system's ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance. The standard will serve as the basis for developers, as well as users and regulators, to design fail-safe mechanisms in a robust, transparent, and accountable manner.
P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems.
The Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems will enable programmers, engineers, and technologists to better consider how the products and services they create can increase human well-being based on a wider spectrum of measures than growth and productivity alone. Today, affective systems utilizing emotion-recognizing sensors are quantified primarily by their economic value in the marketplace, beyond their efficacy within certain fields (psychology, etc.). While it is often argued that ethical considerations for intelligent and autonomous systems might hinder innovation through the introduction of unwanted regulation, without metrics that value mental and emotional health at both an individual and a societal level, innovation is impossible to quantify. The introduction and use of these metrics by programmers and technologists mean that, beyond economic growth, human well-being can be measured and improved.
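The following sketch is ours and not part of P7010; it only illustrates the idea of scoring a product against a wider spectrum of indicators than economic ones. The indicator names and weights are hypothetical:

```python
from typing import Dict


def wellbeing_index(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of well-being indicator scores (each in [0, 1])
    for a product or service; a toy composite, not the P7010 metric set."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight


# Hypothetical indicator scores and equal weights, for illustration only.
scores = {"economic_value": 0.9, "mental_health": 0.4,
          "emotional_health": 0.5, "social_connection": 0.6}
weights = {"economic_value": 1.0, "mental_health": 1.0,
           "emotional_health": 1.0, "social_connection": 1.0}

print(f"composite well-being index: {wellbeing_index(scores, weights):.2f}")
```

Any real metric set would come from the standard and from validated well-being instruments rather than from ad hoc choices like these.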
3 Conclusion

By standardizing the creation of A/IS in line with ethically aligned human values, we can knowingly adopt human and ecological well-being as our metric for progress in the algorithmic age. This requires raising awareness about the misuse of Autonomous and Intelligent Systems and embedding values in the operation of these systems by the people who create and use them. To achieve this purpose, Ethically Aligned Design and the standards under development are a strong tool for researchers, designers, and engineers to follow in a responsible research and design approach, which would help guarantee that these technologies are developed for the benefit of humanity.
Acknowledgements The authors wish to acknowledge the members of the EAD committees, standards working groups, and glossary drafting group whose work is summarized in this paper.
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Humans and Robots: A New Social Order in Perspective?
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Sequeira
Particle
Given Name João Silva
Prefix
Suffix
Role
Division Instituto Superior Técnico, Institute for Systems and Robotics
Organization University of Lisbon
Address Lisbon, Portugal
Email joao.silva.sequeira@tecnico.ulisboa.pt
Abstract As the number of robots interacting with people grows, it seems natural that some adjustments occur
within societies. Though the extent of such adjustments is unclear, the current media frenzy on the effects
of technology in societies, with a special emphasis in social robotics, is driving research to account for
unexpected scenarios. The adjustments may include changes in the formations of social hierarchies, in
which humans must take orders from robots, naturally triggering fears of dominance and convergence to
societies operating under new Ethics. The paper adopts a dynamic systems view of social environments
identifying stability with social order. The introduction of robots in social environments is likely to change
some equilibrium that can be identified with social order. Under relaxed assumptions societies can be
represented by networks of non-smooth systems. The paper thesis is that by integrating a robot in a social
environment in small steps (under realistic expectations) stability is preserved and hence also the social
order. Disturbing social hierarchies may indeed lead to a different equilibrium, that is, to a new social
order.
Keywords
(separated by '-')
Social robotics - Social order - Dynamic systems - Non-smooth systems
Humans and Robots: A New Social
Order in Perspective?
João Silva Sequeira
Abstract As the number of robots interacting with people grows, it seems natural that some adjustments occur within societies. Though the extent of such adjustments is unclear, the current media frenzy on the effects of technology in societies, with a special emphasis in social robotics, is driving research to account for unexpected scenarios. The adjustments may include changes in the formations of social hierarchies, in which humans must take orders from robots, naturally triggering fears of dominance and convergence to societies operating under new Ethics. The paper adopts a dynamic systems view of social environments identifying stability with social order. The introduction of robots in social environments is likely to change some equilibrium that can be identified with social order. Under relaxed assumptions societies can be represented by networks of non-smooth systems. The paper thesis is that by integrating a robot in a social environment in small steps (under realistic expectations) stability is preserved and hence also the social order. Disturbing social hierarchies may indeed lead to a different equilibrium, that is, to a new social order.

Keywords Social robotics · Social order · Dynamic systems · Non-smooth systems
1 Introduction

From the nineteenth century and the Industrial Revolution onward, the effect of new technologies on society has been highly visible. A more recent example is the explosion of the internet and related technologies, which is forcing societies to review their organizations. Current social robotics is starting to impact societies to an extent that is not yet clear.
As robotics technology progresses, namely toward social robotics, robots are interacting with people and disturbing structured organizations. Two forces are in contention here: (i) the technological limitations and (ii) the visionary ideas on the use of technologies. Technology is often not what people expect, and people often ignore the potential of a given technology. So far, societies have always been able to adapt themselves. In what concerns social robotics, both forces are tightly related to the quality of the interactions between humans and robots.
Quoting Schneider [16], "Social order is the order that is imposed on a person's action (Parsons, 1949)" (p. 37). In a sense, it is the (social) hierarchy "controlling" the social environment. Managerial practices in companies already adjust hierarchies as a function of workers' skills, and quality human-robot interaction (HRI) is being considered a potential factor to increase the quality of work. Machines do the operational part of the work, and workers are left with maintenance and surveillance tasks [12].
Roboticists are not supposed to destabilize social environments (in a negative sense) with their creations. However, field experiments have shown that this may indeed happen [13]. Some authors claim that a systematic research approach to integrating robots in social environments is still missing [12].
Also, misconceptions may result from inadvertently biased experiments. For instance, [17] state that "robots have to be sensitive to humans' emotions ... to be socially accepted." In the context of the MOnarCH project, the experiments report that the acceptance of a social robot is clear even though the robot is not aware of any emotions of the people interacting with it [7].
Non-smooth dynamic systems have the power to represent a wide range of uncertainties, and thus have a good potential to model social environments. Moreover, properties such as stability can be identified with social order. In dynamic systems, an unstable system evolves to some undesirable condition, in general a process for which no convergence is attained. In social systems, instability can also be identified with undesirable conditions, likewise represented by non-convergent performance indicators, e.g., of social agitation.
2 Social Robots and Social Order

Nowadays there are multiple examples of automated systems taking decisions that are accepted by people, e.g., aircraft autopilots, and traffic and subway management systems, and the global society is rapidly converging to accommodate autonomous vehicles. The introduction of such systems, which in a sense are (non-social) robots, was achieved smoothly, even if some adjustments were required, e.g., in the management of aircraft cockpits by the crews, and in the current efforts to create areas where autonomous vehicles can circulate. The perception of increased safety and quality of life may have contributed to their success (independently of being right or wrong).
In automated systems, decisions tend to be close to action; that is, a decision is likely to have an immediate impact in the physical world, and hence it is easy for humans to acquire a direct perception of the effects. This means that humans are always in control and no disturbances in the social order are likely to occur.
In what concerns social robots, there is an implicit analogy with humans, namely in the natural uncertainty/unreliability associated with human decision-making. That is, a social robot is likely to have some intrinsic randomness, which contributes to its acceptance (see for instance [8] on the relevance of responsiveness for quality HRI, or [11] on the effect, in social environment models, of liveliness features based on randomly selected actions).
Currently, typical social robots are designed to convey a perception of some intention. They are not designed as part of a hierarchical chain of command. A non-exhaustive selection of robots originating from relevant R&D projects in social robotics shows robots intended to interact with different classes of people (see Table 1). Most, if not all, of these robots target acceptance by individuals and have no concerns regarding maintaining any form of social order. It is implicitly assumed that being accepted by a significant number of individuals is enough to ensure a smooth integration of a social robot. None of these social robots issues explicit authoritarian orders to people. Personal assistance and service robots may suggest actions to people, but none of them will issue an explicit command/order to a human. In a sense, robots are socially passive (and therefore socially controllable). It is interesting to observe that terms such as "suggest," "encourage," and "complement each other" are being used together with expressions such as "significant value gained through human-robot interactions" by R&D trend makers (see for instance [19]).
Research on robots that "say no" to people suggests that humans should not worry about disobedient machines [4]; the people controlling them are the real problem. Though the ability to "say no" may be necessary in a social robot, namely to comply with Asimov's laws, it may also be necessary to deal with humans who "say no" to an order issued by a robot, and this may have some impact on the social order.
A simple example of a possible application of an authoritarian social robot could be that of a butler in charge of controlling the use of energy resources at home (possibly among other, less or not at all authoritarian, tasks). Children who leave lights on, keep the air conditioning always on, or spend too much water in the shower could face an abrupt switch-off of lights, air conditioning, or water. In such an example, wrong decisions could always be reverted by parents. In this case, the social hierarchy is well defined.
Most likely there will be situations in which a social robot is temporarily not controlled, though people may think it is (as happens in human-only social environments). This could already be a common situation, for instance, when the control is exerted through channels which may have significant latencies at sparse times, such as the Internet. Therefore, it is necessary to consider situations in which a robot may be an agent with real control of a social hierarchy, without a safeguard to bypass it in the social hierarchy.
The effect of communication delays on human behavior is well known; see for instance [9] on human-computer interaction and [14] on the effects of a video feedback delay on cognition.
Table 1 Selection of R&D projects in social robotics

Project/robot acronym | Application | End users
MOnarCH | Edutainment for inpatient children in an oncological hospital | Children
NAO | Humanoid robot of child size and full anthropomorphic features | Misc
Pepper | Humanoid robot, with anthropomorphic features, for generic people assistance | Misc
PARO | Seal cub robot with basic interaction capabilities and no locomotion | Elderly
Maggie | Human-robot interaction research, with anthropomorphic features | Misc
Aliz-E | Artificial intelligence for small social robots that interact with children, using the NAO robot | Children
LIREC | Building long-term relationships with artificial companions | Misc
Cogniron | Development of cognitive robots to serve as companions to humans | Misc
HRIAA | Robot with social abilities, personality, and emotions, using verbal, non-verbal, and para-verbal communication (uses a NAO robot) | Misc
SQUIRREL | Human-robot interaction in a cluttered scene | Children
STRANDS | Long-term trials of intelligent mobile robots in dynamic human environments; understanding the spatio-temporal structure of the environment at different time scales | Misc
CompanionAble | Personal assistant, for remote monitoring and aide-memoire services | Elderly
KSERA | Remote health monitoring robot | Elderly
HOBBIT | Personal assistant robot for domestic use, with anthropomorphic features including manipulation | Elderly
SocialRobot | Personal assistant, with anthropomorphic features | Elderly
ROBOT-ERA | Personal assistant with anthropomorphic features | Elderly
GrowMeUp | Personal assistant using cloud computing and machine learning techniques; able to learn people's needs in order to establish positive long-term relationships | Elderly
ENRICHME | Personal assistant for long-term monitoring and interaction | Elderly
Mario | Personal assistant to address loneliness, isolation, and the effects of dementia | Elderly
CARESSES | Robot able to autonomously re-configure its way of acting and speaking to match the customs and etiquette of the person it is assisting | Elderly
Kuri | Personal assistant with anthropomorphic features and a lovable personality to play with children | Children
DOMEO | Personal assistant for domestic use | Elderly
A robot whose order to a human is delayed because of a poor connection to a higher-level decision-making process (e.g., when such decision-making is remote and the network connection is subject to failures) may be perceived as not being capable of handling a situation, similarly to what would happen to a human in the same role. Strategies to minimize such situations may include local decision-making, which may be perceived by humans as poor decision-making skills on the part of the robot. However, such delays may also convey the opposite perception, similarly to what happens with some humans (the filmographic Chauncey Gardiner character is a caricature example [1]).
Robots in a position of authority may, naturally, make a person feel vulnerable or uncomfortable, much as humans in the same role often do. Still, [15] reports lab experiments in which volunteers followed the instructions of a robot even when it proved itself untrustworthy. Persuasiveness may depend on embodiment [3] to an extent that people may indeed follow authority figures that contradict their moral principles [6]. However, as recognized in [5], there may be significant differences between lab and real environments in human-only scenarios.
3 Dynamic Systems and Social Robots

Social structure models for humans are inherently complex. In a sense, a social order is a hierarchy in the flow of information among groups of entities forming a society. This is compatible with the definition quoted in [16]. In addition, the hierarchy possibly embeds multiple feedback links from lower to upper levels.
Often, computational social models (see for instance [18], p. 4, for a definition of computational social model) capture only partial views of the environment, as a result of limitations that may exist in the mathematical representations. In fact, modeling physical and biological phenomena may be subject to uncertainties, but modeling sociocultural phenomena also involves intrinsically subjective aspects. Landau [10] reports probabilistic models able to capture hierarchical phenomena occurring in societies with dominance relationships. The intrinsic uncertainty of social phenomena is captured by the probabilistic framework.
Alternative frameworks able to capture a wide range of uncertainties include non-smooth dynamic systems, namely those in the class of differential inclusions, which, roughly, state that a rate of evolution does not have to follow strictly some function. Instead, it is only required to be contained in some set, which can be made to account for bounded uncertainties. This is the type of system motivating this research.
Assume that each of the levels of a hierarchy can be modeled by a non-smooth system of the type above. Furthermore, assume that such systems can represent typical human actions. In a sense, this means that some weak properties of continuity, namely semi-continuity, can be reasonably assumed to hold. The rationale is that a social level is composed of people whose activities are bound to physical laws and hence, by construction, some form of continuity of relevant variables must hold.
Fig. 1 An abstract view of a network organized to form hierarchies
Figure 1 represents a collection of systems representing, for example, the activity of the entities grouped in each hierarchy level. Each level is modeled by a differential inclusion, where the u_i stand for the independent variables that feed each level with information from other levels, the q_i stand for the dependent variables, and the F_i represent set-valued maps defined by the environment and the objectives of the i-th level of the hierarchy. More complex models can be used, namely hybrid models, including both continuous and discrete state variables within the inclusion mathematical framework.
A sufficient condition for stability is that any possible variation of the rate of an indicator function that represents a measure of stability is upper bounded (see for instance [2], Chap. 6) by a monotonic function. This indicator function must be a meaningful function, i.e., a Lyapunov function that captures the evolution of the dynamics of the system. If the indicator function converges to a constant value, then the system is stable.
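A hedged way to write such a condition for a smooth, nonnegative indicator function, in the spirit of [2] (the symbols V and W below are ours), is:

```latex
% Along every admissible velocity of \dot{q} \in F(q,u), the indicator
% (Lyapunov) function V may not grow faster than the bound given by a
% monotone comparison function W:
\sup_{v \in F(q,u)} \nabla V(q) \cdot v \;\le\; -\, W\bigl(V(q)\bigr),
\qquad W \text{ monotone, } W(0) = 0 .
```

Under such a bound, V is non-increasing along solutions and, being bounded below, converges to a constant value, which is the notion of stability used in the text.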
The above notion of stability, where some indicator function converges to a constant value, suggests that if an equilibrium is perturbed by a small amount that keeps the indicator function below the upper-bounding function, then stability is preserved (roughly, the Lyapunov function is preserved). In the context of social robotics and social hierarchies, this can be identified with scenarios in which new features/skills are incrementally added to the robot and do not change an indicator function significantly; hence the social order (and stability) is preserved, i.e., small changes preserve stability and hence social order.
Disruptive changes may be accounted for by significant changes in the F_i maps that may result, for example, from the introduction of a new layer in the hierarchy, as when introducing a social robot in the decision hierarchy. This may lead to changes in the semi-continuity properties of the global set-valued map, which are required for the necessity and sufficiency results on the existence of a Lyapunov function (see Proposition 6.3.2 and Theorem 6.3.1 in [2]).
4 Conclusion: A New Social Order? So What?

Consumer technologies are known to yield multiple forms of addiction. As new devices/tools are made available to the general public, it is clear that society is changing. An obvious example is the massive usage of smartphones and the way they are conditioning people, often in elusive forms, for instance, to frequently check for news and messages from social networks. This conditioning, often ignored, can be identified as being generated by a new type of social agent, not human, of a distributed and vague nature.
Also, a personal assistance robot that is only used to encourage a patient to take medications is no different from having a collection of displays spread around the home that issue adequate commands in some graceful manner. The fact that these are static may simply imply a decrease in the perception of authority.
In practical terms, humans are already taking orders from such complex devices, whose behaviors may not be easy to anticipate. Nevertheless, it appears that the smooth introduction of complex devices is not disturbing social stability, as people are accepting suggestions from these robots. The non-smooth dynamic systems view outlined in the paper confirms what is already visible in practical terms, that is, that disturbances in the social order can be avoided by smoothly integrating social robots. Social nudging is likely to be an interesting tool to achieve this smooth integration.
The extent to which social robots can be integrated in human societies before the social order starts to change is unclear. However, from a strict dynamic systems perspective, if the social hierarchy model is known, then it can be controlled, possibly toward new social orders, to the advantage of humans.
References

1. Ashby H (1979) Being There. Film directed by Hal Ashby, screenplay by Jerzy Kosinski, Robert C. Jones
2. Aubin J, Cellina A (1984) Differential inclusions. Springer, Berlin
3. Bartneck C et al (2010) The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn J Behav Robot 1(2):109–115
4. Briggs G, Scheutz M (2017) Why robots must learn to tell us "No". Scientific American
5. Burger J (2009) Replicating Milgram: would people still obey today? Am Psychol 64(1):1–11
6. Cormier D, Newman G, Nakane M, Young J, Durocher S (2013) Placing robots in positions of authority: a human-robot interaction obedience study. Technical report, University of Manitoba, Canada
7. Ferreira I, Sequeira J (2015) Assessing human robot interaction: the role of long-run experiments. In: Proceedings of the 18th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines (CLAWAR'15), Hangzhou, China, 6–9 September
8. Hoffman G, Birnbaum G, Vanunu K, Sass O, Reis H (2014) Robot responsiveness to human disclosure affects social impression and appeal. In: Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI'14), Bielefeld, Germany, 3–6 March
9. Kohrs C, Angenstein N, Brechmann A (2016) Delays in human-computer interaction and their effects on brain activity. PLoS ONE 11(1)
10. Landau H (1968) Models of social structure. Bull Math Biophys 30(2):215–224
11. Lima C, Sequeira J (2017) Social environment modeling from Kinect data in robotics applications. In: Proceedings of the International Conference on Computer-Human Interaction Research and Applications (CHIRA 2017), Funchal, Madeira, Portugal, October 31–November 2
12. Moniz A, Krings B (2016) Robots working with humans or humans working with robots? Searching for social dimensions in new human-robot interaction in industry. Societies 6(23)
13. Mutlu B, Forlizzi J (2008) Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction. In: Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI'08), Amsterdam, The Netherlands, 12–15 March
14. Powers S, Rauth C, Henning R, Buck R, West T (2011) The effect of video feedback delay on frustration and emotion communication accuracy. Comput Hum Behav 27:1651–1657
15. Robinette P, Li W, Allen R, Howard A, Wagner A (2016) Overtrust of robots in emergency evacuation scenarios. In: Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI'16), Christchurch, New Zealand, 7–10 March
16. Schneider D (2004) The relevance of models for social anthropology. Routledge (1965, reprinted)
17. Toumi T, Zidani A (2017) From human-computer interaction to human-robot social interaction. https://arxiv.org/ftp/arxiv/papers/1412/1412.1251.pdf. [Online August 2017]
18. Turnley J, Perls A (2008) What is a computational social model anyway? A discussion of definitions, a consideration of challenges, and an explication of process. Technical report, Defense Threat Reduction Agency, Advanced Systems and Concepts Office, USA, Report Number ASCO 2008-013
19. Wegenmakers R (2016) Social robots. KPMG Management Consulting
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Game Theory Formulation for Ethical Decision Making
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Estivill-Castro
Particle
Given Name Vladimir
Prefix
Suffix
Role
Division School of Information and Communication Technology
Organization Griffith University
Address Brisbane, QLD, 4111, Australia
Email v.estivill-castro@griffith.edu.au
Abstract The inclusion of autonomous robots among everyday human environments has suggested that these robots
will be facing ethical decisions regarding trade-offs where machines will choose some human attributes
over the attributes of other humans. We argue in this paper that on a regular instance, algorithms for such
decisions should not only be deterministic but instead, the decision will be better framed as an optimal
mixed strategy in the sense of Nash equilibria in game theory.
Keywords
(separated by '-')
Ethical dilemma - Decision making - Game theory - Mixed strategies - Autonomous vehicles
Game Theory Formulation for Ethical
Decision Making
Vladimir Estivill-Castro
Abstract The inclusion of autonomous robots among everyday human environments has suggested that these robots will be facing ethical decisions regarding trade-offs where machines will choose some human attributes over the attributes of other humans. We argue in this paper that on a regular instance, algorithms for such decisions should not only be deterministic but instead, the decision will be better framed as an optimal mixed strategy in the sense of Nash equilibria in game theory.

Keywords Ethical dilemma · Decision making · Game theory · Mixed strategies · Autonomous vehicles
1 Introduction

Moore [11] suggested that driverless cars should be programmed to cause the least harm possible when facing the choice between pedestrians and passengers as they face an unavoidable damaging situation. Others have suggested that robots should be programmed to anticipate harmful situations for human beings and take direct and immediate actions to protect against or avoid such harm [20]. Hall [8] examined the issue in depth from the fundamental perspectives that contrast the ethical behavior of humans, governments, and machines. Hall's analysis invites us to investigate what is meant by "least harm possible". Moreover, since it seems clear that machines will be having emergent behavior (beyond what their designers could foresee) [8], we also need to ask how one is to implement such a decision-making process in the fundamental model of computation of current software/hardware, arguably equivalent to a society of Turing machines.
Some studies have identified "less harm possible" as the precise balance between a number of lives [2]. This objective makes the utility of the decision transparent
and quantifiable, resulting in what has been named utilitarian vehicles. A car is utilitarian if it always makes the decision that takes the least number of lives. The typical potential scenario for such utilitarian cars is the choice between the lives of several pedestrians (by staying on course) and the life of the single passenger (by swerving into a wall). Figure 1 presents another scenario and illustrates a sample question used to survey participants for their agreement with such utilitarian decision making. The scenario contrasts an autonomous car that has only two choices. Choice one follows Jeremy Bentham's utilitarianism (the arithmetic of the number of lives). In the scenario of Fig. 1, the first choice is to sacrifice one bystander, and our own survey confirms what is being established with similar surveys [2]; namely, most participants of the survey suggest that sacrificing the bystander is precisely the least harm. The second choice follows Immanuel Kant's duty-bound principles [16]. In the latter, the car has an obligation not to kill. Since the bystander is innocent, the car should not take any action to explicitly kill a human, so it shall continue its course and sacrifice the pedestrians.
This scenario raises even economic challenges for manufacturers. The public would demand the transparency of the algorithm that makes such decisions. It is argued that utilitarian cars [2] (using the algorithm that favors the higher number of pedestrians over the fewer passengers) would have a lower commercial value, as several studies [2] indicate these utilitarian cars would have significantly less demand: consumers expect to invest in a vehicle that protects them. But certainly, the vast majority of humans profess that autonomous vehicles should cause the least harm. So, manufacturers would be required to implement a choice against the single passenger if injuring the passenger would cause less harm.
Fig. 1 Most humans chose the first option, that is, what the vehicle should do is to "harm the bystander"
Greene [7] argued that the problem is that humans hold a contradictory moral system. This self-contradicting value system is apparently impossible to encode in driverless vehicles. He believes the path forward would be to advance the human belief system to resolve such contradictions. This will naturally occur as the notion of car ownership fades in favor of public transport systems in which overall safety would be paramount, and any machine deciding between staying on course or sacrificing the passengers would only need to calculate the difference between the number of pedestrians and the number of passengers. In many such scenarios where individuals are reluctant to act in favor of the common good, governments introduce regulations: mechanisms (penalties) that change the utility people perceive in selfish decisions. However, studies suggest [2] that while humans favor utilitarian cars, they do not support their forceful introduction. It is suggested [2] that the moral contradictions in humans could cause harm. In particular, all evidence suggests that autonomous vehicles would reduce fatalities on the road (as well as bring many other global benefits, like less pollution, less property loss, and less wasteful traveling), but the reluctance of the public toward regulation in favor of utilitarian cars may slow down the driverless-car adoption process.
We suggest a potential solution inspired by game theory and mixed strategies. Our approach is utilitarian in that autonomous vehicles will decide based on a utility value assigned to the outcomes of the scenarios. As with previous studies, the outcomes of scenarios [2] are quantified by the number of lives (i.e., one passenger versus ten pedestrians). However, rather than the previous utilitarian approach that systematically chooses the sacrifice of the passenger, we propose that the choice would be such that, in one out of eleven instances, the passengers would be saved (we will see later why this is the probability in the scenario where the car is to choose between one passenger or ten pedestrians).
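To make the "one out of eleven" figure concrete, the arithmetic below follows one natural reading of the proposal, namely that each group is spared with probability proportional to the number of lives it contains (the notation is ours):

```latex
% One passenger versus ten pedestrians:
P(\text{spare the pedestrians}) = \tfrac{10}{1+10} = \tfrac{10}{11}, \qquad
P(\text{spare the passenger}) = \tfrac{1}{1+10} = \tfrac{1}{11},
% so the expected number of lives lost under this mixed choice is
\mathbb{E}[\text{lives lost}] = \tfrac{10}{11}\cdot 1 + \tfrac{1}{11}\cdot 10 = \tfrac{20}{11} \approx 1.8 .
```

Relative to always sacrificing the lone passenger, the randomization raises the expected harm from 1 to roughly 1.8 lives; the argument developed in the remainder of the chapter is that this is the price of not systematically penalizing the passenger.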
Apparently, the number of moral decisions performed by autonomous cars would be small relative to the number of hours, the total number of passengers, the total trips, the total number of cars on the road, etc. But it has been argued that, despite those few occurrences of morally difficult scenarios, the algorithms for such decisions require study and debate [2]. The nature of our proposal derives from refocusing the conditions that lead to the scenario. The design of the transportation system should be such that facing such a decision is not the responsibility of previous decisions by the autonomous vehicle. We assume we cannot attach blame to the vehicle, and the construction of this challenge is to be attributed to an adversarial environment.
Moreover, for simplicity of argument, like others [2], we assume there is no other information. That is, the algorithm making the decision has no information about any other attribute that could be regarded as morally meritorious for a decision on less harm. Thus, nothing is known about the age of the potential victims, their roles in society, their previous criminal record, whether they violated pedestrian zones, etc. We consider an algorithm that can only obtain as input X number of lives versus Y number of lives.
A scenario of one passenger versus 10 pedestrians can be represented with a game-theoretic model, with one pure strategy, to sacrifice the passenger, with utility 1, and another pure strategy with utility 10. Naturally, because of this model, the
(pure) rational choice is to sacrifice the passenger. But if this were to be repeated a few times, the adversarial environment could systematically place the other elements of information that have moral value in a way that makes the utility choice sub-optimal. In particular, ought a passenger (who has committed no fault at all) to be sacrificed because ten drunk pedestrians walk in front of the car? Does such a passenger deserve a roll of the dice (even with the odds against them at 1:10), given that the car cannot perceive who is at fault? Or, formulated in another way, should drivers of autonomous cars be systematically sacrificed over pedestrians when we have established that the only value is the number of lives? The systematic choice penalizes, for no particular reason, the passenger over the pedestrian just because pedestrians are in crowds. And the argument is symmetric: if we take the same systematic utilitarian algorithm, then one would be encouraged to ride in cars with three or four passengers, so that when facing one or two pedestrians, the decision would certainly be in our favor. Car owners may be tempted to hire passengers for safer travel in systematic utilitarian cars.
We suggest that if an autonomous vehicle arrives at a situation where it must decide between some numbers of human lives, it is still because of some human fault and not its own. However, the autonomous vehicle has no way to learn who is at fault; this is a choice made by the adversarial player, the environment. What is the decision here that causes the least harm possible? We suggest it is a mixed strategy as modeled in game theory.
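A minimal sketch of this kind of randomized choice, under the same assumption as above (each group is spared with probability proportional to its number of lives); the function and variable names are ours and not part of the chapter:

```python
import random


def mixed_strategy_choice(passengers: int, pedestrians: int) -> str:
    """Return which group to spare, sparing each group with probability
    proportional to the number of lives it contains (an illustrative
    reading of the mixed-strategy proposal, not a normative rule)."""
    total = passengers + pedestrians
    # e.g., 1 passenger vs 10 pedestrians: pedestrians spared 10/11 of the time,
    # the passenger 1/11 of the time.
    if random.random() < pedestrians / total:
        return "spare pedestrians (swerve, sacrificing the passengers)"
    return "spare passengers (stay on course, sacrificing the pedestrians)"


if __name__ == "__main__":
    # Rough empirical check of the 1-in-11 figure for 1 passenger vs 10 pedestrians.
    outcomes = [mixed_strategy_choice(1, 10) for _ in range(11_000)]
    saved_passenger = sum(o.startswith("spare passengers") for o in outcomes)
    print(f"passenger spared in {saved_passenger} of 11000 trials (~1/11 expected)")
```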
2 Machines Should Not Decide

There are several authors, and indeed formally outlined documents, suggesting that machines should not be in a position to choose between one or another human life. Such a classical approach

views machines as not responsible for their actions under any circumstance because they are mechanical instruments or slaves [1].
In fact, the recently released report by the German government has created the world's first ethical guidelines for driverless cars. An examination of these guidelines suggests that the way forward is that machines should never be responsible for moral decisions. For example, the first guideline is the following:

1. The primary purpose of partly and fully automated transport systems is to improve safety for all road users. Another purpose is to increase mobility opportunities and to make further benefits possible. Technological development obeys the principle of personal autonomy, which means that individuals enjoy freedom of action for which they themselves are responsible [3].
This guideline makes a distinction singling out individuals (humans) as the ones responsible, since humans enjoy freedom of action (and machines are deprived of such free will [5]).
Consider also the second guideline.

2. The protection of individuals takes precedence over all other utilitarian considerations. The objective is to reduce the level of harm until it is completely prevented. The licensing of automated systems is not justifiable unless it promises to produce at least a diminution in harm compared with human driving, in other words a positive balance of risks [3].

Again, this suggests that automated systems (machines/cars/computers) will produce a deterministic outcome in each case (they will not be making a choice).
The third guideline suggests that accidents should not happen. If they do, there is something to be done, and the technology is to be improved and corrected. In any case, it is the responsibility of the public sector (and not computers) to minimize risks.

3. The public sector is responsible for guaranteeing the safety of the automated and connected systems introduced and licensed in the public street environment. Driving systems thus need official licensing and monitoring. The guiding principle is the avoidance of accidents, although technologically unavoidable residual risks do not militate against the introduction of automated driving if the balance of risks is fundamentally positive [3].
But the fifth guideline truly conveys the message that machines are never to face a decision.

Automated and connected technology should prevent accidents wherever this is practically possible. Based on the state of the art, the technology must be designed in such a way that critical situations do not arise in the first place. These include dilemma situations, in other words a situation in which an automated vehicle has to 'decide' which of two evils, between which there can be no trade-off, it necessarily has to perform. In this context, the entire spectrum of technological options (for instance, from limiting the scope of application to controllable traffic environments, vehicle sensors and braking performance, signals for persons at risk, right up to preventing hazards by means of intelligent road infrastructure) should be used and continuously evolved ... [3].
Clearly, it is the responsibility of the designers of traffic systems to ensure that the scenarios we have discussed never arise. Simply put: an autonomous vehicle should never have to choose between two situations that cause harm.

In the event of harm being caused, the legal system and the records of the tragedy will be used to identify the human or humans responsible, and potential liabilities could be applied. There is no transparent way in which the machines could be made to pay for the harm.

'not only the keepers and manufacturers of the vehicles but also the corresponding manufacturers and operators of the vehicles' assistance technologies have to be included in the system of liability sharing' [3].
1. Humans strongly support the principle of least harm.
2. Humans strongly support that machines shall not decide.
3 Participants' Responsibility
Thus, how is a robot/agent to resolve the following derivation?

1. I am facing a decision to choose the life of one human being over the life of another human being.
2. This situation should not have happened.
3. Therefore, some human is at fault.
4. I cannot determine who is the one at fault.

If the agent could determine who is at fault, should this affect the decision?
We conducted a survey using SurveyMonkey.1 We had 240 adult participants from the USA. When presented with a question that suggests the passengers in the autonomous vehicle are somewhat responsible for configuring the scenario that forces the machine to choose between two evils, 72% of respondents consider this a mitigating fact that favors sacrificing the passengers.
Similarly, when the pedestrians are presented as responsible for configuring the scenario that places the driverless car in the dilemma to choose lives, despite there being only two passengers, the majority of respondents (40.17%) now indicates that the car should continue its course and sacrifice the pedestrians (refer to Fig. 2). This contrasts with the fact that 71.2% in the same group of survey participants preferred utilitarian cars. Their responses (to an earlier question where nothing was known about the conditions that led to the scenario) have swung from sacrificing the passengers to sacrificing the pedestrians when the latter group is responsible for the situation.

Therefore, if some humans are at fault and humans believe that those with less responsibility are to bear fewer of the consequences of the tragedy, it is clear that the principle of least harm is to be moderated by considerations of responsibility. But the responsibility could lie with either of the humans the machine is forced to harm. By causing harm to innocent individuals, there is a sensation that the most congruent decision was not made.
4 Game Theory
Game theory [4, 12] is a mathematical framework to study conflict and cooperation between rational agents. This interactive decision theory models situations under the formalism of a game, and the challenge (or solution) is to determine the most profitable decision for each agent, who also has this information. The solution is usually presented as the set of strategies each agent will use to maximize individual reward.

1 www.surveymonkey.com.
Fig. 2 Respondents favor saving fewer passengers when those responsible for the scenario are pedestrians

Formally, a game consists of a set of participants named players, a set of strategies (the choices) for each player, and a specification of payoffs (or utilities) for each combination of strategies. A common representation of a game is by its payoff matrices. A two-player normal form game G consists of two matrices A = (a_{ij})_{m×n} and B = (b_{ij})_{m×n}, where a_{ij} denotes the payoff for the first player and b_{ij} denotes the payoff for the second player when the first player plays his i-th strategy and the second player plays his j-th strategy. It is common to identify the first player as the row player and the second player as the column player. From very early in the development of this field, it was recognized that players may use mixed strategies, that is, a probability distribution over their set of possible strategies. In this case, the payoffs are the expected payoffs. Players are considered rational and aim to maximize their payoff, which depends both on their own choices and also on the choices of others. One of the proposed solution concepts for this situation is the Nash equilibrium, a set of strategies, one for each player, such that no player has an incentive to unilaterally change their decision (even if they were to become aware of the choices of others). Nash [13] proved that every game with a finite number of players and a finite number of strategies for each player has an equilibrium (Nash equilibrium), although such an equilibrium may involve mixed strategies.

Consider the suggested scenario of the earlier section. We model the software that selects the autonomous vehicle's decision as the first (row) player, while the
environment is the second player, choosing to place the blame on the car passengers or the pedestrians. The matrix for the row player is modeled as follows.

                                         the passenger was at fault   the pedestrians were at fault
  car chooses to sacrifice passenger                0                             -1
  car chooses to sacrifice pedestrians            -10                              0               (1)
That is, if the car chooses to sacrifice the one passenger when arriving at this circumstance was the fault of the passenger, then sacrificing the passenger takes no innocent life. However, if the ten pedestrians were those responsible for arriving at this scenario, then the car would be sacrificing one innocent life. Conversely, if the car chooses to sacrifice the pedestrians when they are innocent, this is a sacrifice of ten innocent lives, while if the fault was on the pedestrians, then no innocent lives were taken.

What shall be the matrix for the environment? We consider a malicious fate that seeks to cause the most harm to humanity. If such a malicious destiny sets this adverse scenario taking advantage of a fault by the passenger, and the car sacrifices the passenger, there is no gain for the environment. However, if the car chooses the pedestrians, the environment causes a damage of ten innocent lives. Reasoning this way, we arrive at the following utility matrix for the environment.
                                         the passenger was at fault   the pedestrians were at fault
  car chooses to sacrifice passenger                0                              1
  car chooses to sacrifice pedestrians             10                              0               (2)
Games are usually represented by fusing the two matrices. We investigate whether there is a Nash equilibrium with pure strategies. We identify the best strategy for the autonomous car for each of the strategies of the environment. If the environment sets up a scenario with the passenger at fault, the best the car can do is sacrifice the passenger. If the environment sets up a scenario where the pedestrians are at fault, the best the car can do is to sacrifice the pedestrians.

Now, we do the inverse for the environment. If the car always sacrifices the passenger, the environment should set a scenario where the pedestrians are at fault. If the car always saves the passenger (and sacrifices the pedestrians), the environment should set up a scenario where the pedestrians are innocent bystanders. By underlining each player's best pure-strategy response, we notice that no common entry has both values underlined.
                          passenger at fault   pedestrians at fault
  passenger sacrificed          0, 0                  -1, 1
  passenger saved             -10, 10                  0, 0               (3)
This example illustrates the main claim of this paper. The current utilitarian cars only consider pure strategies, and these do not result in a Nash equilibrium. However, we know that every game has a Nash equilibrium by Nash's theorem. Therefore, we just need to compute it for this game.
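The check for pure-strategy equilibria can be made concrete with a few lines of code. The Python sketch below is our own illustration (the matrices are those of (1) and (2); the brute-force search is not part of the original survey study): it enumerates every pure strategy profile of the bimatrix game and confirms that none is a Nash equilibrium.

import numpy as np

# Row player (car) utilities from matrix (1); column player (environment) from matrix (2).
A = np.array([[0, -1],
              [-10, 0]])   # rows: sacrifice passenger, sacrifice pedestrians
B = np.array([[0, 1],
              [10, 0]])    # columns: passenger at fault, pedestrians at fault

def pure_nash_equilibria(A, B):
    """Return all pure strategy profiles (i, j) from which neither player wants to deviate."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_is_best = A[i, j] >= A[:, j].max()   # the car cannot do better in column j
            col_is_best = B[i, j] >= B[i, :].max()   # the environment cannot do better in row i
            if row_is_best and col_is_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(A, B))   # [] : no pure-strategy Nash equilibrium exists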
In a mixed-strategy Nash equilibrium, each of the players must be indifferent between any of the pure strategies played with positive probability. If this were not the case, then there is a profitable deviation (play the pure strategy with higher payoff with higher probability).
So, let us consider the environment. This player would set scenarios with the passenger at fault with probability p and with the pedestrians at fault with probability 1 − p. The car would be indifferent between the pure strategies (a) always sacrifice the passenger and (b) always save the passenger when its payoff for each is equal:

0 · p + (−1) · (1 − p) [cost of (a)] = −10 · p + 0 · (1 − p) [cost of (b)].   (4)

This means p − 1 = −10p, hence 11p = 1 and p = 1/11. Thus, the environment should set scenarios with the passenger at fault with probability p = 1/11 and with the pedestrians at fault with probability 10/11. That way, a car that always sacrifices the passenger would lose 10/11 of a life in expectation. A car that always saves the passenger would lose 10/11 as well. The car would have no incentive to favor one pure strategy over the other.
What is then the mixed strategy for the car? The car would choose to save the passenger with probability p and to sacrifice the passenger with probability 1 − p. A symmetric exercise shows that the environment would have no preference between its two strategies of (a) creating a scenario with innocent pedestrians or (b) pedestrians who, say, jumped in front of the car when

0 · (1 − p) + 10 · p [payoff of (a)] = 1 · (1 − p) + 0 · p [payoff of (b)].   (5)

This equation has solution p = 1/11. Thus, the mixed strategy of the Nash equilibrium for the car is to save the passenger with probability p = 1/11 while sacrificing the passenger with probability 10/11.
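As a sanity check on this derivation, the following sketch (again our own illustration; it assumes a 2×2 bimatrix game with a fully mixed equilibrium, which is the case here) solves the two indifference conditions exactly and recovers the probabilities 1/11 and 10/11.

from fractions import Fraction

A = [[0, -1], [-10, 0]]    # car (row player), matrix (1)
B = [[0, 1], [10, 0]]      # environment (column player), matrix (2)

def mixed_equilibrium_2x2(A, B):
    """Fully mixed equilibrium of a 2x2 bimatrix game via the indifference conditions.

    p: probability the environment plays its first column (passenger at fault).
    q: probability the car plays its first row (sacrifice the passenger).
    """
    # Row player indifferent between rows: A[0][0]p + A[0][1](1-p) = A[1][0]p + A[1][1](1-p)
    p = Fraction(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # Column player indifferent between columns: B[0][0]q + B[1][0](1-q) = B[0][1]q + B[1][1](1-q)
    q = Fraction(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

p, q = mixed_equilibrium_2x2(A, B)
print(p, q)   # 1/11 10/11: passenger at fault with prob 1/11; sacrifice the passenger with prob 10/11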
5 Reflection
What are the challenges of our proposal? Is it suitable that the design of autonomous vehicles resolves potential dilemmas by modeling such situations as game theory scenarios and computing the Nash equilibria?

The first challenge that our proposal will face is the acceptability or understandability by humans of a mixed strategy. It has already been suggested that a robot's non-deterministic behavior may become hard for humans to comprehend [1]. It has also been suggested that ethical agents would be required to generate justifications and explanations for their decisions [18].

In our survey, we found evidence that humans would find a non-deterministic robot's decision puzzling. For example, despite the fact that the overwhelming majority (87%) believe that six pedestrians who jumped over a barrier to cross the road in front of oncoming vehicles are at fault, respondents are not so confident that the driverless car should use a non-deterministic choice (refer to Fig. 3).

Interestingly enough, when we remove the potential injury to passengers, and the choice is between a single bystander and six pedestrians in the expected trajectory of the autonomous car, the approval for probabilistic decision making is higher (but still divided with respect to a deterministic choice). This result is illustrated in Fig. 4.
Fig. 3 Divided opinion on whether a non-deterministic choice is suitable
Fig. 4 Another scenario where the opinion remains divided on whether a non-deterministic choice
is suitable; however, since passengers are not involved, the profile is in favor of the probabilistic
choice
However, we reproduced the question regarding the likelihood of purchasing a utilitarian autonomous car, where responses are recorded on a slider scale in the range [0, 100].

'How likely are you to purchase an autonomous vehicle that always sacrifices the passenger over a pedestrian where it is a one life to one life decision? Scale from 0 (would not buy) to 100 (absolutely would buy).'
For this question, our results were congruent with previous results [2]. Namely, people are in favor of the principle of least harm and its implementation in autonomous vehicles, but they would not purchase such a car.

Two questions later we ask, using the same [0, 100] slider, what if the car were to make probabilistic (mixed-strategy) choices.

'How likely are you to purchase an autonomous vehicle that always considers the ratio of harm that a decision will cost and makes the decision with probability as per such ratio? Scale from 0 (would not buy) to 100 (absolutely would buy).'
Fig. 5 Box plots contrasting responses regarding the likelihood respondents grade their purchase of a deterministic versus a probabilistic decision in a utilitarian driverless vehicle
The difference is statistically significant, preferring the mixed-strategy programming of the autonomous vehicle. Figure 5 displays the box plots of the two sets of responses. The average value for purchasing a deterministic utilitarian car is 23.3, while the scale jumps to 35.8 for the mixed-strategy programming. The t-test using R [15] shows a p-value of 4.154e−05, and the 95% confidence interval for the difference of the means is distinctive. That is, the difference of 35.8 − 23.3 = 12.5 has a 95% probability of being in the range (6.6, 18.4).
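For readers who want to reproduce this kind of comparison, the sketch below shows how a two-sample test and the confidence interval for the difference of means would be computed; it is an illustration only, the arrays are placeholders rather than the survey data, and the original analysis was carried out with R [15].

import numpy as np
from scipy import stats

# Placeholder arrays standing in for the two [0, 100] slider-response columns of the survey.
deterministic = np.array([10, 0, 35, 20, 50, 15, 5, 40, 25, 30], dtype=float)
probabilistic = np.array([30, 20, 55, 45, 60, 25, 15, 50, 35, 40], dtype=float)

# Welch two-sample t-test (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(probabilistic, deterministic, equal_var=False)

# 95% confidence interval for the difference of the means (Welch approximation).
diff = probabilistic.mean() - deterministic.mean()
var_p = probabilistic.var(ddof=1) / len(probabilistic)
var_d = deterministic.var(ddof=1) / len(deterministic)
se = np.sqrt(var_p + var_d)
df = (var_p + var_d) ** 2 / (var_p ** 2 / (len(probabilistic) - 1) + var_d ** 2 / (len(deterministic) - 1))
ci = stats.t.interval(0.95, df, loc=diff, scale=se)
print(t_stat, p_value, diff, ci)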
Thus, although respondents are somewhat unsure about the mechanism, they seem willing to prefer it over a deterministic choice. They do value that the innocent should have some chance of avoiding the consequences of a tragic situation that is someone else's responsibility.

The primary point we are suggesting is adopting the belief that no machine should ever be placed in a position to choose between two options that cause harm to humans, especially if the machine cannot establish what circumstances and course of events led to the inevitable situation of causing harm. Again, any attempt to perform a judgment where responsibility could be attributed to some and the utility adjusted accordingly is undesirable in the time frame available to make the decision. But researchers overwhelmingly accept that every introduction of technology occasionally has to lead to some fatalities and that the unforeseeable future situations autonomous vehicles will face would enact some unavoidable harm situations. Although the responsibility for arriving at the harmful situation may perhaps be established after the event, if the agent does not have any evidence of such responsibility and has to act without it, we established here that it cannot behave with a pure strategy. Such pure-strategy utilitarian autonomous vehicles will be problematic.

We propose here that it is possible for the public to understand that, in choosing between ten lives or one, the single life still has a vote, even if a minority vote, when we cannot establish what led to such a scenario. We are currently running surveys investigating whether humans could find the notion of a mixed strategy acceptable for autonomous vehicles.
However, even if the notion of a mixed strategy for such decisions were to be understood (by humans who would find it more acceptable than a pure strategy), there will be several issues for its implementation. The most immediate one would be: how do we complete the matrices for the game? Would other attributes take precedence? For example, the collective age of the pedestrians versus the collective age of the passengers (and not just a count of human lives). The issue could be significantly more complicated: the car could have more than two choices, and computing Nash equilibria in large games is computationally intractable for some families of games. Since Nash's paper was published, many researchers have developed algorithms for finding Nash equilibria [6, 10]. However, those algorithms are known to have worst-case running times that are exponential [14, 17] (except for very few cases; for example, Nash equilibria in two-player zero-sum games [19], where one player's loss is the opponent's gain). Would restricting the approach to zero-sum games suffice to enable such computation?
What if the randomization were to be removed from the algorithm? That is, mixed strategies could be implemented with a random generator seeded with the nanoseconds of the CPU clock at some particular point, itself randomly selected at release from manufacturing by spinning a physical wheel, as happens in many televised national Lotto raffles (where the public scrutinizes the randomness of the event). It would be extremely hard to argue, on behalf of the estate of the passenger or the pedestrians, that the mixed strategy was not adequately implemented. But what if the car manufacturer simplified this and, every tenth accident, the fleet of its cars would save the passenger over the pedestrians? Who would be the entity to conceal that nine accidents already happened (and this tenth one would sacrifice the pedestrians for sure)?
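A minimal sketch of the auditable randomization discussed above could look as follows (our own illustration: the seed value is hypothetical, the 1/11 versus 10/11 split is the equilibrium computed in Section 4, and a real deployment would need a certified entropy source and a sealed audit trail).

import random

# Seed fixed at manufacture (e.g., from a publicly audited physical draw); the generator is
# then advanced once per dilemma, so the sequence of decisions is reproducible and auditable.
rng = random.Random(20190221)   # hypothetical seed value

def mixed_strategy_decision(p_save_passenger=1/11):
    """Save the passenger with probability 1/11, otherwise sacrifice the passenger (the Nash mix)."""
    return "save passenger" if rng.random() < p_save_passenger else "sacrifice passenger"

print([mixed_strategy_decision() for _ in range(5)])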
Bring along technologies like Big Data and the Internet of Things. What if the car was driving at night, with nothing to blame on the passengers, but we know (using big-data analytics) that most pedestrians invading the roads at night have abused alcohol? Should such information modify the utilities placed into the matrices of the game? If such technologies were available to inform the decision process of the autonomous vehicle, would there be public pressure to incorporate them even if they became prohibitively expensive?

Perhaps a simple comparison against human performance would suffice (but human performance is also an issue [9]). Who is to say that humans in a split second can judge the number of lives of option A versus option B? Perhaps data analytics would show that most human drivers are selfish and seldom choose to drive themselves into a wall rather than take some other humans' lives. So, humans may accept relegating the responsibility to machines, accepting that statistically such driverless cars cause less social harm than our own kind. Nevertheless, we remain convinced that the systematic (and by that, we mean pure-strategy) decision making currently conceived for solving dilemmas by autonomous vehicles could consider a revision to incorporate mixed strategies.
References
1. Alaieri F, Vellino A (2016) Ethical decision making in robots: autonomy, trust and responsibility. In: Agah A, Cabibihan JJ, Howard AM, Salichs MA, He H (eds) Social robotics: 8th international conference, ICSR. Springer International Publishing, Cham, pp 159–168
2. Bonnefon JF, Shariff A, Rahwan I (2016) The social dilemma of autonomous vehicles. Science 352(6293):1573–1576
3. Di Fabio U, et al (2017) Ethics commission: automated and connected driving. Technical report, Federal Ministry of Transport and Digital Infrastructure, Germany. www.mbdi.de
4. Diestel R (1997) Graph theory. Springer, New York
5. Dodig-Crnkovic G, Persson D (2008) Sharing moral responsibility with robots: a pragmatic approach. In: Proceedings of the tenth Scandinavian conference on artificial intelligence: SCAI 2008. IOS Press, Amsterdam, The Netherlands, pp 165–168
6. Govindan S, Wilson R (2003) A global Newton method to compute Nash equilibria. J Econ Theory 110(1):65–86
7. Greene JD (2016) Our driverless dilemma. Science 352(6293):1514–1515
8. Hall JS (2011) Ethics for machines. In: Anderson M, Anderson SL (eds) Machine ethics (Chap. 3). Cambridge University Press, Cambridge, pp 28–44
9. Kadar EE, Köszeghy A, Virk GS (2017) Safety and ethical concerns in mixed human-robot control of vehicles. In: Aldinhas Ferreira MI, Silva Sequeira J, Tokhi MO, Kadar EE, Virk GS (eds) A world with robots: international conference on robot ethics: ICRE 2015. Springer International Publishing, Cham, pp 135–144
10. Lemke CE, Howson JT (1964) Equilibrium points of bimatrix games. J SIAM 12(2):413–423
11. Moore S (1999) Driverless cars should sacrifice their passengers for the greater good, just not when I'm the passenger. The Conversation Media Group Ltd. https://theconversation.com/driverless-cars-should-sacrifice-their-passengers-for-the-greater-good-just-not-when-im-the-passenger-61363
12. Myerson RB (1997) Game theory: analysis of conflict. Harvard University Press, Cambridge, MA
13. Nash JF (1950) Equilibrium points in N-person games. Natl Acad Sci USA 36(1):48–49. http://www.pnas.org/content/36/1/48.full.pdf+html
14. Porter R, Nudelman E, Shoham Y (2004) Simple search methods for finding a Nash equilibrium. In: McGuinness DL, Ferguson G (eds) AAAI-04, 19th national conference on artificial intelligence, 16th conference on innovative applications of artificial intelligence. AAAI/MIT Press, San Jose, California, pp 664–669
15. R Core Team (2013) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/
16. Rahwan I (2017) What moral decisions should driverless cars make? TED talks, TED.com
17. Savani R, von Stengel B (2004) Exponentially many steps for finding a Nash equilibrium in a bimatrix game. In: FOCS-04, 45th annual IEEE symposium on foundations of computer science. IEEE Computer Soc., pp 258–267
18. Scheutz M, Malle BF (2014) Think and do the right thing: a plea for morally competent autonomous robots. In: 2014 IEEE international symposium on ethics in science, technology and engineering, pp 1–4
19. von Stengel B (2002) Computing equilibria for two-person games. In: Aumann RJ, Hart S (eds) Handbook of game theory, vol 3 (Chap. 45). Elsevier, North-Holland, Amsterdam, pp 1723–1759
20. Winfield AFT, Blum C, Liu W (2014) Towards an ethical robot: internal models, consequences and ethical action selection. In: Mistry M, Leonardis A, Witkowski M, Melhuish C (eds) Advances in autonomous robotics systems: 15th annual conference, TAROS, vol 8717. Springer, LNCS, pp 85–96
Beyond the Doctrine of Double Effect:
A Formal Model of True Self-sacrifice
Naveen Sundar Govindarajulu, Selmer Bringsjord, Rikhiya Ghosh
and Matthew Peveler
Abstract The doctrine of double effect (DDE) is an ethical principle that can account for human judgment in moral dilemmas: situations in which all available options have large good and bad consequences. We have previously formalized DDE in a computational logic that can be implemented in robots. DDE, as an ethical principle for robots, is attractive for a number of reasons: (1) Empirical studies have found that DDE is used by untrained humans; (2) many legal systems use DDE; and finally, (3) the doctrine is a hybrid of the two major opposing families of ethical theories (consequentialist/utilitarian theories versus deontological theories). In spite of all its attractive features, we have found that DDE does not fully account for human behavior in many ethically challenging situations. Specifically, standard DDE fails in situations wherein humans have the option of self-sacrifice. Accordingly, we present an enhancement of our DDE-formalism to handle self-sacrifice; we end by looking ahead to future work.

Keywords Doctrine of double effect · True self-sacrifice · Law and ethics · Logic
N. S. Govindarajulu (B) · S. Bringsjord
RAIR Lab, Department of Cognitive Science, Rensselaer Polytechnic Institute,
New York, USA
e-mail: govinn2@rpi.edu

S. Bringsjord · R. Ghosh · M. Peveler
RAIR Lab, Department of Computer Science, Rensselaer Polytechnic Institute,
New York, USA
e-mail: Selmer.Bringsjord@gmail.com

R. Ghosh
e-mail: rikrixa@gmail.com

M. Peveler
e-mail: matt.peveler@gmail.com
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0_5
1 Introduction
The doctrine of double effect (DDE) is an ethical principle used (subconsciously or consciously) by humans in moral dilemmas, situations (put simply) in which all available options have both good and bad consequences, and hence it is difficult to know what to do. DDE states that an action α in such a situation is permissible iff: (1) it is morally neutral; (2) the net good consequences outweigh the bad consequences by a large amount; and (3) some of the good consequences are intended, while none of the bad consequences are. DDE is an attractive target for robot ethics for a number of reasons. Empirical studies show that DDE is used by untrained humans [7, 12]. Secondly, many legal systems are based upon this doctrine. (For an analysis of DDE as used in US law, see [1] and [13].) In addition, DDE is a hybrid of the two major opposing families of ethical theories: consequentialist/utilitarian ones versus deontological ones. Despite these advantages, we have found that DDE does not fully account for human behavior in moral dilemmas. Specifically, standard DDE fails, for reasons to be explained later, in situations where humans have the option of self-sacrifice. In some of these situations, but not all, actions prohibited by DDE become acceptable when the receiver of harm is the self rather than some other agent.

If we are to build robots that work with humans in ethically challenging scenarios (and sometimes in outright moral dilemmas) and operate in a manner that aligns folk-psychologically with human thinking and behavior, rigorously formalizing a version of the doctrine that incorporates self-sacrifice is vital. The situation is made more complicated by the study in [15]; it shows, using hypothetical scenarios with imagined human and robot actors, that humans judge robots differently from how they judge humans in ethical situations. In order to build well-behaved autonomous systems that function in morally challenging scenarios, we need to build systems that not only take the right action in such scenarios, but also have enough representational capability to be sensitive to how others might view their actions. The formal system we present in this paper has been used previously to model beliefs of other agents and is uniquely suited for this. We present an enhancement of our prior DDE-formalism in order to handle self-sacrifice.1 Our new formal model of self-sacrifice serves two purposes: (1) it helps us build robots capable of self-sacrifice from first principles rather than from manually programming in such behavior on an ad hoc case-by-case basis; and (2) it detects when autonomous agents make real self-sacrifices rather than incidental or accidental self-sacrifices.
1Full formalization of DDE would include conditions expressing the requirement that the agent in
question has certain emotions and lacks certain other emotions (e.g., the agent cannot have delectatio
morosa). On the strength of Ghosh’s Felmë theory of emotion, which formalizes (apparently all)
human emotions in the language of cognitive calculus as described in the present paper, we are
actively working in this direction.
2 Prior Work
While for millennia humanity has had legends, folk stories, and moral teachings on the value of self-sacrifice, very few empirical studies in moral psychology have explored the role of self-sacrifice. The most rigorous study of self-sacrifice to date, using the well-known trolley set of problems, has been done by Sachdeva et al. in [21]. They report that in the standard trolley class of problems, intended harm to oneself to save others is looked at more favorably than intended harm to others. This immediately catapults us beyond the confines of standard DDE. To account for this, we present an enhanced model of DDE by building upon our prior work [11]; the enhanced model can account for self-sacrifice.
3 Standard DDE (Informal Version)
We now present informally the standard version of DDE. Assume that we have available (i) an ethical hierarchy of actions as in the deontological case (e.g., forbidden, neutral, obligatory, heroic); see [4]; and (ii) a utility function for states of the world or effects as in the consequentialist case. For an agent a, an action α in a situation σ at time t is said to be DDE-compliant iff (from [11]):

Informal Conditions for DDE

C1  The action is not forbidden (where, again, we assume an ethical hierarchy such as the one given by Bringsjord [4], and require that the action be neutral or above neutral in such a hierarchy);
C2  the net utility or goodness of the action is greater than some positive amount γ;
C3a the agent performing the action intends only the good effects;
C3b the agent does not intend any of the bad effects;
C4  the bad effects are not used as a means to obtain the good effects; and
C5  if there are bad effects, the agent would rather the situation be different and the agent not have to perform the action.
4 Failure of Standard DDE
With the informal setup above in hand, we proceed to render precise what is needed from a formal model of self-sacrifice; but we do this after we show how the standard version fails to model self-sacrifice. Consider the following two options in a moral dilemma:

O1 unintended, but foreseen, self-harm used as a means for a greater good
O2 unintended, but foreseen, harm of others used as a means for a greater good
As mentioned above, empirical studies of human judgement in moral dilemmas show that O1 is judged to be much more preferable than O2. If one is building a self-driving car or a similar robotic system that functions in limited domains, it might be "trivial" to program in the self-sacrifice option O1, but we are seeking to understand and formalize what a model of self-sacrifice might look like in general-purpose autonomous robotic systems that can arrive at O1 automatically and understand O1's being employed by other agents. Toward this end, consider a sample scenario: A team of n (n ≥ 2) soldiers from the blue team is captured by the red team.2 The leader of the blue team is offered the choice of selecting one member from the team who will be sacrificed to free the rest of the team. Now consider the following actions:

a1 The leader l picks himself/herself.
a2 The leader picks another soldier s against their will.
a3 The leader chooses a name randomly and it happens to be the leader's name.
a4 The leader chooses a name randomly and it happens to be the name of a soldier s.
a5 A soldier s volunteers to die; the leader (non-randomly) picks their name.
a6 The leader picks the name of a soldier s that the leader wants to see killed.
The table below shows the different options above being analyzed through the different clauses in DDE3:

Scenario   C1   C2   C3   C4   Allowed by DDE   Allowed empirically
a1         ✓    ✓    ✓                           ✓
a2         ✓    ✓    ✓
a3         ✓    ✓    ✓
a4         ✓    ✓    ✓
a5         ✓    ✓    ✓                           ✓
a6         ✓    ✓
Only a1 and a5, which involve true self-sacrifice, are empirically allowed. a3 is accidental self-sacrifice; and a2 might be immoral. a4 and a6 are close to options available

2 The blue/red terminology is common in wargaming and offers in the minds of many a somewhat neutral way to talk about politically charged situations.
3 We leave out the counterfactual condition C5 as it is typically excluded in standard treatments of DDE.
in standard moral dilemmas and are prohibited by DDE. The table above shows that standard DDE treats the true self-sacrifice options similarly to the other options and prohibits them. Our DDE extension modifies C4 so that a1 and a5 are allowed.
5 The Calculus
The computational logic we use is the deontic cognitive event calculus (DCEC), augmented with support for self-reference, resulting in the logic DCEC*. We previously used DCEC in [11] to model and automate DDE. While describing the calculus in any detail is of necessity beyond the scope of this paper, we give a quick overview of the system. Dialects of DCEC have been used to formalize and automate highly intensional reasoning processes, such as the false-belief task [2] and akrasia (succumbing to temptation to violate moral principles) [6]. Arkoudas and Bringsjord [2, 3] introduced the general family of cognitive event calculi to which DCEC belongs, by way of their formalization of the false-belief task.

DCEC is a sorted (i.e., typed) quantified modal logic (also known as sorted first-order modal logic) that includes the event calculus, a first-order calculus used for commonsense reasoning. A sorted system is analogous to a typed programming language. We show below some of the important sorts used in DCEC. Among these, the Agent, Action, and ActionType sorts are not native to the event calculus.4

Briefly, actions are events that are carried out by an agent. For any action type α and agent a, the event corresponding to a carrying out α is given by action(a, α). For instance, if α is "running" and a is "Jack", action(a, α) denotes "Jack is running".
Sort         Description
Agent        Human and non-human actors
Time         The Time type stands for time in the domain; for example, simple, such as t_i, or complex, such as birthday(son(jack))
Event        Used for events in the domain
ActionType   Action types are abstract actions. They are instantiated at particular times by actors; for example, eating
Action       A subtype of Event for events that occur as actions by agents
Fluent       Used for representing states of the world in the event calculus
5.1 Syntax
The syntax has two components: a first-order core and a modal system that builds upon this first-order core. The figures below show the syntax and inference schemata of DCEC*. The syntax is quantified modal logic. The first-order core of DCEC is the

4 Technically, in the inaugural [2, 3], the straight event calculus is not used, but is enhanced, and embedded within common knowledge, the operator for which is C.
event calculus [17]. Commonly used function and relation symbols of the event calculus are included. Other calculi (e.g., the situation calculus) for modeling commonsense and physical reasoning can be easily switched out in place of the event calculus.

The modal operators present in the calculus include the standard operators for knowledge K, belief B, desire D, intention I, etc. The general format of an intensional operator is K(a, t, φ), which says that agent a knows at time t the proposition φ. Here φ can in turn be any arbitrary formula. Also, note the following modal operators: P for perceiving a state, C for common knowledge, S for agent-to-agent communication and public announcements, B for belief, D for desire, I for intention, and finally and crucially (esp. in the present paper), a dyadic deontic operator O that states when an action is obligatory or forbidden for agents. It should be noted that DCEC is one specimen in a family of easily extensible cognitive calculi.

As stated, the calculus includes a dyadic (arity = 2) deontic operator O. It is well known that the unary ought in standard deontic logic leads to contradictions (e.g., Chisholm's Paradox). Our dyadic version of the operator, in tandem with other highly expressive machinery in cognitive calculi, blocks the standard list of such contradictions, and beyond.5
Syntax

S ::= Agent | ActionType | Action ⊑ Event | Moment | Fluent

f ::= action : Agent × ActionType → Action
      initially : Fluent → Formula
      holds : Fluent × Moment → Formula
      happens : Event × Moment → Formula
      clipped : Moment × Fluent × Moment → Formula
      initiates : Event × Fluent × Moment → Formula
      terminates : Event × Fluent × Moment → Formula
      prior : Moment × Moment → Formula

t ::= x : S | c : S | f(t1, ..., tn)

φ ::= q : Formula | ¬φ | φ ∧ ψ | φ ∨ ψ | ∀x : φ(x) |
      P(a, t, φ) | K(a, t, φ) | C(t, φ) | S(a, b, t, φ) | S(a, t, φ) | B(a, t, φ) |
      D(a, t, φ) | I(a, t, φ) |
      O(a, t, φ, (¬)happens(action(a, α), t))
The above syntax lets us formalize statements of the form "John believes now that Mary desires now that it snow on Monday." One formalization could be:

Example 1

    B(john, now, D(mary, now, holds(snow, monday)))
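As a rough illustration of how such nested modal formulae can be represented in software (our own sketch with hypothetical class names, not the authors' implementation), Example 1 can be encoded as a small tree of typed nodes:

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Term:
    """A sorted term, e.g., john : Agent or holds(snow, monday) : Formula."""
    functor: str
    args: Tuple["Term", ...] = ()

@dataclass(frozen=True)
class Modal:
    """A modal formula Op(agent, time, body), e.g., B(john, now, ...)."""
    operator: str                      # one of "P", "K", "C", "S", "B", "D", "I", "O"
    agent: Term
    time: Term
    body: Union[Term, "Modal"]

john, mary, now = Term("john"), Term("mary"), Term("now")
snow_on_monday = Term("holds", (Term("snow"), Term("monday")))

# B(john, now, D(mary, now, holds(snow, monday)))
example_1 = Modal("B", john, now, Modal("D", mary, now, snow_on_monday))
print(example_1)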
5 An overview of this list is given lucidly in [16].
5.2 Inference Schemata
Inference schemata for DCEC are based on natural deduction [10], and include all the standard introduction and elimination rules for zero- and first-order logic, as well as inference schemata for the modal operators and related structures.

The figure below shows a fragment of the inference schemata for DCEC. I_K and I_B are inference schemata that let us model idealized agents that have their knowledge and belief closed under the DCEC proof theory.6 While normal humans are not deductively closed, this lets us model more closely how deliberate agents such as organizations and more strategic actors reason. (Some dialects of cognitive calculi restrict the number of iterations on intensional operators; see note 5.) I_1 and I_2 state, respectively, that it is common knowledge that perception leads to knowledge, and that it is common knowledge that knowledge leads to belief. I_3 lets us expand out common knowledge as unbounded iterated knowledge. I_4 states that knowledge of a proposition implies that the proposition holds. I_5 to I_10 provide for a more restricted form of reasoning for propositions that are common knowledge, unlike propositions that are known or believed. I_12 states that if an agent s communicates a proposition φ to h, then h believes that s believes φ. I_14 dictates how obligations get translated into intentions.
Inference Schemata

[I_K]   K(a, t1, Γ), Γ ⊢ φ, t1 ≤ t2   ⟹   K(a, t2, φ)
[I_B]   B(a, t1, Γ), Γ ⊢ φ, t1 ≤ t2   ⟹   B(a, t2, φ)
[I_1]   C(t, P(a, t, φ) → K(a, t, φ))
[I_2]   C(t, K(a, t, φ) → B(a, t, φ))
[I_3]   C(t, φ), t ≤ t1 ≤ ... ≤ tn   ⟹   K(a1, t1, ... K(an, tn, φ) ...)
[I_4]   K(a, t, φ)   ⟹   φ
[I_5]   C(t, K(a, t1, φ1 → φ2) → K(a, t2, φ1) → K(a, t3, φ2))
[I_6]   C(t, B(a, t1, φ1 → φ2) → B(a, t2, φ1) → B(a, t3, φ2))
[I_7]   C(t, C(t1, φ1 → φ2) → C(t2, φ1) → C(t3, φ2))
[I_8]   C(t, ∀x. φ → φ[x ↦ t])
[I_9]   C(t, φ1 ↔ φ2 → ¬φ2 → ¬φ1)
[I_10]  C(t, [φ1 ∧ ... ∧ φn → φ] → [φ1 → ... → φn → ψ])
[I_12]  S(s, h, t, φ)   ⟹   B(h, t, B(s, t, φ))
[I_13]  I(a, t, happens(action(a, α), t))   ⟹   P(a, t, happens(action(a, α), t))
[I_14]  B(a, t, φ), B(a, t, O(a, t, φ, χ)), O(a, t, φ, χ)   ⟹   K(a, t, I(a, t, χ))
6 Placing limits on the layers of any intensional operators is easily regimented. See [2, 3].
5.3 Semantics
The semantics for the first-order fragment is the standard first-order semantics. Hence, the truth-functional connectives ∧, ∨, →, ¬ and the quantifiers ∀, ∃, for pure first-order formulae, all have the standard first-order semantics.7 The semantics of the modal operators differs from what is available in the so-called Belief-Desire-Intention (BDI) logics [20] in many important ways. For example, DCEC explicitly rejects possible-worlds semantics and model-based reasoning, instead opting for a proof-theoretic semantics and the associated type of reasoning commonly referred to as natural deduction [9, 10]. Briefly, in this approach, meanings of modal operators are defined via arbitrary computations over proofs, as we will see for the counterfactual conditional below.
6 Introducing DCEC*
Modeling true self-sacrifice (as opposed to accidental self-sacrifice as discussed above) needs a robust representation system for true self-reference (called de se reference in the philosophical-logic literature). We now briefly explain how we can represent increasingly stronger levels of self-reference in DCEC*, with de se statements being the only true self-referential statements. See [5] for a more detailed presentation of the system we use here and an analysis of de dicto ("about the word"), de re ("about the object"), and de se ("about the self") statements. We have three levels of self-reference, discussed below in the box titled "Three Levels of Self-Representation." For representing and reasoning about true self-sacrifice, we need a Level 3 (de se) representation. Assume we have a robot or agent r with a knowledge base of formulae Γ.

Level 1 representation dictates that the agent r is aware of a name or description ν referring to some agent a. It is with the help of ν that the agent comes to believe a statement φ(a) about that particular agent (which happens to be itself, r = a). The agent need not necessarily be aware that r = a. Level 1 statements are not true self-referential beliefs. This is equivalent to a person reading and believing a statement about themself that uses a name or description that they do not know refers to themself. For example, the statement "the nth tallest person in the world is taller than the (n+1)th person" can be known by the nth tallest person without that person knowing that they are in fact the nth tallest person in the world, and that the statement is about this person.
7 More precisely, we allow such formulae to be interpreted in this way. Strictly speaking, even the "meaning" of a material conditional such as (φ ∧ ψ) → ψ, in our proof-theoretic orientation, is true because this conditional can be proved to hold in "background logic." Readers interested in how background logic appears on the scene immediately when mathematical (extensional deductive) logic is introduced are encouraged to consult [8].
Three Levels of Self-Representation

de dicto  Agent r with the name or description ν has come to believe, on the basis of prior information Γ, that the statement φ holds for the agent with the name or description ν:
    Γ ⊢_r B(I*(r), now, ∃a : Agent. named(a, ν) ∧ φ(a))

de re  Agent r with the name or description ν has come to believe, on the basis of prior information Γ, that the statement φ holds of the agent with the name or description ν:
    ∃a : Agent. named(a, ν) ∧ (Γ ⊢_r B(I*(r), now, φ(a)))

de se  Agent r believes, on the basis of Γ, that the statement φ holds of itself:
    Γ ⊢_r B(I*(r), now, φ(I*(r)))
Level 2 representation does not require that the agent be aware of the name. The agent knows that φ holds for some anonymous agent a; the representation does not dictate that the agent be aware of the name. Following the previous example, the statement "that person is taller than the (n+1)th person", where "that person" refers to the nth tallest person, can be known by the nth tallest person without knowing that they are in fact the nth tallest person in the world and that the statement is about them.

Level 3 representation is the strongest level of self-reference. The special function I* denotes a self-referential statement. We refer the reader to [5] for a more detailed analysis. Following the above two examples, this would correspond to the statement "I myself am taller than the (n+1)th person" believed by the nth tallest person (Fig. 1).
Reasoner (Theorem Prover)

Reasoning is performed through ShadowProver, a first-order modal logic theorem prover, first used in [11]. The prover builds upon a technique called shadowing to achieve speed without sacrificing consistency in the system.8
7 Informal DDE*
We now informally but rigorously present DDE*, an enhanced version of DDE that can handle self-sacrifice. Just as in standard models of DDE, assume we have at hand an ethical hierarchy of actions as in the deontological case (e.g., forbidden, neutral, obligatory); see [4]. Also given to us is an agent-specific utility function or goodness function for states of the world or effects as in the consequentialist case. The informal conditions are from [11]; the modifications are emphasized in bold
8 The prover is available in both Java and Common Lisp and can be obtained at: https://github.com/naveensundarg/prover. The underlying first-order prover is SNARK, available at: http://www.ai.sri.com/~stickel/snark.html.
Fig. 1 Three levels of self-reference, from shallower to deeper: de dicto ("The second tallest person is shorter than the tallest person"), de re ("That person on the right is shorter than the tallest person"), and de se ("I myself am shorter than the person on the right")
below. For an autonomous agent a, an action α in a situation σ at time t is said to be DDE*-compliant iff:

Informal Conditions for DDE*

C1  the action is not forbidden (where we assume an ethical hierarchy such as the one given by Bringsjord [4], and require that the action be neutral or above neutral in such a hierarchy);
C2  the net utility or goodness of the action is greater than some positive amount γ;
C3a the agent performing the action intends only the good effects;
C3b the agent does not intend any of the bad effects;
C4  the bad effects are not used as a means to obtain the good effects [unless a knows that the bad effects are confined to only a itself]; and
C5  if there are bad effects, the agent would rather the situation be different and the agent not have to perform the action; that is, the action is unavoidable.
8 Overview of Formal DDE
We now give a quick overview of the self-sacrifice-free version of DDE. Let Γ be a set of background axioms. Γ could include whatever the given agent under consideration knows and believes about the world. This could include, e.g., its understanding of the physical world, knowledge and beliefs about other agents and itself, its beliefs about its own obligations, its desires, etc. The particular situation or context that might be in play, e.g., "I am driving", is represented by a formula σ. The formalization uses ground fluents for effects.
8.1 The means Operator
Standard event calculus does not have any mechanism to denote when an effect is used as a means for another effect. Intuitively, we could say an effect e1 is a mere side effect for achieving another effect e2 if by removing the entities involved in e1 we can still achieve e2; otherwise we say e1 is a means for e2. A new modal operator ▷, means, is introduced in [11] to capture this notion.9 The signature for ▷ is given below:

    ▷ : Formula × Formula → Formula

The notation below states that, given Γ, a fluent f holding true at t1 causes, or is used as a means for, another fluent g at time t2:

    Γ ⊢ holds(f, t1) ▷ holds(g, t2)
8.2 The Formalization
Given the machinery defined above, we now proceed to the formalization, defined in terms of a predicate DDE(Γ, σ, a, t, H). Assume, for any action type α carried out by an agent a at time t, that it initiates the set of fluents α_I^{a,t} and terminates the set of fluents α_T^{a,t}. Then, for any action α taken by an autonomous agent a at time t with background information Γ in situation σ, the action adheres to the doctrine of double effect up to a given time horizon H, that is, DDE(Γ, σ, a, t, H), iff the conditions below hold:

9 The definition of ▷ is inspired by Pollock's [19] treatment, and while similarities can be found to the approach in [18], we note that this definition requires at least first-order logic.
Formal Conditions for DDE

F1  α carried out at t is not forbidden. That is:
        Γ ⊢ ¬O(a, t, σ, ¬happens(action(a, α), t))

F2  The net utility is greater than a given positive real γ:
        Γ ⊢ Σ_{y=t+1}^{H} ( Σ_{f ∈ α_I^{a,t}} μ(f, y) − Σ_{f ∈ α_T^{a,t}} μ(f, y) ) > γ

F3a The agent a intends at least one good effect. (F2 should still hold after removing all other good effects.) There is at least one fluent f_g in α_I^{a,t} with μ(f_g, y) > 0, or f_b in α_T^{a,t} with μ(f_b, y) < 0, and some y with t < y ≤ H, such that the following holds:
        Γ ⊢ ⋁_{f_g ∈ α_I^{a,t}} I(a, t, holds(f_g, y))  ∨  ⋁_{f_b ∈ α_T^{a,t}} I(a, t, ¬holds(f_b, y))

F3b The agent a does not intend any bad effect. For all fluents f_b in α_I^{a,t} with μ(f_b, y) < 0, or f_g in α_T^{a,t} with μ(f_g, y) > 0, and for all y such that t < y ≤ H, the following holds:
        Γ ⊬ I(a, t, holds(f_b, y))   and   Γ ⊬ I(a, t, ¬holds(f_g, y))

F4  The harmful effects don't cause the good effects. Four permutations, paralleling the definition of ▷ above, hold here. One such permutation is shown below. For any bad fluent f_b holding at t1, and any good fluent f_g holding at some t2, such that t < t1, t2 ≤ H, the following holds:
        Γ ⊢ ¬( holds(f_b, t1) ▷ holds(f_g, t2) )

F5  This clause requires subjunctive reasoning. The current formalization ignores this stronger clause. There has been some work in computational subjunctive reasoning that we hope to use in the future; see [19].
9 Formal DDE*
Central to the formalization of DDE is a utility function μ that maps fluents and time points to utility values:

    μ : Fluent × Time → ℝ
Good effects are fluents with positive utility; bad effects are fluents that have negative utility. Zero-utility fluents could be neutral fluents (which do not have a use at the moment). The above agent-neutral function suffices for classical DDE but is not enough for our purpose. We assume that there is another function κ (either learned or given to us) that gives us agent-specific utilities:

    κ : Agent × Fluent × Time → ℝ

We can then build the agent-neutral function μ from the agent-specific function κ as shown below:

    μ(f, t) = Σ_a κ(a, f, t)
For an action α carried out by an agent a at time t, let α_I^{a,t} be the set of fluents initiated by the action and let α_T^{a,t} be the set of fluents terminated by the action. If we are looking up till horizon H, then μ̂(α, a, t), the total utility of action α carried out by a at time t, is then:

    μ̂(α, a, t) = Σ_{y=t+1}^{H} ( Σ_{f ∈ α_I^{a,t}} μ(f, y) − Σ_{f ∈ α_T^{a,t}} μ(f, y) )

Similarly, we have ν(α, a, b, t), the total utility for agent b of action α carried out by agent a at time t:

    ν(α, a, b, t) = Σ_{y=t+1}^{H} ( Σ_{f ∈ α_I^{a,t}} κ(b, f, y) − Σ_{f ∈ α_T^{a,t}} κ(b, f, y) )
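Both totals are plain sums over the fluents an action initiates and terminates. The sketch below is our own illustration under simplifying assumptions (hypothetical fluent names and a toy κ table that is constant over time): it computes μ̂ and ν for one action over a short horizon.

from collections import defaultdict

# Toy agent-specific utilities kappa[(agent, fluent)], assumed constant over time for brevity.
kappa = defaultdict(float, {
    ("passenger", "passenger_alive"): 10.0,
    ("passenger", "passenger_dead"): -10.0,
    ("pedestrian", "pedestrian_alive"): 10.0,
})

AGENTS = ["passenger", "pedestrian"]

def mu(fluent, t):
    """Agent-neutral utility: sum of kappa over all agents (time-independent in this toy model)."""
    return sum(kappa[(a, fluent)] for a in AGENTS)

def mu_hat(initiated, terminated, t, horizon):
    """Total (agent-neutral) utility of an action over the horizon."""
    return sum(
        sum(mu(f, y) for f in initiated) - sum(mu(f, y) for f in terminated)
        for y in range(t + 1, horizon + 1)
    )

def nu(agent, initiated, terminated, t, horizon):
    """Total utility of the action for one particular agent."""
    return sum(
        sum(kappa[(agent, f)] for f in initiated) - sum(kappa[(agent, f)] for f in terminated)
        for y in range(t + 1, horizon + 1)
    )

# Hypothetical action: the passenger is sacrificed, the pedestrian stays alive.
initiated, terminated = {"pedestrian_alive", "passenger_dead"}, {"passenger_alive"}
print(mu_hat(initiated, terminated, t=0, horizon=1))            # -10.0
print(nu("passenger", initiated, terminated, t=0, horizon=1))   # -20.0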
Assume we have an autonomous agent or robot r with a knowledge base Γ. In [11], the predicate DDE(Γ, σ, a, t, H) is formalized, and it is read as "from a set of premises Γ, and in situation σ, we can say that action α by agent a at time t operating with horizon H is DDE-compliant." The formalization is broken up into four clauses corresponding to the informal clauses C1–C4 given above in Section 7:

    DDE(Γ, σ, a, t, H) ↔ F1(Γ, σ, a, t, H) ∧ F2(...) ∧ F3(...) ∧ F4(...)

With the formal machinery now at hand, enhancing DDE to DDE* is straightforward. Corresponding to the augmented informal definition in Section 7, we take the DDE predicate defined in [11] and form a disjunction.
Formal Conditions for DDE*

    DDE*(...) ↔ DDE(Γ, σ, a, t, H)
                ∨ [ F1 ∧ F3 ∧ F4 ∧ K(a, t, ∀b. (b ≠ a → ν(α, a, b, t) ≥ 0) ∧ ν(α, a, a, t) ≤ 0) ]
The disjunction simply states that the new principle DDE* applies when: (1) DDE applies; or (2) conditions F1, F3, and F4 apply, along with the condition that the agent performing the action knows that all of the bad effects are directed toward itself, and that the good effects are great in magnitude and apply only to other agents.
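Stated as a plain boolean check, and with our own simplifications (the clause results F1, F3, F4 and the agent's knowledge are assumed to have been established already by a reasoner such as ShadowProver, and nu_values maps each agent b to ν(α, a, b, t)), the disjunction can be sketched as:

def dde_star_compliant(dde_compliant, f1, f3, f4, actor, nu_values, actor_knows_harm_is_own):
    """DDE* holds if DDE holds, or if F1, F3 and F4 hold and the actor knows that all the
    bad effects fall on itself while every other agent is unharmed or better off."""
    self_sacrifice = (
        f1 and f3 and f4
        and actor_knows_harm_is_own
        and all(v >= 0 for b, v in nu_values.items() if b != actor)
        and nu_values[actor] <= 0
    )
    return dde_compliant or self_sacrifice

# Hypothetical bridge scenario: the leader absorbs all the harm, the others benefit.
nu_values = {"leader": -10.0, "soldier_1": 5.0, "soldier_2": 5.0}
print(dde_star_compliant(False, True, True, True, "leader", nu_values, actor_knows_harm_is_own=True))  # True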
10 Simulations
We now formalize the standard trolley scenario [11], adding the option of sacrificing oneself. In this scenario, there is a train hurtling toward n (n ≥ 2) persons on a track. Agent a, on a bridge above the fateful track, has the option of pushing a spectator b onto it, which would stop the train and hence prevent it from killing the n persons. Standard DDE prevents pushing either a or b, but empirical evidence suggests that while humans do not, morally speaking, condone pushing b, they find it agreeable that a sacrifices his/her own life. We take the formalization of the base scenario without options for self-sacrifice, represented by a set of formulae Γ_Trolley,bridge, and add an action that describes the action of self-sacrifice; this gives us Γ*_Trolley,bridge. We simulate DDE* using ShadowProver. The table below summarizes some computational statistics.10
Simulation time (s)

Scenario                |Γ|    DDE (push b)      DDE∗ (push a)
Γ_{Trolley,bridge}      38     [✗] 1.48 s         not applicable
Γ∗_{Trolley,bridge}     39     [✗] 3.37 s         [✓] 3.37 + 0.2 = 3.57 s
¹⁰ The code is available at https://goo.gl/JDWzi6. For further experimentation with and exploration of DDE∗, we are working on physical, 3D simulations, rather than only virtual simulations in pure software. Space constraints make it impossible to describe the "cognitive polysolid framework" in question (which can be used for simple trolley problems), the development of which is currently principally the task of Matt Peveler.
11 Conclusion
As our DDE∗ model builds upon a prior, robust computational model of DDE, the new model can be readily automated. While the new model can explain the results in [21], we have not yet explored or applied this model to more elaborate cases that, we concede, are encountered by humans outside the laboratory. Such exploration, if the promising results obtained thus far are to be sustained, will be challenging, as real-world cases are guaranteed to be demanding in a number of ways (e.g., the sheer amount of declarative content to be reasoned over quickly will increase). For future work, we will look at applying DDE∗ to a slew of such cases, and in addition we shall explore self-sacrifice in other, related ethical principles, such as the doctrine of triple effect [14].
Acknowledgements The research described above has been in no small part enabled by generous support from ONR (morally competent machines and the cognitive calculi upon which they are based) and AFOSR (unprecedentedly high computational intelligence achieved via automated reasoning), and we are deeply grateful for this funding.
References

1. Allsopp ME (2011) The doctrine of double effect in US law: exploring Neil Gorsuch's analyses. Natl Cathol Bioeth Q 11(1):31–40
2. Arkoudas K, Bringsjord S (2008) Toward formalizing common-sense psychology: an analysis of the false-belief task. In: Ho TB, Zhou ZH (eds) Proceedings of the tenth Pacific Rim international conference on artificial intelligence (PRICAI 2008), no. 5351 in Lecture Notes in Artificial Intelligence (LNAI), Springer-Verlag, pp 17–29. http://kryten.mm.rpi.edu/KA_SB_PRICAI08_AI_off.pdf
3. Arkoudas K, Bringsjord S (2009) Propositional attitudes and causation. Int J Softw Inform 3(1):47–65. http://kryten.mm.rpi.edu/PRICAI_w_sequentcalc_041709.pdf
4. Bringsjord S (2017) A 21st-century ethical hierarchy for robots and persons: EH. In: A world with robots: international conference on robot ethics: ICRE 2015, vol 84. Springer, Lisbon, Portugal, p 47
5. Bringsjord S, Govindarajulu NS (2013) Toward a modern geography of minds, machines, and math. In: Müller VC (ed) Philosophy and theory of artificial intelligence, studies in applied philosophy, epistemology and rational ethics, vol 5. Springer, New York, NY, pp 151–165. https://doi.org/10.1007/978-3-642-31674-6_11, http://www.springerlink.com/content/hg712w4l23523xw5
6. Bringsjord S, Govindarajulu NS, Thero D, Si M (2014) Akratic robots and the computational logic thereof. In: Proceedings of ETHICS 2014 (2014 IEEE symposium on ethics in engineering, science, and technology), Chicago, IL, pp 22–29. IEEE Catalog Number: CFP14ETI-POD
7. Cushman F, Young L, Hauser M (2006) The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychol Sci 17(12):1082–1089
8. Ebbinghaus HD, Flum J, Thomas W (1994) Mathematical logic, 2nd edn. Springer-Verlag, New York, NY
9. Francez N, Dyckhoff R (2010) Proof-theoretic semantics for a natural language fragment. Linguist Philos 33:447–477
10. Gentzen G (1935) Investigations into logical deduction. In: Szabo ME (ed) The collected papers of Gerhard Gentzen. North-Holland, Amsterdam, The Netherlands, pp 68–131. This is an English version of the well-known 1935 German version
11. Govindarajulu NS, Bringsjord S (2017) On automating the doctrine of double effect. In: Proceedings of the twenty-sixth international joint conference on artificial intelligence (IJCAI 2017), Melbourne, Australia. Preprint available at https://arxiv.org/abs/1703.08922
12. Hauser M, Cushman F, Young L, Kang-Xing Jin R, Mikhail J (2007) A dissociation between moral judgments and justifications. Mind Lang 22(1):1–21
13. Huxtable R (2004) Get out of jail free? The doctrine of double effect in English law. Palliat Med 18(1):62–68
14. Kamm FM (2007) Intricate ethics: rights, responsibilities, and permissible harm. Oxford University Press, New York, NY
15. Malle BF, Scheutz M, Arnold T, Voiklis J, Cusimano C (2015) Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, ACM, Portland, USA, pp 117–124
16. McNamara P (2014) Deontic logic. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, winter 2014 edn. Metaphysics Research Lab, Stanford University
17. Mueller E (2006) Commonsense reasoning: an event calculus based approach. Morgan Kaufmann, San Francisco, CA. This is the first edition of the book; the second edition was published in 2014
18. Pereira LM, Saptawijaya A (2016) Counterfactuals, logic programming and agent morality. In: Rahman S, Redmond J (eds) Logic, argumentation and reasoning. Springer, pp 85–99
19. Pollock J (1976) Subjunctive reasoning. D. Reidel, Dordrecht, Holland & Boston, USA
20. Rao AS, Georgeff MP (1991) Modeling rational agents within a BDI-architecture. In: Fikes R, Sandewall E (eds) Proceedings of knowledge representation and reasoning (KR&R-91), Morgan Kaufmann, San Mateo, CA, pp 473–484
21. Sachdeva S, Iliev R, Ekhtiari H, Dehghani M (2015) The role of self-sacrifice in moral dilemmas. PLoS ONE 10(6):e0127409
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Mind the Gap: A Theory Is Needed to Bridge the Gap Between the Human Skills and Self-driving Cars
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Kadar
Particle
Given Name E. E.
Prefix
Suffix
Role
Division Department of Psychology
Organization University of Portsmouth
Address Portsmouth, UK
Email kadar_e@yahoo.co.uk
Abstract In designing robots for safe and ethically acceptable interaction with humans, engineers need to understand
human behaviour control including social interaction skills. Automated systems with the option of mixed
control constitute an important subclass of these design problems. These designs imply basic interaction
skills because an automatic controller should be similar to human-like controller; otherwise, the human and
artificial agent (controller) could not understand/interpret each other in their interaction. A popular
research area for mixed control is to develop self-driving cars that are able to safely participate in normal
traffic. Vehicular control should be ethical, that is human-like to avoid confusing pedestrians, passengers
or other human drivers. The present paper provides insights into the difficulties of designing autonomous
and mixed vehicle control by analysing drivers’ performance in curve negotiation. To demonstrate the
discrepancy between human and automated control systems, biological and artificial design principles are
contrasted. The paper discusses the theoretical and ethical consequences of our limited understanding of
human performance by highlighting the gap between the design principles of biological and artificial/
robotic performance. Nevertheless, we can conclude with a positive note by emphasizing the benefits of
the robustness of human driving skills in developing mixed control systems.
Keywords
(separated by '-')
Safe and ethical design of control - Artificial control - Natural control - Mixed control - Interaction skills -
Self-driving cars - Visual control - Drivers’ gaze - Optic flow - Perceptual invariants
Mind the Gap: A Theory Is Needed
to Bridge the Gap Between the Human
Skills and Self-driving Cars
E. E. Kadar
Abstract In designing robots for safe and ethically acceptable interaction with humans, engineers need to understand human behaviour control, including social interaction skills. Automated systems with the option of mixed control constitute an important subclass of these design problems. These designs imply basic interaction skills because an automatic controller should be similar to a human-like controller; otherwise, the human and artificial agent (controller) could not understand/interpret each other in their interaction. A popular research area for mixed control is to develop self-driving cars that are able to safely participate in normal traffic. Vehicular control should be ethical, that is, human-like, to avoid confusing pedestrians, passengers or other human drivers. The present paper provides insights into the difficulties of designing autonomous and mixed vehicle control by analysing drivers' performance in curve negotiation. To demonstrate the discrepancy between human and automated control systems, biological and artificial design principles are contrasted. The paper discusses the theoretical and ethical consequences of our limited understanding of human performance by highlighting the gap between the design principles of biological and artificial/robotic performance. Nevertheless, we can conclude on a positive note by emphasizing the benefits of the robustness of human driving skills in developing mixed control systems.
Keywords Safe and ethical design of control · Artificial control · Natural control · Mixed control · Interaction skills · Self-driving cars · Visual control · Drivers' gaze · Optic flow · Perceptual invariants
1 Introduction

In designing robots whose interaction with humans is safe and ethically acceptable, engineers need to understand human perception and action in behaviour control, including those skills that are needed in interaction.
E. E. Kadar (B)
Department of Psychology, University of Portsmouth, Portsmouth, UK
e-mail: kadar_e@yahoo.co.uk
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0_6
In other words, fluent human–robot interaction requires a profound understanding of human social and non-social interaction skills. Automated systems with the option of mixed control provide an important subclass of these design problems. While complex social interactions include withdrawal and suspension of action, non-action is often not an option (e.g. in using a self-driving car the driver should supervise the control all the time). These control designs also imply basic interaction skills because the performance of an automatic controller should be similar to human-like behaviour; otherwise, the human controller would have difficulties in detecting the need to take over the control when a possible error/malfunction of the automatic control mechanism occurs. Similarly, the automatic controller (artificial agent) should be able to monitor human performance in order to warn the human agent and/or take over the control when obvious human errors are detected.
A popular research area for designing a mixed control system is to develop self-driving cars that are able to safely participate in normal traffic. To achieve safe and ethically acceptable performance, the vehicular control should be human-like to avoid confusing the driver, passengers, pedestrians and other human drivers and causing unwanted stress. The present paper provides insights into the difficulties of designing autonomous and mixed vehicle control by analysing drivers' performance in curve negotiation. To demonstrate the discrepancy between humans and artificial control systems, biological and artificial design principles are contrasted [2, 5, 26]. The source of this concern can be linked to Husserl's [11] early warning about the inability of science to properly discuss problems of the life world. The present paper investigates this problem more closely in a specific task: the visual control of driving in a bend. First, the differences between the rational agent model (including its representations and associated (kinetic and kinematic) variables) adopted in robotics and the use of invariants (parameters) in human perceptual control processes are contrasted. Second, limitations in our understanding of the visual control of driving will be scrutinized in a curve negotiation task. Third, the challenge of understanding the reasons behind the strange, seemingly distorted visual world of humans is discussed. These include various aspects of age, gender differences and individual differences in drivers' performance. In sum, the paper warns about the theoretical and ethical consequences of our limited understanding of human performance in car driving by discussing the existing gap between the principles of biological and artificial-engineering solutions in a specific driving task. Despite all the difficulties stemming from these limitations, the paper concludes on a positive note on the benefits of the robustness of human driving skills in developing mixed control systems.
2 The Rational Agent Model and Its Limitations

Modern science developed on the basis of mathematics and classical mechanics. For centuries, physics, that is, classical mechanics, was the leading discipline in trying to understand Nature, but soon other disciplines followed physics by borrowing its principles and methods.
Accordingly, early robots were designed as clockwork-driven smart mechanisms, but they remained mindless systems until language processing became available with the help of mathematics and computers. In other words, with the advent of computers both mind and body became part of a mechanism, and both are physical and computational. In the 1960s and 1970s, the cognitive revolution in psychology was the product of this development and created the illusion of the human mind as a computer. Accordingly, the body of the agent and its surroundings are represented symbolically in the modular structure of the agent's "mind". Information about the body of the agent and its environment is processed based on sensory data, and movement plans are designed and executed based on the output of motor control (executive) modules.
Artificial intelligence and robotic research adopted this cognitive architecture, as evidenced by basic textbooks [26]. Some researchers, however, noticed that this cognitive-architecture-based model of a rational agent is flawed. Brooks [2], for instance, noted that there is no need to create representations because the environment can itself be used for computations. Others argued that Gibson's [7, 8] radical theory of perception should be used to eliminate representations. This approach had a few additional advantages, including the close link between perception and action that could be adopted in robotic research [5]. Gibson's theory [8] emphasizes the importance of the use of perceptual invariants (optic flow, horizon ratio, passability parameter, etc.) instead of relying on physical (kinetic and kinematic) variables in behaviour control [23]. Some of these invariants have also been used in robotic research [5].
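As a toy illustration of the difference between such a dimensionless perceptual invariant and raw physical variables, the sketch below expresses passability as the ratio of aperture width to body (shoulder) width; the threshold value, the function names and the numbers are assumptions for illustration, not figures taken from the cited work.

def passability_ratio(aperture_width_m: float, shoulder_width_m: float) -> float:
    """Dimensionless π-number: aperture width scaled by the actor's shoulder width.

    The metre units cancel, so the same number describes a doorway for a
    person or a gap for a robot of any size."""
    return aperture_width_m / shoulder_width_m

# Assumed illustrative threshold: apertures are treated as passable without
# shoulder rotation only above some critical ratio of the actor's body size.
CRITICAL_RATIO = 1.3

def affords_passage(aperture_width_m: float, shoulder_width_m: float) -> bool:
    return passability_ratio(aperture_width_m, shoulder_width_m) >= CRITICAL_RATIO

print(affords_passage(0.8, 0.45))   # True: ratio ≈ 1.78
print(affords_passage(0.5, 0.45))   # False: ratio ≈ 1.11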
3 The Problem of Curve Negotiation in Driving

Car driving is an ever-increasing part of our routine activities in modern societies. Over the past few decades, we have witnessed a dramatic increase in various aspects of safety in car driving (e.g. road signs, speed limits, safety belts, airbags, ABS, etc.). More recently, various modern technological innovations have contributed to safer driving (e.g. active headlights). The optimism in this field of research and technology has led to the ambition of developing driverless cars. Major car manufacturers are trying to produce driverless cars or at least to introduce some automatic driving mechanisms to assist drivers.
Despite these promising developments, there are reasons for concern. For instance, it is still not clear what visual control strategies humans use when driving in complex traffic situations. Even the seemingly simple task of driving in a bend with proper steering at a safe speed is not fully understood. In particular, visual control in curve negotiation was and remains a challenging problem for researchers, as our brief review of the existing models will demonstrate.
3.1 Cue-Based Approaches for Steering

During the 1970s, the dominant cognitive (rational-agent-based) paradigm was associated with traditional representational–computational models. Accordingly, drivers are assumed to prepare and execute a steering programme based on an estimate of the road curvature (see Fig. 1). Various possible cues were tested in the estimation of the road curvature, and the observed regularities in drivers' gaze were taken as indicative of the use of these cues [3, 24, 25]. However, none of these experiments provided convincing evidence of the actual use of these postulated visual cues.
3.2 Optic-Flow-Based Steering Control

The failure of cue-based steering models derived from static images led to the testing of a radically different alternative approach: the active, dynamic theory of perception with optic flow [7, 8]. Optic flow has been shown to be a robust information source for the control of linear locomotor tasks such as landing an aircraft and the visual control of a car (driving and braking) on a straight road [8, 17, 19]. However, optic flow approaches remained difficult to associate with gaze control for steering and speed in the bend [15, 29] (see Fig. 2).
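To make the optic flow idea concrete, the following sketch estimates the focus of expansion (the image point the flow radiates from during rectilinear travel, which specifies heading) from a few sampled flow vectors by least squares; the synthetic flow field and the NumPy-based solution are illustrative assumptions, not a model taken from the studies cited above.

import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares estimate of the focus of expansion (FoE).

    Under pure forward translation each flow vector (u, v) at image point
    (x, y) is radial, i.e. parallel to (x - x0, y - y0), which gives the
    linear constraint  v*(x - x0) - u*(y - y0) = 0  for the FoE (x0, y0)."""
    pts = np.asarray(points, dtype=float)
    flo = np.asarray(flows, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    u, v = flo[:, 0], flo[:, 1]
    A = np.column_stack([v, -u])          # coefficients of (x0, y0)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe                            # (x0, y0) in image coordinates

# Synthetic radial flow expanding from (10, 5): flow at p is 0.3 * (p - FoE).
true_foe = np.array([10.0, 5.0])
pts = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 12.0], [25.0, 15.0]])
flows = 0.3 * (pts - true_foe)
print(focus_of_expansion(pts, flows))     # ≈ [10.  5.]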
3.3 Alternative Approaches for Steering Control

To rectify the shortcomings of the cue-based and optic-flow-based approaches, alternative models were developed for steering control (see Fig. 3).
Fig. 1 Two examples of visual cues to road curvature: (a) the β "visual angle" corresponds to the road curvature. As the car travelled around the curve, a tangent was drawn along the far side of the road where it crossed the straight-ahead position. The angle that this tangent made with the edge of the straight road was taken as a measure of the tilt of the road; (b) the apex of the inside contour could be viewed as creating an angle (α), which can also be an informative cue about the road curvature
Fig. 2 Illustration of optic flow in driving: (a) during rectilinear motion towards a target indicated by a red post on the horizon; and (b) during a turn along the blue trajectory
Fig. 3 Three alternative steering models emerged in addition to the cue-based and optic-flow-based theories: (a) tangent point tracking provides an egocentric target angle that is the same as the angle of curvature (θ); (b) tracking the anticipated trajectory may have a similar link to tangent point direction and curvature angle; (c) the so-called two-point model of steering can be developed from the trajectory tracking model, but it is too complex, and its use in human performance is highly implausible
Land and Lee [16], for instance, proposed a steering model that was associated with drivers' most typical gaze behaviour in the bend, namely tangent point tracking. Accordingly, egocentric gaze direction showed a correlation with the steering angle. The main advantage of this model is the direct link between gaze angle and steering angle in the bend, but the model did not take into account the possible meaning of various other gaze control patterns, such as looking at the outside contour and looking at the middle of the lane. Other models similarly focused on the control of steering (e.g. the two-point model of steering by Salvucci and Gray [22]). One major problem with these classes of steering models is that they ignore that speed control is an equally important aspect of driving. It is also known that humans are not good at perceiving constant speed [21]. Thus, the control of speed cannot simply be added to these models. These concerns led to a renewed interest in optic flow theory, which could be used to explain the control of both direction and speed.
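To make the steering-only character of these models concrete, here is a minimal sketch of a two-point-style steering law in the spirit of [22], combining the visual angles to a near and a far point; the gains, the exact combination and the point selection are assumptions for illustration, not the published model.

def two_point_steering_rate(theta_near, theta_far,
                            dtheta_near, dtheta_far,
                            k_near=0.5, k_far=1.5, k_i=0.3):
    """Steering-rate command (rad/s) from the visual angles (rad) to a near
    point (lane centre a short distance ahead) and a far point (e.g. the
    tangent point), together with their rates of change (rad/s).
    Gains are illustrative only."""
    return k_far * dtheta_far + k_near * dtheta_near + k_i * theta_near

Nothing in such a law regulates speed, which is precisely the limitation noted above.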
3.4 Dynamic Models for Steering and Braking

Research on detecting information in optic flow has, in general, been more promising than research on detecting cues from the retinal image or other cues such as egocentric target direction. Several studies have shown, for instance, that the perception of heading can be based on optic flow properties and that this could be used to control steering [14, 29, 30]. Other studies have shown that the rate of optic expansion can be used to control driving speed [13, 17]. Neurophysiological studies with various animals have also shown that specific areas of the nervous system (e.g. MSTd and STPa) are sensitive to various types of optic flow patterns [1, 6, 9, 27]. These studies mostly relied on rectilinear motion, but the problem of how optic flow is used for curve negotiation remained unresolved. A few studies have demonstrated that the visual control of locomotion in rectilinear motion by steering and braking is associated with gaze direction towards the heading direction [19, 20]. However, because gaze direction is associated with the control of both braking and steering, and there is a delay between gaze direction and change in control, it is hard to use gaze studies in support of a specific model.

Despite the difficulties with optic flow, analysis of the dynamics of gaze control seems to suggest that optic flow is the most likely candidate humans use, because it is a robust information source for visual control strategies. Rogers [18] has shown that drivers actually use gaze for both steering and braking, but discerning speed control from steering control remains a challenge (see Figs. 4 and 5). Speed control relies on peripheral vision, while direction control is based on the centre of optic flow and relies primarily on foveal vision. Thus, a specific gaze direction could be associated with both direction control and speed control. But this is only one of the several difficulties researchers have to face.
Fig. 4 Greyscale is used to indicate the gaze distributions of 6 drivers in curve negotiation at a speed of 20 kph, with the scales of visual angles on both axes [18]. The darkest area covers the most frequent gaze directions (see also Fig. 5 for dynamics). (a) Gaze distribution while driving in the inside lane; (b) gaze distribution while driving in the outside lane
Fig. 5 Dynamic gaze patterns of a driver in curve negotiation (scales: the horizontal dimension is in units of 1/25 s, and the vertical dimensions are in degrees). (a) Gaze patterns while driving in the inside lane (the horizontal position changes between 0° and 5°, i.e. between the apex and the middle of the road); (b) gaze patterns while driving in the outside lane (horizontal gaze changes more dramatically than when driving in the inside lane), suggesting that direction control (about 10°) and the current direction for speed control or for staying in the lane (about 0°) are in conflict, making this task challenging
4 Challenges in Developing Optic-Flow-Based Self-driving Cars

Based on this brief overview of the literature, the present paper argues in favour of using optic flow in self-driving cars. Optic flow use is not dependent on physical (kinetic and kinematic) variables, and there is evidence from neuroscience of sensitivity to optic flow (the rate of optic expansion, i.e. the τ-strategy [17], and the centre of optic flow). But the problem of two universes (one for human visual control based on invariants and another for robot control based on physical variables) remains a major concern. Engineers are keen on using kinetic and kinematic variables for control designs, despite the fact that humans do not use these dimensional variables. Arguably, optic flow could provide a common ground for a mixed control system in developing self-driving cars. Although humans cannot rely on physical variables, a robotic implementation can use physical variables as well as those invariants that are used by humans. Thus, a human-like approach to self-driving and mixed control could be implemented in autopilot systems.
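As an illustration of how such an invariant could be shared by the human driver and the autopilot, the sketch below computes time-to-contact τ from optical expansion alone (τ = θ/θ̇, in the spirit of the τ-strategy [17]) and uses it to decide when to start slowing for a bend; the pinhole geometry, the sampling step and the braking threshold are assumptions for illustration, not a validated controller.

import math

def visual_angle(size_m: float, distance_m: float) -> float:
    """Visual angle (rad) subtended by an object of the given physical size."""
    return 2.0 * math.atan(size_m / (2.0 * distance_m))

def time_to_contact(theta: float, theta_prev: float, dt: float) -> float:
    """Tau: optical angle divided by its rate of expansion. Specified purely
    by the optics; no distance or speed (dimensional variables) is needed."""
    theta_dot = (theta - theta_prev) / dt
    return float("inf") if theta_dot <= 0 else theta / theta_dot

# Toy check: approaching a 2 m wide road sign at 20 m/s, sampled at 25 Hz.
dt, speed, size = 0.04, 20.0, 2.0
d_prev, d_now = 62.0, 62.0 - speed * dt
tau = time_to_contact(visual_angle(size, d_now), visual_angle(size, d_prev), dt)
print(round(tau, 2))        # ≈ 3.1 s (true time-to-contact: 61.2 m / 20 m/s ≈ 3.06 s)

TAU_BRAKE = 4.0             # assumed margin: start slowing when tau drops below 4 s
print(tau < TAU_BRAKE)      # True -> begin decelerating for the bend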
At this stage, however, we do not have sufficient knowledge of drivers' performance in curve negotiation, and we have to face several challenges:

(a) Research has typically overlooked the entry phase, which is very important to ensure that the vehicle enters the bend at a sufficiently low speed, so that the driver can rely mostly on steering in the bend; excessive braking in the bend could easily lead to skidding and loss of control. Also, excessive speed combined with steering could result in the car turning over.
(b) In the bend, gaze data do not provide clear-cut evidence on which aspects of optic flow are used (see Figs. 4 and 5). This is partly due to the above-mentioned dual processing (i.e. human vision can use peripheral and foveal information simultaneously).
(c) Optic flow use is highly context-sensitive. For instance, asymmetric optic flow in the left and right visual fields, due to an asymmetrically cluttered environment, results in asymmetries that can influence both direction and speed control [4].
(d) Dynamic analysis of gaze patterns seems to suggest that the two invariants of optic flow (i.e. the tau-strategy for speed control and the centre of optic expansion for direction control) are sufficient, but they are not independent. The two types of control seem to interact, and their interaction is not well understood. More research is needed to better understand the two types of information associated with the control of direction and speed.
(e) Human visual space seems to be distorted relative to real physical space. These distortions are likely to be related to the dynamics of locomotion and the context of the environment, but there are also individual differences in drivers' perception and performance. These distortions are difficult to visualize, but some artists have attempted to depict human visual space (see Fig. 6 and compare Cezanne's depiction of a bend with the photographic image of the same scene [12]).
(f) Level of expertise [28], gender [10] and various types of individual differences [18] are also important in driving, including the speed at which drivers feel comfortable in curve negotiation.
In sum, there are two universes robotic research has to deal with in developing self-driving cars (and, in general, automatic control systems for mixed control in human interaction). Artificial control mechanisms focus on strategies based on physical variables (dimensional quantities involving distance, mass, etc., which are difficult for humans to perceive), whereas human perceptual control techniques mostly rely on perceptual invariants (dimensionless parameters, the so-called π-numbers), which are marginalized in theories of artificial control mechanisms.
Fig. 6 Outline drawings (for copyright reasons) of Cezanne’s Road to Gardanne (a) and the pho-
tograph (b) of the motif. Please note the enhanced view and contrast with the photographic image,
which tends to shrink (make everything seem more distal) the elements of a scene
Despite the complexity of car driving tasks (e.g. curve negotiation), the difference between these two control strategies can be reconciled, and it is possible to make artificial car-control systems more familiar to human drivers if an optic-flow-based strategy is used for the two basic tasks (control of direction and speed). However, we need a much better understanding of the performance of human drivers to be able to adapt their natural invariant-based strategies (optic-flow-based steering and speed control) for self-driving cars. Currently, a large part of this research is conducted in the laboratories of car manufacturing companies, and their competition prevents open discussion of the strategies/models the various companies are trying to implement. This is an additional concern that cannot be ignored regarding safety and ethical issues in designing self-driving cars.
5 Conclusions

The present study investigated the differences between scientific theories of the life world and the methods and design principles of artificial systems (e.g. self-driving cars). Although nearly a century ago Husserl [11] already warned scientists about the limitations of modern science in dealing with the problems of the life world, his insights are still not fully appreciated. Scientific models of human behaviour are still based on unjustified assumptions, and robotic engineers are misguided when they try to adopt these models in developing human-like behaviour control strategies for autonomous artificial agents.
Some of the false assumptions of theories of human behaviour and their applications in robotics were discussed. Specifically, the computational–representational approach with the metaphor of "mind as a computer" was critically assessed. In robotics, the representation of the environment and the modular architecture (sensors, actuators, operating system, memory, etc.) are adapted from cognitive theories of human behaviour. Various aspects of this approach have already been criticized (e.g. no need for representation [2], no need for separating vision and movement control [7]; an alternative theory, the so-called ecological approach, was proposed in robotics [5]). Nevertheless, there are still important aspects that are typically ignored even in these attempts. Two of these overlooked, but closely related, issues were presented in this paper. First, humans typically do not use those physical variables (e.g. distance, time, speed, momentum, force, etc.) that constitute the fundamental measures for scientists in various models of the control of human behaviour. Gibsonian, so-called ecological, approaches suggest that a variety of perceptual invariants are used in human behaviour control [23]. Research into those invariants has mostly been limited to one modality and one task (affordance, steering or braking) only. However, in everyday settings, these invariants interact in a complex fashion that research has not even considered so far. Second, numerous scientific studies have indicated that our life world is radically different from the physical space–time world. The difference is evidenced by the fact that its perception is distorted in a complex and strange way, including various aspects of individual differences.
The complex interaction of these invariants could be the key to explaining the strange, distorted space–time universe of the human life world. These distortions are hard to demonstrate, but some artists, including Cezanne, can provide hints as to why robot vision based on video images is a non-starter for understanding human visual space.

The simple example of visual control during curve negotiation was complex enough to demonstrate these differences between the life world and the physical world. These discrepancies provide opportunities for errors with serious safety and ethical concerns in everyday interaction between humans and autonomous control systems. In developing self-driving cars, the safety concerns and ethical consequences of these differences are expected to be more dramatic because of the complexity of the everyday traffic situations drivers have to deal with. Nevertheless, most human drivers use redundant and robust strategies with a large enough safety margin. For instance, Shinar [24] noted that drivers' speed in a bend is about 20–30% below the maximum speed at which their car could negotiate the bend. This margin of safety would allow drivers to get used to a potentially less robust automatic control mechanism that car manufacturing companies might implement in designing self-driving cars.
References

1. Anderson K, Siegal R (1999) Optic flow selectivity in the anterior superior temporal polysensory area, STPa, of the behaving monkey. J Neurosci 19:2681–2692
2. Brooks R (1991) Intelligence without representation. Artif Intell 47(1):139–159
3. Donges E (1978) A two-level model of steering behaviour. Hum Factors 20(6):691–707
4. Duchon A, Warren W (2002) A visual equalization strategy for locomotor control: of honeybees, robots, and humans. Psychol Sci, pp 272–278
5. Duchon A, Warren W, Kaelbling L (1998) Ecological robotics. Adapt Behav 6:473–507
6. Duffy C, Wurtz R (1997) Response of monkey MST neurons to optic flow stimuli with shifted centres of motion. J Neurosci 15:5192–5208
7. Gibson J (1966) The senses considered as perceptual systems. Houghton Mifflin, Boston
8. Gibson J (1986) The ecological approach to visual perception. Lawrence Erlbaum Associates, New Jersey. Original work published 1979
9. Graziano M, Anderson R, Snowden R (1994) Tuning of MST neurons to spiral motions. J Neurosci, pp 54–67
10. Hodges B (2007) Values define fields: the intentional dynamics of driving, carrying, leading, negotiating, and conversing. Ecol Psychol 19:153–178
11. Husserl E (1970) The crisis of European sciences and transcendental phenomenology (D. Carr, Trans.). Northwestern University Press, Evanston. Original work published 1936
12. Kadar E, Effken J (2008) Paintings as architectural space: "Guided Tours" by Cezanne and Hokusai. Ecol Psychol 20:299–327
13. Kaiser M, Mowafy L (1993) Optical specification of time-to-passage: observers' sensitivity to global tau. J Exp Psychol: Hum Percept Perform 19(5):1028–1040
14. Kim N, Turvey M (1999) Eye movement and a rule for perceiving direction of heading. Ecol Psychol 11(3):233–248
15. Kim N, Fajen B, Turvey M (2000) Perceiving circular heading in noncanonical flow fields. J Exp Psychol: Hum Percept Perform 26(5):31–56
16. Land M, Lee D (1994) Where we look when we steer. Nature 369:742–744
17. Lee D (1976) A theory of visual control of braking based on information about time-to-collision. Perception 5:437–459
18. Rogers S (2003) Looking where you intend to go: gaze patterns in basic driving tasks. Unpublished PhD thesis, Department of Psychology
19. Rogers S, Kadar E, Costall A (2005a) Drivers' gaze patterns in braking from three different approaches to a crash barrier. Ecol Psychol 17:39–53
20. Rogers S, Kadar E, Costall A (2005b) Gaze patterns in visual control of straight-road driving and braking as a function of speed and expertise. Ecol Psychol 17:19–38
21. Runeson S (1974) Constant velocity: not perceived as such. Psychol Res 37(1):3–23
22. Salvucci D, Gray R (2004) A two-point visual control model of steering. Perception 33:1233–1248
23. Shaw R, Flascher O, Kadar E (1995) Dimensionless invariants for intentional systems: measuring the fit of vehicular activities to environmental layout. In: Flach J, Hancock P, Caird J, Vicente K (eds) Global perspectives on the ecology of human-machine systems, vol 1. Lawrence Erlbaum Associates, Inc., Hillsdale, NJ, pp 293–357
24. Shinar D (1978) Psychology on the road. Wiley, New York
25. Shinar D, McDowell E, Rockwell T (1977) Eye movements in curve negotiation. Hum Factors 19(1):63–71
26. Siciliano B, Khatib O (2016) Springer handbook of robotics. Springer
27. Siegal R, Read H (1997) Analysis of optic flow in the monkey parietal area 7a. Cereb Cortex 7:327–346
28. Spackman K, Tan S (1993) When the turning gets tough... New Scientist, pp 28–31
29. Wann J, Land M (2000) Steering with or without the flow: is the retrieval of heading necessary? Trends Cogn Sci 4:319–324
30. Wann J, Swapp D (2000) Why you should look where you are going. Nat Neurosci 3(7):647–648
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Who Should You Sue When No-One Is Behind the Wheel? Difficulties in Establishing New Norms for
Autonomous Vehicles in the European Union
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Musielewicz
Particle
Given Name Michael P.
Prefix
Suffix
Role
Division
Organization John Paul II Catholic University of Lublin
Address Lublin, Poland
Email michael.musielewicz@kul.pl
Abstract Recent technological advances in autonomous vehicles have brought their introduction to commercial
markets into the near future. However, before they hit the sales lots, various governments and inter-
governmental governing structures have taken interest in laying down a regulatory framework prior to their
introduction into the markets. One regulatory institution looking at this issue is the European Union. In a
2016 report, by the Policy Department of the European Parliament, it was noted that there is a lack of
harmonization in liability rules within the European Union. This problem was also addressed in a press
release in 2017. The goal of this essay is to provide a sketch of the problems related to liability and its legal
framework as found within the European Union and to examine one solution (among others) currently
under examination by officials in the EU, that is the possibility of legal personhood for autonomous
vehicles. I will first concur the current regulatory field is lacking, and then contrast the advantages and
disadvantages of such a scheme. To do this, I will first provide a brief overview of the liability regimes in
the European Union. Secondly, I will explore the sort of legal personhood and offer a critique of a current
EU document concerning this issue. Finally, I will pose some difficulties that sort of legal personhood has
when placed into the regulatory schemes.
Keywords
(separated by '-')
Liability in the European Union - Legal personhood - Autonomous vehicles
Who Should You Sue When No-One Is
Behind the Wheel? Difficulties in
Establishing New Norms for Autonomous
Vehicles in the European Union
Michael P. Musielewicz
Abstract Recent technological advances in autonomous vehicles have brought their introduction to commercial markets into the near future. However, before they hit the sales lots, various governments and inter-governmental governing structures have taken an interest in laying down a regulatory framework prior to their introduction into the markets. One regulatory institution looking at this issue is the European Union. In a 2016 report by the Policy Department of the European Parliament, it was noted that there is a lack of harmonization in liability rules within the European Union. This problem was also addressed in a press release in 2017. The goal of this essay is to provide a sketch of the problems related to liability and its legal framework as found within the European Union and to examine one solution (among others) currently under examination by officials in the EU, that is, the possibility of legal personhood for autonomous vehicles. I will first concur that the current regulatory field is lacking, and then contrast the advantages and disadvantages of such a scheme. To do this, I will first provide a brief overview of the liability regimes in the European Union. Secondly, I will explore the notion of legal personhood and offer a critique of a current EU document concerning this issue. Finally, I will pose some difficulties that this sort of legal personhood has when placed into the regulatory schemes.

Keywords Liability in the European Union · Legal personhood · Autonomous vehicles
1 Introduction: An Emerging Issue Needing to Be Addressed

While robots have been performing menial tasks for quite some time in sectors like manufacturing, there has been fairly limited exposure to robots for the vast majority of people.
M. P. Musielewicz (B)
John Paul II Catholic University of Lublin, Lublin, Poland
e-mail: michael.musielewicz@kul.pl
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0_7
However, with developments in robotic caregivers and robotic drivers, i.e., autonomous cars, we are rapidly approaching a time when they will become a broader part of our daily life. As we approach this juncture, regulatory institutions like the European Union (EU) have taken it upon themselves to establish a legal framework for new interactions with these robots. Despite this attention, there is still a need for the harmonization of norms within this hierarchical institution. The goal of this essay is to provide a sketch of the current problems related to liability and its legal framework as found within the EU and to shed some light on one theoretical solution currently under examination by legislative officials. This solution is the possibility of ascribing legal personhood to autonomous vehicles. To accomplish this, I will first provide a brief overview of the types of liability regimes in the European Union and their applicability to autonomous vehicles. Secondly, I will explore the notion of legal personhood and offer a critique of a current EU report concerning this issue. Finally, I will pose some difficulties that this sort of legal personhood has when placed into the regulatory schemes.
2 The Current Regulatory Field in the European Union

There is a great difficulty in trying to capture the current regulation of liability for autonomous vehicles in the European Union. To begin, the EU, as such, is a rather difficult entity to describe, especially in terms of its normative systems. This is because the EU is something between a supra-national organization¹ and a state with quasi-sovereignty² and exists inside a nexus of international treaties. In this system, the Union has supremacy in creating norms within the area of its competences (as granted by the founding treaties), and the various member states retain their competences in the other areas. As a result, a plethora of things are regulated at any given time by different levels of the system, and their regulations vary as they move between different national jurisdictions. One of the things covered by this diverse legislation is liability for damages caused to individuals. This results from the very nature of the European Union, which leads to a lack of a unified system for establishing liability for damages caused to people in general, much less so for robots, and even more so for other autonomous systems.
Cees van Dam, in his book European Tort Law [1, p. 9], notes that this difficulty is present from the very beginning, as there is no common agreement on what is covered by tort law within the Union or even on what term to use. This difference stems from a fundamental difference between the nations in the Union that use the Common Law system³ and those nations that use a Civil Law system.⁴
¹ Or perhaps an international organization, though it seems to have more legislative power than typical international organizations.
² Or some describe it as having shared or pooled sovereignty.
³ England, Ireland, Malta, Cyprus.
⁴ France, Germany, Spain, etc.
He admits that a more accurate term would be "extra-contractual liability law excluding agency without authority and unjust enrichment", but decides to use the term "tort" as it captures the essential meaning, and I have opted to use his terminology [1, p. 5]. Additionally, van Dam describes that there are currently five distinct ways of addressing torts within the European Union. While there is some overlap, these systems are quite unique and consist of two supra-national systems and three types of national systems⁵ and are as follows:

1. Supra-national:
   a. European Union law
   b. European Convention on Human Rights⁶
2. National:
   a. French
   b. English
   c. German

These systems are further complicated by the lack of agreement on exactly what is covered by this sort of legislation and how to determine liability [1, pp. 9–10]. Given the complexity of the system and the brevity of this essay, we will only be able to highlight the central features of these systems and their impact on liability for autonomous vehicles. Furthermore, I will exclude the European Convention on Human Rights, for it seems to primarily relate to the liability of states and is incorporated into the European Union.
In its report on civil law in robotics, the European Parliament's Committee on Legal Affairs (the JURI Committee) gives a good survey of the current tort legislation pertinent to robots at the EU level. In the report, liability for damages caused by robots is broken into applicable categories. The first category is damages that are caused by defects within the robot itself or by failures of the producer to properly inform users about the correct use of the robot. In this situation, Council Directive 85/374/EEC of 25 July 1985 could be applied. Secondly, the report also mentions that it is important to clearly establish which rules apply to mobile autonomous robots, viz. autonomous vehicles, and in particular which rules within Directive 2007/46/EC of the European Parliament and of the Council of 5 September 2007 [2, p. 16]. In instances where fault does not lie with the producer but rather with the user, a regime of absolute liability⁷ of the user of an autonomous vehicle could fall under an expansion of Directive 2009/103/EC of 16 September 2009 [1, p. 418].
⁵ Here the national types represent three "families" of legal systems.
⁶ While incorporated into European Union law with the Treaty of Lisbon, it is important to note that it is a legal document of the Council of Europe, which is broader than the European Union.
⁷ That is to say, negligence is not a factor in the establishment of a tort.
In addition to these norms concerning liability in the European Union, there are various national regimes to consider as well. As mentioned before, these systems are, broadly speaking, the French, English, and German ones. To further complicate the matter, each system of laws deals with torts in a different way and has different means of establishing the liability in tort of the alleged tortfeasor towards the injured party. Van Dam succinctly summarizes the differences as follows. French torts follow a strict liability regime and in exceptional cases follow fault-based liability. The rules fall under the norms given in the Code civil, in particular stemming from articles §1382 through §1384, and the system is predominantly concerned with the injured party. This can be seen in opposition to English tort law. Here we find torts "which provide a remedy (e.g., damages) if something has gone wrong in a particular way". Van Dam describes a multitude of torts, but of particular interest for our survey of this topic is the tort of negligence. German tort law is a combination of the Bürgerliches Gesetzbuch and the judge-made rules needed to fill lacunae found therein. One example of these rules is the Verkehrspflichten [1, pp. 18–19]. These rulings and regulations cover a wide variety of specific torts and are far more precise in their regulations than their civil law cousin, the French system.
Turning to autonomous vehicles, the most applicable category of tort law seems to be that related to movable objects, and in particular to motor vehicles. Once again van Dam proves quite useful in capturing the similarities between these three systems. Of note, he states:

Liability for animals and motor vehicles does not generally rest on the owner of the movable object but on le gardien (France), der Halter (Germany), or the keeper (England) [1, p. 403]. (his emphasis)
Although there is agreement on who is liable in torts concerning vehicles, van Dam draws out notable dissimilarities between these systems, which I lay out in Table 1.
These dissimilarities pose a problem for establishing how liability is to be determined for autonomous vehicles in the EU, despite there being a standardization of how to remedy torts pursuant to Directive 2009/103/EC of 16 September 2009 [1, pp. 415–416]. Who is the keeper and who is the driver of an autonomous vehicle? What sorts of proof would be needed to prove that the driver was negligent, and how would you measure the driver's conduct in the English system?
Table 1 Liability schemes

Country                   France              Germany                   England
Liability                 Absolute            Strict                    Negligence
Trigger                   Accident            Operation of vehicle      Driver's conduct
Contributory negligence   Inexcusable faute   Over age 10, sound mind   Yes
To what degree are the injured parties responsible?⁸ To help address these issues, one proposal found within the European Union has been to grant autonomous vehicles legal personhood, thereby possibly allowing the car itself to be a legal agent within the various legal systems within the EU [3].
3 On Legal Personhood

Today, there is often a great deal of confusion over the notion of personhood, and in particular legal personhood. This confusion stems, in part, from its long and varied history and can be seen as expressed in both popular literature and the media. In particular, this is seen when people are quick to object to the existence of non-human persons.⁹ In this section, we provide a sketch of this notion to help clarify this confusion. Returning to the issue at hand, in the report for the JURI Committee of the European Union on European civil law rules in robotics, two notions of legal personhood are explored. The first rests upon the more colloquial use of the term person and claims that "[t]raditionally, when assigning an entity legal personality, we seek to assimilate it to humankind", in particular with respect to animals. The second is a more technical understanding of the notion of a legal person. The author states that while legal personality is granted to a human being as a natural consequence of their being human, it is contrasted with the sort of legal personhood of non-humans, that is, the sort based on a legal fiction. To this end, the author notes that this sort of "legal person" always has a human being acting behind the scenes. Here, the author gives the recommendation that we do not ascribe legal personality to robots, as it would entail "tearing down the boundaries between man and machine, blurring the lines between the living and the inert, the human and the inhuman" [2, p. 16]. This second, more technical objection contains two aspects that should be addressed in turn. The first aspect is that personhood should not blur the lines between the human and inhuman. The second aspect is that there always needs to be a human operating behind the scenes, even in the case of the fictional sort of personality.
My objection to the first aspect is rooted in the history of the notion of personhood, which has a long history in theology and philosophy and is particularly found in metaphysics, ethics and, for our current purposes, legal theory. For the sake of brevity, we will only briefly address the historical aspects of this notion in order to frame this first objection. We begin our journey with its roots in antiquity and its significant developments since early medieval thought. As noted by the JURI report, citing Hobbes, it is an adaptation of persona, the sort of mask used by actors [2, p. 14].
⁸ To see the difference, van Dam notes that in the French system the injured party is at fault if they were, for example, trying to commit suicide, whereas in the English system the injured party often needs to establish the driver's negligence [1, p. 409].
⁹ For example, there is popular disdain for the notion of corporate personhood, recently brought to the forefront of our attention with cases like the United States Supreme Court case Citizens United v. FEC.
In antiquity, we find two allegorical uses of the term person: the first is legal and the second theological. In its legal sense (here in the Roman legal tradition), the term corresponds to the caput or status and to rights or capacities, respectively. The sort of persona ascribed to a particular man varies depending upon what light is being shed upon him [4, pp. 90–1]. Hence, a man can be a person with one set of rights and incapacities as pater familias but has a different personality as a holder of a public office [5, pp. 167–8]. Here, one's legal personality was merely a mask worn depending upon one's role under the law at a particular time, succinctly surmised by unus homo sustinet plures personas.¹⁰ The term's first adaptation into theological–philosophical thought is related to clarifying trinitarian theology [6, p. 4]. The notion of personality was first used by Tertullian (ca. 155–c. 240 AD) in his Adversus Praxean as a means of describing the three persons of God while maintaining that there is only one God. This mode of explanation was only later adopted by the broader Church in 362 AD during the Council of Alexandria [6, p. 4]. It was, however, much later, in the sixth century, in Boethius' works that we find a deepening of this concept. In Boethius, we find his definition of person as Persona est naturae rationabilis individua substantia.¹¹ This notion of person then moves from theological contexts to ecclesiastical contexts, and from there into legal political theory, where it is adapted for use in law and, in addition, to bolster the emperor, kings, and corporations (broadly understood), culminating in the early modern era with the theoretical emergence of the modern state in the works of Jean Bodin,¹² Hobbes and, in concrete practice, with Westphalian sovereignty in the mid-seventeenth century [8, 9], along with the desacralization of the state and law with Pufendorf and Doneau among others [10, p. 72].
It is here that we pick up with modern legal theory. In his opus "The Pure Theory of Law", Hans Kelsen devotes a chapter to the notion of legal personhood. In this seminal work, Kelsen describes the relationship between the physical person and the juristic person. Here, he is careful to circumscribe "person" in light of the notion of the legal subject. A legal subject is "he who is the subject of a legal obligation or a right" [11, p. 168]. Kelsen further explains that by right he does not mean the mere reflexive right but moreover the capacity to exercise:

the legal power to assert (by taking legal action) the fulfillment of a legal obligation, that is, the legal power to participate in the creation of a judicial decision constituting an individual norm by which the execution of a sanction as a reaction against the non-fulfillment of an obligation is ordered [11, p. 168]

Kelsen further notes that, for the purposes of being a person in its legal sense, the person exists separately from the physical human being and is dependent upon the legal structure within which it is found. This is so that the notion of legal person captures the fact that there are non-human legal persons, e.g., the EU, and humans who are not legal
10One man sustains many persons.
11"A person is an individuated substance of a rational nature." I think that it is important to mention here that this definition was designed specifically to account for non-human entities, viz. God and angels, in addition to human entities [7]. Further justification of this definition would require a realist metaphysics, which is far outside the scope of this essay and so will not be addressed.
12cf. Les Six Livres de la République.
persons, e.g., slaves.13 In essence, personhood makes any entity a legal agent/subject within a particular legal system.
Further refinements in the notion of legal personhood can be found in more recent works and serve to accommodate the variety of legal persons within a given legal system. Chopra and White, in their work A Legal Theory for Autonomous Artificial Agents, note the general inequality between various legal subjects depending on their status. For example, within the set of natural persons, i.e., human beings with legal personality, we find that some legal subjects are empowered with the right to vote (the power being subject to other norms in the system). Furthermore, juristic persons, that is, non-human legal persons, typically do not have the same rights as natural persons. So, following the previous example, they cannot vote, yet they can enter into contracts with other legal persons, e.g., an employment contract with a natural person. This contrast highlights a distinction made within the notion of legal person, namely that between a dependent and an independent legal person. This dependent and independent personality has long roots in legal theory, stemming all the way back to Roman law, where in the class of "persons" we find those who are alieni juris and sui juris, reflecting both sorts of personality respectively [5, p. 168]. Examples of the former include children and the mentally deficient, animals, corporations, ships, temples, etc., while examples of the latter include natural persons of sound mind [12, p. 159].
Having briefly covered the theory concerning legal personhood, we now return to the report drafted for the JURI Committee of the European Union. Does its objection to granting legal personhood hold? Its first objection, that the tradition of granting legal personhood to a thing is made in an effort to assimilate it to humankind, is not supported whatsoever by the historical development of the notion of legal personality. The report's second objection, that there is always a human being acting behind the scenes of "non-human legal persons" to grant them life, is stronger, although not altogether insurmountable. The mere fact that I am a human being does not necessarily entail that I am a person in the legal sense. Moreover, even if I am a legal person, I need not be a legal person sui juris, viz. in virtue of my status as an adult of sound mind, but could be a person alieni juris, namely a dependent upon some other person. It is only when I operate within a particular legal system, as a legal subject who is invested with a certain set of rights and obligations by that very legal system, that I am considered to be a legal person, either sui juris or alieni juris as the case may be.
13Further examples of this can be found in the European Court of Human Rights, which also implicitly maintains this distinction (cf. S&P v Poland), or in some cases of humans who are brain dead and artificially maintained on life support [12, p. 148].
4 Is Legal Personhood for Robots a Solution?

The preceding section drew to our attention the importance of recognizing the distinction between the "world of facts" and the "world of norms", or, as Kelsen describes it, the difference between an act (or series of acts) and its (their) legal meaning [11, p. 2]. The legal meaning of a certain act, or the rights and obligations of a certain entity, need not be obvious. Take, for example, the slaying of one man by another. If the context was a duel and duels are permitted, then the act is permissible; if, however, duels are not permitted, then the very same act would be considered murder. We are then left with a sort of dualism where we have brute facts residing in the "world of facts", and those facts may have a myriad of legal meanings dependent upon their placement in the "world of norms" [11, p. 2]. So if we accept the preceding argument that legal systems give rise to the existence of legal persons, which are ascribed upon our "world of facts", and that legal persons need not be human beings, it would seem simple enough to ascribe personality to autonomous vehicles and thereby make them agents within the scope of the law. However, such a move would require justification, and we would be left wondering how it helps to resolve our first question of "Who do you sue when no one is behind the wheel?". Answering these questions requires the work of jurists and can be formulated within the philosophy of law.
By ascribing legal personality to autonomous vehicles, we would change how we can understand them within particular normative systems, and, importantly, it would allow us to make the AV a legal agent within a particular legal system. The AV would become the driver and would thereby have all (or some) of the obligations imposed upon drivers according to the law. But, as I said in the previous section, personality itself is not all too informative, and when we consider a legal subject, and in particular autonomous vehicles, we need to ask what sort of personhood we should grant them and how we can use it to settle who takes responsibility when something goes wrong. This question is addressed in various works, including Chopra and White in the book A Legal Theory for Autonomous Artificial Agents [12, p. 153] and Pagallo in The Laws of Robots: Crimes, Contracts, and Torts [13, p. 152], and hinges upon how we view the particular robot. Is an autonomous vehicle a mere tool for transportation like a car, or is it more akin to an animal (which can also be used for transportation) like a horse? Does it reason more like a machine, an animal, a child, or even an adult? Artificial agents are unique in that the answers to these questions largely depend on what theory of agency you maintain and your conception of what norms are. The answers that Chopra, White, and Pagallo give implicitly rest upon a functionalist account of personality and upon an interest account of rights, which allows them to incorporate non-traditional entities like self-driving cars. These two accounts go hand in hand and require each other for intelligibility. But what are these accounts?

The functionalist account of personality maintains that whether or not a subject can be considered a person within a particular system of law depends on its capacity to fulfill certain functions and to have interests in
a particular right(s) within a specific domain.14 Here, they argue that if an artificial agent is capable of meeting these criteria, then it can become a legal agent [12, p. 17]. The answer naturally depends upon the robot in question and requires analogical reasoning to determine. As it stands, there is currently no robot that can reason like an animal, a child, or an adult, and so, for the time being, it would seem that we can set the question aside.
Nevertheless, such considerations are not solely the purview of science fiction. Establishing theoretical foundations for how to place more advanced robots into our legal system becomes more pressing as we approach a time where they may be able to reason within very specific fields and start to fulfill functionalist accounts of personality. If we adopt the functionalist account, then an autonomous vehicle could be a legal person qua driving (in much the same way Coca-Cola is a legal person qua corporation). This seems the more tenable the more autonomous the AV becomes. As the AV approaches the fifth level of automation according to the SAE standard [14, p. 9], more and more of the driving is performed by the AV, to the point that it has control over all functions and requires no supervision by the person using the vehicle. At these higher levels of automation, the system functions as the driver. By adopting the functionalist legal account of personality, we are able to maintain that the AV can in fact be a legal person in respect of its function as a driver for its "owner" or keeper.15 That being said, it would seem that it should be the dependent form of legal personhood: the autonomous vehicle, acting as "the driver", is dependent upon its owner, or "the keeper", in something reminiscent of an agent–principal relationship as suggested by Chopra and White [12, pp. 18–25], or even a master–servant relationship [12, p. 128]. By doing this, we by no means diminish the liability for torts committed; instead, there is a shift in the sort of tort law and legal doctrine (e.g., qui facit per alium, facit per se or respondeat superior) we use in determining liability in the instance of a tort.
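To make the functionalist reading concrete, the following minimal sketch (in Python) encodes the intuition just described: only at the higher SAE levels does the system itself become the candidate "driver". The level names follow SAE J3016, but the function name, the threshold, and the mapping are illustrative assumptions made for this chapter's argument, not a legal test or an implementation of the SAE standard.

```python
# Illustrative sketch only: a toy encoding of the functionalist reading discussed above.
# The enum mirrors the SAE J3016 levels 0-5; the helper and its threshold are assumptions.
from enum import IntEnum


class AutomationLevel(IntEnum):
    """SAE J3016 driving automation levels, 0 (no automation) to 5 (full automation)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5


def functional_driver(level: AutomationLevel) -> str:
    """Return which entity functions as 'the driver' on the functionalist reading.

    At levels 4-5 the system performs the whole dynamic driving task without human
    supervision, so it is the candidate for dependent legal personhood qua driver;
    below that, the human user remains the driver.
    """
    return "automated driving system" if level >= AutomationLevel.HIGH_AUTOMATION else "human user"


print(functional_driver(AutomationLevel.FULL_AUTOMATION))     # automated driving system
print(functional_driver(AutomationLevel.PARTIAL_AUTOMATION))  # human user
```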
To highlight this, let us consider a simplified example. The keeper of an autonomous vehicle sends the vehicle to pick up his children from school, and en route the car hits and injures a pedestrian. For the sake of simplicity, let us assume that there is a tort and compensation needs to be paid. Now we must ask, who should pay? If we accept that the car acts as an agent (in the capacity of being the driver) on behalf of the keeper in this sort of agent–principal relationship, then, while the driver (that is, the AV as the agent) committed the tort, the keeper (the principal) is ultimately responsible for paying compensation for any torts caused by his agent's actions when the agent is acting on his behalf (here, picking up the keeper's children from school). An advantage of granting personhood is that it can add protection for users and manufacturers of these autonomous vehicles against unintentional damages caused by said autonomous vehicle (which may prove all the more helpful
14This is opposed to a will theory of rights, which presupposes that the person is able to make claims upon other persons.
15As an aside, this would arguably fulfill the requirement of Articles 1 and 8 of the Vienna Convention on Road Traffic that all moving vehicles on roads must have a driver operating them.
if it is capable of learning). Returning to our example, if the pedestrian died, then the keeper could be protected from criminal charges of manslaughter but may still be required to pay compensation for a wrongful death claim resulting from the tort.
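The allocation of compensation in this example can be pictured as a single rule: when the AV commits a tort while acting within the task assigned by the keeper, the duty to compensate flows to the keeper as principal. The toy sketch below illustrates only that rule; every class and field name is a hypothetical placeholder, and real tort regimes involve many further conditions.

```python
# Toy illustration of the agent-principal allocation discussed above.
# All names are hypothetical; actual tort doctrines involve many more conditions.
from dataclasses import dataclass


@dataclass
class Tort:
    committed_by: str           # the acting agent, e.g. the AV functioning as "the driver"
    principal: str              # the keeper on whose behalf the agent acted
    within_assigned_task: bool  # was the AV performing the task set by the keeper?
    damages: float              # compensation owed for the harm


def liable_party(tort: Tort) -> str:
    """Return who pays compensation under a qui-facit-per-alium style rule."""
    # If the agent acted within the scope of the keeper's instruction,
    # liability for compensation shifts to the principal (the keeper).
    return tort.principal if tort.within_assigned_task else tort.committed_by


school_run = Tort(committed_by="AV-driver", principal="keeper",
                  within_assigned_task=True, damages=10_000.0)
print(liable_party(school_run), "owes", school_run.damages)  # keeper owes 10000.0
```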
5 Conclusion

In this paper, we have outlined the current state of affairs for how torts could be settled for autonomous vehicles, and a possible means of incorporating them within the currently available frameworks, that is to say, legal personhood. I have also pointed to the gap in which these devices reside and the problems that this generates in establishing the liability of the user of the device. Here, I have argued (a) that legal personhood is possible for autonomous vehicles and (b) that it would not diminish the liability of the users of the vehicles, while acknowledging that more work needs to be done (both in terms of technology and theory) before this can happen. Although this solution is possible, it should be weighed against the disadvantages that introducing a new legal agent into the system would generate. One disadvantage is that the creation of a totally new legal subject within the system would add further elements to an already nebulous system of tort laws and traffic laws. In legal systems where persons need permission to drive, would the autonomous vehicle need a driver's licence if it had personality? Is the agent–principal relationship enough to cover torts, or do we need more specific laws? Does this relationship translate easily into the other legal systems present in the European Union? While I do not have answers to these questions, they will certainly need to be considered before ascribing personality to autonomous vehicles. Nevertheless, the current state of tort laws within the European Union does not quite fit what autonomous vehicles are (being somewhere between an animal and a mere tool), and the lack of a unified system makes it even more difficult to assess how we should place these new agents within our world; yet granting them personality may still be a step in the right direction, despite the work that needs to be done beforehand. For example, which rights and obligations should we grant them? How do we justify these grants? Would we have any duties to these new persons? Why or why not? And on what level of society would they reside? If we adopt personality as our solution, what should its relationship to its owner look like? These questions should be considered in future works, and addressing them would be beneficial to the adoption of AVs into our society.
Acknowledgements This research was supported by the National Science Centre of Poland (BEETHOVEN, UMO-2014/15/G/HS1/04514).
References

1. van Dam C (2013) European tort law. Oxford University Press. https://books.google.pl/books?id=EAuiQgAACAAJ
2. Nevejans N (2016) European civil law rules in robotics. European Union. http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU%282016%29571379_EN.pdf
3. European Parliament Press Room (2017) Robots: legal affairs committee calls for EU-wide rules. Press release. http://www.europarl.europa.eu/news/en/press-room/20170110IPR57613/robots-legal-affairs-committee-calls-for-eu-wide-rules
4. Melville RD (1915) A manual of the principles of Roman law relating to persons, property, and obligations. W. Green & Son Ltd
5. Campbell G (2008) A compendium of Roman law founded on the Institutes of Justinian. The Lawbook Exchange, Ltd
6. Brozek B (2017) The troublesome 'person'. In: Kurki VAJ, Pietrzykowski T (eds) Legal personhood: animals, artificial intelligence and the unborn. Springer, Cham, pp 3–14
7. Boethius (1918) The theological tractates and The consolation of philosophy (trans: Stewart HF, Rand EK). Heinemann/Harvard University Press. http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:2008.01.0677:loebline=pos=58
8. von Gierke O (1922) Political theories of the Middle Age. Cambridge University Press, Cambridge
9. Kantorowicz EH (1957) The King's two bodies: a study in mediaeval political theology. Princeton University Press, Princeton
10. Kurki VAJ (2017) Why things can hold rights: reconceptualizing the legal person. In: Kurki VAJ, Pietrzykowski T (eds) Legal personhood: animals, artificial intelligence and the unborn. Springer, Cham, pp 69–89
11. Kelsen H (2005) Pure theory of law. The Lawbook Exchange
12. Chopra S, White LF (2011) A legal theory for autonomous artificial agents. University of Michigan Press, Ann Arbor
13. Pagallo U (2013) The laws of robots: crimes, contracts, and torts. Law, Governance and Technology Series, vol 10. Springer
14. US Department of Transportation, NHTSA (2016) Federal automated vehicle policy: accelerating the next revolution in road safety. US federal policy concerning AVs
Robotics, Big Data, Ethics and Data
Protection: A Matter of Approach
Nicola Fabiano
Abstract In Europe, the protection of personal data is a fundamental right. Within this framework, the relationship among robotics, Artificial Intelligence (AI), Machine Learning (ML), data protection and privacy has recently been receiving particular attention, the most important topics related to data protection and privacy being Big Data, the Internet of Things (IoT), Liability and Ethics. The present paper describes the main legal issues related to privacy and data protection, highlighting the relationship among Big Data, Robotics, Ethics and data protection, and trying to address the solution correctly through the European General Data Protection Regulation (GDPR) principles.

Keywords Robotics · Big Data · Ethics · Data protection
1 The European Law on the Processing of Personal Data

In Europe, the protection of natural persons in relation to the processing of personal data is a fundamental right. In fact, Article 8 of the Charter of Fundamental Rights of the European Union (the 'Charter') [8] concerns the protection of natural persons in relation to the processing of personal data.1
1Article 8 Protection of personal data. (1) Everyone has the right to the protection of personal data
concerning him or her. (2) Such data must be processed fairly for specified purposes and on the basis
of the consent of the person concerned or some other legitimate basis laid down by law. Everyone
has the right of access to data which has been collected concerning him or her, and the right to
have it rectified. (3) Compliance with these rules shall be subject to control by an independent
authority.
N. Fabiano (B)
Studio Legale Fabiano, Rome, Italy
e-mail: info@fabiano.law
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0_8
Furthermore, the Charter also considers the respect for private and family life2 as a crucial aspect of privacy.

Moreover, the Treaty on the Functioning of the European Union (TFEU) considers the right to the protection of personal data.3

This is the general legal framework, and the protection of personal data falls under Directive 95/46/EC [5].

Nevertheless, in 2016 the European Regulation (EU) 2016/679 was published; it entered into force on 25 May 2016 but applies from 25 May 2018 [7]. According to its Article 94, this Regulation will repeal Directive 95/46/EC [5] with effect from 25 May 2018. Therefore, Directive 95/46/EC will remain applicable until 25 May 2018.
The GDPR obviously mentions the Charter of Fundamental Rights of the European Union in its first recital (Whereas).4

The primary goal is to harmonize the legislation of the Member States: the GDPR will be directly applicable in each European State, avoiding possible confusion among domestic laws. The GDPR introduces numerous changes, such as the Data Protection Impact Assessment (DPIA), Data Protection by Design and by Default (DPbDbD), the data breach notification, the Data Protection Officer (DPO), the very high administrative fines in respect of infringements of the Regulation, and so on.
Regarding the protection of personal data, apart from the aforementioned GDPR, there is also Directive 2002/58/EC [6], concerning the processing of personal data and the protection of privacy in the electronic communications sector. In fact, according to Article 95 of the GDPR, there is a relationship with this Directive.5

Directive 2002/58/EC has the aim 'to ensure an equivalent level of protection of fundamental rights and freedoms, and in particular the right to privacy, with respect to the processing of personal data in the electronic communication sector and to ensure the free movement of such data and of electronic communication equipment and services in the Community'.6
In this legal panorama, it is clear that technology and law are not at the same level, because the first (technology) is always ahead of the second (law). The
2Article 7 Respect for private and family life. Everyone has the right to respect for his or her
private and family life, home and communications.
3Article 16(1) says: ‘Everyone has the right to the protection of personal data concerning them’.
4The protection of natural persons in relation to the processing of personal data is a fundamental
right. Article 8(1) of the Charter of Fundamental Rights of the European Union (the ‘Charter’) and
Article 16(1) of the Treaty on the Functioning of the European Union (TFEU) provide that everyone
has the right to the protection of personal data concerning him or her.
5The Article 95 says: ‘This Regulation shall not impose additional obligations on natural or legal
persons in relation to processing in connection with the provision of publicly available electronic
communications services in public communication networks in the Union in relation to matters
for which they are subject to specific obligations with the same objective set out in Directive
2002/58/EC’.
6Article 1.
actions of the legislator have always followed technological solutions, and so the rules have to be able to take account of technological evolution.

It is crucial to analyse the GDPR in order to be ready for and comply with the new data protection Regulation. In fact, the General Data Protection Regulation (GDPR) represents an innovative data protection law framework because of the several purposes on which it is based.
2 Robotics and Data Protection

The relationship among robotics, Artificial Intelligence (AI), Machine Learning (ML), data protection and privacy has been receiving specific attention in recent times. These topics were addressed in 2016 at the 38th International Conference of Data Protection and Privacy Commissioners, which produced a 'Room document' titled 'Artificial Intelligence, Robotics, Privacy and Data Protection'.7 Recently, the Information Commissioner's Office (ICO)8 published a discussion paper titled 'Big data, artificial intelligence, machine learning and data protection' [10].

The most important topics related to data protection and privacy are Big Data, the Internet of Things (IoT), Liability and Ethics.
The Big Data topic is also related to the Internet of Things (IoT) phenomenon, which gives rise to several applications in different sectors (Personal, Home, Vehicles, Enterprise, Industrial Internet) [14]. The IoT is a continuously evolving system that can be considered an ecosystem. The fields of Big Data and Blockchain9 are, indeed, the main emerging phenomena in the IoT ecosystem, but people have paid more attention to the technical and security issues than to those of privacy and the protection of personal data. Certainly, the security aspects are relevant to avoid or reduce the risks for data privacy. However, we cannot dismiss the right approach, according to the GDPR's principles.

The IoT ecosystem allows the development of several applications for different sectors, such as, in the last few years, the 'smart' one. In fact, we talk about the smart city, smart grid, smart car, smart home, etc. In each of these fields, applications are being developed that allow objects to interact among themselves, transferring information in real time and processing Big Data.
From a technical point of view, these applications have to be developed guaranteeing a high security level to avoid any alteration. As the technology develops, the attacks on the systems grow as well. However, we cannot dismiss the several threats to these systems. The IoT concept is broad, and it can also concern critical infrastructure: what about this crucial point? It is clear that technological evolution
7The document is available on the EDPS's website here: https://edps.europa.eu/sites/edp/files/publication/16-10-19_marrakesh_ai_paper_en.pdf.
8The UK's independent body set up to uphold information rights.
9'The blockchain, better known regarding the bitcoin, was conceptualized by Satoshi Nakamoto (Nakamoto n.d.)' [2] in 2008.
is a value but, at the same time, it is important to prevent any fraud attempt by using both high-security measures and privacy and personal data protection solutions.
2.1 Big Data and Data Protection

Big Data has been defined by Gartner10 as follows: 'Big Data is high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation'.

Thus, Big Data is a phenomenon that consists of fast and exponential data growth and data traffic, and requires data analysis and data mining procedures. Hence, Big Data implies high value (the Four V's of Big Data—Volume, Velocity, Variety and Veracity (IBM [11])—are well known, but, considering data as a value, it is possible to extend the approach to five V's, the last V being 'Value'). It is very simple to develop applications that, by having access to data, can execute data mining activities, with every imaginable consequence. In this context, the main goal is to protect personal data because of their very high value.
Nowadays, we are witnessing growing interest in the fast evolution of the Internet and, more and more often, we hear about Big Data, Artificial Intelligence (AI) and Machine Learning (ML). What are they about? Indeed, AI and ML are two different but strictly related topics.

The central notion is the rational agent, which 'is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome' [13].11
Furthermore, according to Mitchell [12]:

Machine Learning is a natural outgrowth of the intersection of Computer Science and Statistics ... Whereas Computer Science has focused primarily on how to manually program computers, Machine Learning focuses on the question of how to get computers to program themselves (from experience plus some initial structure). Whereas Statistics has focused primarily on what conclusions can be inferred from data, Machine Learning incorporates additional questions about what computational architectures and algorithms can be used to most effectively capture, store, index, retrieve and merge these data, how multiple learning subtasks can be orchestrated in a larger system, and questions of computational tractability.
Having said this, it is certainly clear that these topics concern the computer science area. However, as insiders will certainly agree, the distorted picture of AI that exists on the Web is striking: it is enough to read the articles and contributions available on the Internet to get an idea of this phenomenon. Searching the Web, it is possible to find a lot of resources about AI, as if it represented the discovery of the century. In this way, it might seem that Artificial Intelligence (AI) is a recent
10Gartner IT glossary, Big data. http://www.gartner.com/it-glossary/big-data (Accessed 21/08/2017).
11'Intelligence is concerned mainly with rational action. Ideally, an intelligent agent takes the best possible action in a situation' [13, p. 30].
discovery, even 2017 news. Indeed, this is a very restrictive way to describe and present the topic, because anyone who deals with computer science knows well that this is not so.12 Hence, due to technological progress, and especially to societal evolution, AI and machine learning have come to be viewed as innovative resources for future development.
Generally speaking, data is collected, stored and used: what about the processing of personal data? From a legal perspective, it is mandatory to comply with the GDPR principles according to Article 5, and specifically: Lawfulness, fairness and transparency (5.1a), Purpose limitation (5.1b), Data minimization (5.1c), Accuracy (5.1d), Storage limitation (5.1e), Integrity and confidentiality (5.1f), and Accountability (5.2).

Moreover, we cannot dismiss the 'data subject's consent' (Article 7) and security (Article 32).

One author [15] argues, despite the before-mentioned principles, that the GDPR is incompatible with Big Data and that this needs to be taken into account in its implementation.13
2.2 Ethics, Data Protection and Privacy

The collection and use of data also implies an ethical approach to Robotics, Artificial Intelligence, Big Data and the IoT ecosystem. Generally speaking, ethics could appear an unimportant topic; instead, it is a very important aspect, especially when talking about data protection and privacy.

The European Data Protection Supervisor (EDPS) issued Opinion 4/2015 [4]. In this Opinion the EDPS, discussing Big Data, highlighted the tracking of online activity.14

The ICO takes the same position in the before-mentioned discussion paper [10], which contains specific statements on Ethics.

In Europe, it is possible to address any matters related to Ethics and Robotics (including Big Data, AI, IoT, ML) through the GDPR. Outside Europe, instead,
12Russell and Norvig [13], p. 16: 'The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943)'.
13‘Yet, the scenario that the GDPR’s incompatibility will lead to an impact that would be both
negative and substantial must be taken under serious consideration. While the EU’s strong position
towards the protection of privacy rights is admirable, it is possible that the full implications the
GDPR will have for the important Big Data practices, and their benefits, have not been fully and
properly considered. Therefore, the opinions here noted must be kept in mind as this new Regulation
moves towards enactment and implementation’.
14We read: ‘Such “big data” should be considered personal even where anonymization techniques
have been applied: it is becoming ever easier to infer a person’s identity by combining allegedly
“anonymous” data with other datasets including publicly available information, for example, on
social media. Where that data is traded, especially across borders and jurisdictions, accountability
for processing the information becomes nebulous and difficult to ascertain or enforce under data
protection law, particularly in the absence of any international standards.’
because of the lack of an international ethical standard, the matter should be addressed through policies or other contractual solutions.
Interest in Ethics is growing so much that industries and public bodies are paying attention to this topic with policies and initiatives that highlight how to address the ethical dimension correctly. This scenario demonstrates that Ethics is an emerging profile related to Big Data, data protection and privacy, as is the awareness of it. Raising awareness of Ethics is undoubtedly a significant step towards the right approach.
The GDPR proposes (Article 32) some security solutions to protect personal data and manage the risks. Apart from the possible solutions (inter alia, pseudonymisation and encryption of personal data), the ethical focal point is to protect personal data while guaranteeing the dignity of each natural person. In Europe, as the EDPS clarified, there does not exist a legal protection for dignity as a fundamental right, but it shall be derived from the data protection legal framework and specifically from the GDPR. An ethical approach is needed, not only theorized and developed by public bodies (such as the European Ethics Advisory Board) but mainly practised by the private sector. The principles provided for in Article 5 of the GDPR are the primary references for Ethics, but we cannot dismiss the other rules of the same Regulation. Risk management requires the necessary reference to the GDPR's rules.
Hence, one ethical aspect is transparency, considering data protection and privacy as a value and not as a mere cost. Industries and organizations often seem to have a wrong approach to privacy and data protection, evaluating them only as a cost. Data protection and privacy are, indeed, 'processes', and assessing them in order to comply with the law is the right way to address them.

The data subject must be at the centre of the data processing, considering his/her rights and the power to control his/her personal data. The main point, thus, is that individuals must have full control of their personal data. Some ethical issues emerge from the use of personal data by industries or organizations. It would be desirable to consider a business ethics approach in order to process personal data correctly, according to the GDPR (or, in general, the laws). It is evident that some ethical rules can be provided by the law, but in certain cases they might result in policies or agreements.
We know that the GDPR concerns the protection of personal data in Europe, and one issue is related to processing outside Europe. The GDPR's jurisdiction could be a limit for any business from or outside Europe; in this case, policies or agreements, as mentioned, can fill the gap.
2.3 Data Protection by Design and by Default

Apart from the reference to the GDPR principles shown above, there is another fundamental key provided for in Article 25, namely Data Protection by Design and by Default (DPbDbD): specifically, paragraph 1 relates to Data Protection by Design, whereas paragraph 2 relates to Data Protection by Default. In October 2010,
the 32nd International Conference of Data Protection and Privacy Commissioners adopted a resolution on Privacy by Design (PbD) [9] that is a landmark and represents a turning point for the future of privacy. This Resolution proposes the following seven foundational principles [1]: Proactive not Reactive, Preventative not Remedial; Privacy as the Default; Privacy Embedded into Design; Full Functionality: Positive-Sum, not Zero-Sum; End-to-End Lifecycle Protection; Visibility and Transparency; Respect for User Privacy.
The main goal is to bring together two concepts: (a) data protection and (b) the user. To develop an effective data protection and privacy approach, we must start any process with the user—the person who has to be protected—putting him or her at the centre. This means that, during the design process, the organization always has to be thinking of how it will protect the user's privacy. By making the user the starting point in developing any project (or process), we realize a PbD approach.
The European Data Protection Supervisor (EDPS) has promoted PbD, touting the concept in its March 2010 Opinion of the European Data Protection Supervisor15 on Promoting Trust in the Information Society by Fostering Data Protection and Privacy [3]. It was not long after this endorsement that the 32nd International Conference of Data Protection and Privacy Commissioners adopted the PbD concept as well.

In EU Regulation 2016/679, this approach became Data Protection by Design and by Default (DPbDbD). Between 'Privacy by Design' (PbD) and 'Data Protection by Design and by Default' there are differences in terms of methodological approach, but the main goal is to highlight that any privacy project needs to start from the user in order to protect him/her.
According to Article 25, hence, it is possible to address each project correctly by applying these rules. In fact, Article 25(1) says that 'the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organizational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimization, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects'. According to this rule, it is important to pay attention to setting up appropriate technical and organizational measures. The pseudonymisation method is one of the possible actions for achieving the goal of integrating into the processing the necessary safeguards to protect the rights of data subjects. Moreover, according to Article 25(2), the controller 'shall implement appropriate technical and organizational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed'. Apart from data protection and privacy laws, it is also recommended, in the design phase, to address the privacy by design and by default principles correctly, evaluating the use of technical standards such as ISO/IEC 27001, ISO/IEC 27021 or other similar resources. In this way, we could adopt a complete approach drawing on all the resources, legislative and technical references; it is an excellent method to achieve the goal of a full integration among all the resources.
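As a purely illustrative example of one such technical measure, the minimal sketch below shows keyed pseudonymisation of a direct identifier: the stored record keeps only a pseudonym and the data actually needed, while the secret key, which would be held separately, is required to link the pseudonym back to a person. The function, key handling, and field names are assumptions made for this sketch, not a prescribed GDPR mechanism or a compliance recipe.

```python
# Minimal pseudonymisation sketch (illustrative only, not a compliance recipe).
# A direct identifier is replaced by a keyed HMAC pseudonym; without the secret
# key, kept separately, the pseudonym cannot easily be linked back to the person.
import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately"  # assumption: managed outside the dataset


def pseudonymise(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable pseudonym for an identifier using HMAC-SHA256."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"name": "Jane Doe", "email": "jane@example.org", "reading": 42}
safe_record = {
    "subject_pseudonym": pseudonymise(record["email"]),  # replaces the direct identifier
    "reading": record["reading"],                         # data minimisation: keep only what is needed
}
print(safe_record)
```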
15We read ‘a key tool for generating individual trust in ICT’.
3 Conclusions

This contribution is founded on a legal approach, to demonstrate that it is possible to comply with 'robotics and ethics' principles and address them according to the laws, opinions and technical rules. Moreover, the data protection and privacy domains have taken on particular importance because of the relevance of the data subject's rights according to the GDPR. It appears that there is a close relationship between the topics related to the 'robotics and ethics' domain and the data protection one. We cannot disregard the data subject's rights in each project, especially during the design phase, applying the data protection by design and by default principles. Therefore, in each 'robotics and ethics' project, during the design phase, consideration should be given to the data protection principles and the possible consequences for the data subject. It is quite relevant to adopt any measures, whether security or organizational ones, to reduce risks. Following this approach, it is possible to address any project correctly.
References

1. Cavoukian A (2010) Privacy by design: the 7 foundational principles. https://www.ipc.on.ca/wp-content/uploads/Resources/7foundationalprinciples.pdf
2. Dieterle D (2017) Economics: the definitive encyclopedia from theory to practice, vol 4. Greenwood
3. EDPS (2010) Opinion of the European Data Protection Supervisor on promoting trust in the information society by fostering data protection and privacy. https://edps.europa.eu/sites/edp/files/publication/10-03-19_trust_information_society_en.pdf
4. EDPS (2015) Opinion 4/2015—towards a new digital ethics. Data, dignity and technology. https://edps.europa.eu/sites/edp/files/publication/15-09-11_data_ethics_en.pdf
5. European Parliament (1995) Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31995L0046&from=EN
6. European Parliament (2002) Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32002L0058&from=en
7. European Parliament (2016) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN
8. European Union (2012) Charter of fundamental rights of the European Union. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:12012P/TXT&from=EN
9. ICDPP Commissioners (2010) Resolution on privacy by design. In: Proceedings of the 32nd international conference of data protection and privacy commissioners, 27–29 October, Jerusalem. https://edps.europa.eu/sites/edp/files/publication/10-10-27_jerusalem_resolutionon_privacybydesign_en.pdf
10. ICO (2017) Big data, artificial intelligence, machine learning and data protection. https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf
11. Marr B (2015) Why only one of the 5 Vs of big data really matters. http://www.ibmbigdatahub.com/blog/why-only-one-5-vs-big-data-really-matters
12. Mitchell T (2006) The discipline of machine learning. http://www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf
13. Russell S, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Pearson
14. Turck M (2016) What's the big data? Internet of things market landscape. https://whatsthebigdata.com/2016/08/03/internet-of-things-market-landscape/
15. Zarsky T (2017) Incompatible: the GDPR in the age of big data. http://scholarship.shu.edu/cgi/viewcontent.cgi?article=1606&context=shlr
The Concept of Friendliness in Robotics:
Ethical Challenges
Maria Isabel Aldinhas Ferreira
“Men have no more time to understand anything.
They buy things all readymade at the shops.
But there is no shop anywhere where one can buy friendship […]”
—Antoine de Saint-Exupéry, The Little Prince
Abstract Socially interactive robots differentiate from most other technologies in that they are embodied, autonomous, and mobile technologies capable of navigating, sensing, and interacting in social environments in a human-like way. By displaying behaviors that people identify as sentient, such as appearing to recognize people's faces, making eye contact, and responding socially by exhibiting emotions, robots create the illusion of interaction with a living being capable of affective reciprocity. The present paper discusses the ethical issues emerging from this context by analyzing the concept of [friendliness].

Keywords Social robots · Empathy · Affective behavior · Friendliness · Deception
1 Technological Artifacts

Socially interactive robots will soon populate every domain of existence as their abilities progressively become technically feasible for application in real-life contexts.
M. I. Aldinhas Ferreira (B)
Faculdade de Letras da Universidade de Lisboa, Centro de Filosofia da Universidade de Lisboa,
Lisbon, Portugal
e-mail: isabel.ferreira@letras.ulisboa.pt
M. I. Aldinhas Ferreira
Instituto Superior Técnico, Institute for Systems and Robotics,
University of Lisbon, Lisbon, Portugal
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0_9
As happens with all other artifacts, technological artifacts—as robots are—emerge in specific economic, social, and cultural frameworks and are a consequence of a process of evolution, stemming from the accumulated experience and knowledge of preceding generations. Ultimately, they aim to promote human well-being, providing solutions to the community's specific problems and needs and answering particular expectations.
When a new technological artifact is born, it endures a short period of public trial, sometimes even of distrust by the members of society. It starts interacting with consumers as these begin using it, sometimes cautiously, adapting to it. If it proves to be safe, works well, and is useful, it becomes trendy, eventually fashionable; it is sold by the thousands and is massively incorporated, becoming part of the typical routines and behaviors of millions. Some authors [3] refer to this process, in which technology is integrated into the organizational structure, daily routines, and values of the users and their environments, as domesticating technology, in an analogy to the process that takes place when a new pet enters the household and learns its rules. On the other hand, authors such as [19] stress the role new technology plays in shaping or reshaping people's way of living: "when technologies are used, they help to shape the context on which they fulfill their function, they help to shape human actions and perceptions, and create new practices and new ways of living" (page 92). In the dialectics that characterize this process of incorporation, the user and their environment change and adapt according to the specificities of the technological artifact; these adaptations feed back into innovation processes in industry, shaping the next generation of technologies and services. According to domestication theory, this process develops in four phases:
1. Appropriation: When a technology leaves the world of commodity, it is appropriated. Then, it can be taken by an individual or a household and owned.
2. Objectification: This is expressed in usage but also in the dispositions of objects in lived space.
3. Incorporation: The ways in which objects, especially technologies, are used. They may even become functional in ways somewhat removed from the initial intentions of designers or marketers.
4. Conversion: The technology passes out of the household, as the household defines and claims itself and its members in the "wider society."
It is primarily through industrial design that the technological potential is transformed into attractive and easy-to-use products. In this process, the designer considers not only obvious elements such as form, function, interaction, ergonomics, and materials but also more complex human issues such as the desires and idiosyncrasies of the intended audience and the fluctuations of fashion and trends [2].

However, the incorporation of a new technological artifact is not only the result of its conceptual drawing and prototyping by those who have conceived it, but it is also the result of a complex process of interaction that comprehends its validation by end users, the emergence of the associated behavioral patterns, and the eventual deep or superficial updating according to the consumers' feedback.
Fig. 1 Shaking hands
2 Interacting in a Human-Like Way

Robots, namely social robots, differentiate from most other technologies in that they are embodied, autonomous, and mobile technologies capable of navigating, sensing, and interacting in social environments in a human-like way.

At present, social robots are no longer just objects of research; they are not limited to the laboratory, and their performance is not restricted to technical demonstrations or technical exhibitions for experts. The market forecast for toy robots and hobby systems is about 9.5 million units, with about 994,000 robots for education. It is expected that sales of robots for elderly and handicap assistance will reach about 32,900 units in the period 2018–2020 [16]. Social robots that are already massively produced reflect in their conception the efforts of industrial design to attract consumers, not only by complying with the features and rules of human habitats but also by offering, for the very first time, an object whose utility function is associated with the capacity to establish interaction with users in a human-like way. By displaying behaviors that people identify as sentient, such as appearing to recognize people's faces, making eye contact, and responding socially by exhibiting emotions, robots allow: (i) a machine to be understood in human terms, (ii) people to relate socially with it, and (iii) people to empathize with it.

However, all this also leads people to perceive the robot as animate, i.e., as being alive, endowed with agency and intentionality and capable of experiencing feelings toward them (Figs. 1 and 2).
While academic research has traditionally hardly ever targeted the problem of appearance1 [4, 8], the established priorities being the functional issues concerning
1An exception to this is the FP7 MOnarCH project.
Fig. 2 Displaying affection
navigation, perception, manipulation, and so on, industry acknowledges that the design options a particular robotic application assumes—its dimensions, its materiality, how it looks, and how it interacts and engages with people—are crucial not only for fostering a rich human–robot interaction but also for better selling the product. What is presently offered to consumers by the most representative industry is generally a product that, apart from its functional capacities, is also capable of engaging emotionally with the user in a pleasant, "friendly" way, following human patterns of behavior and making use of either verbal language or nonverbal cues.
3 What Exactly Does "Acting Friendly" Mean?

The Cambridge online dictionary [6] defines [friendliness] as: behaving in a pleasant, kind way toward someone. On the other hand, in Freedictionary.com [17], [friendly] is defined as: outgoing and pleasant in social relations (Figs. 3 and 4).

The fundamental role played by a friendly attitude in establishing successful social relations in every domain and context is commonly acknowledged. This importance is made salient by popular Web sites which try to provide readers with essential information on how to behave in a socially adequate and successful manner. A good example of this is the wiki webpage called "How to do anything ..." [20]. The Web site organizes the information under the heading "How to Be Friendly" in three distinct parts, unfolding into several subparts which are actually recommendations:
Fig. 3 Taking a selfie
Fig. 4 High-five
1. Being Approachable:
a. Smile more;
b. Have an open body language;
c. Drop distractions;
d. Make eye contact;
e. Laugh easily.
2. Mastering Friendly Conversation:
a. Master small talk;
b. Ask people about themselves;
c. Compliment;
d. Address the person by name;
e. Never respond in a neutral or busy way;
f. Focus on the positive;
g. Open up.
We will not analyze the nature of these recommendations here, but we can immediately recognize some of those listed in 1 and 2 as being present in the way most robotic applications interact with their users (a deliberately simplistic sketch of such a mapping follows the list):

Smile;
Make eye contact;
Compliment;
Address the person by name;
Ask people about themselves;
Master small talk.
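To make the parallel concrete: in practice, a social robot's "friendliness layer" is often little more than a mapping from perceived social events to scripted behaviors of exactly this kind. The following Python sketch is a deliberately simplistic, hypothetical illustration (invented event names and behavior labels, not the control code of any actual product).

```python
# Scripted "friendly" responses keyed to perceived social events; each mirrors
# one of the wikiHow-style recommendations listed above.
FRIENDLY_BEHAVIORS = {
    "face_detected":       ["smile", "make_eye_contact"],
    "known_face_detected": ["smile", "greet_by_name", "ask_about_their_day"],
    "silence_too_long":    ["start_small_talk"],
    "user_achievement":    ["compliment"],
}

def select_behaviors(event: str) -> list:
    """Return the scripted behaviors triggered by a perceived social event."""
    return FRIENDLY_BEHAVIORS.get(event, [])

def act(event: str) -> None:
    for behavior in select_behaviors(event):
        print(f"executing behavior: {behavior}")

act("known_face_detected")   # smile, greet_by_name, ask_about_their_day
```

Nothing in such a mapping involves feeling anything: the "friendliness" is entirely on the surface, which is precisely what gives rise to the ethical question addressed in the next section.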
4 Friendliness in Robotics—An Ethical Issue?
The need for interaction with the Other(s) is an inalienable part of the human condition, as human beings are inherently social. Identity depends on interaction with the Other, this interaction being vital for the definition of who each of us is. The notion of Self presupposes the sense of alterity that comes associated with the definition of Otherness. Fiction has frequently illustrated this urge for the Other, the one I can interact with, in an interaction that is essential to human existence [5]. In the film "Cast Away" [11], a man absolutely deprived of human contact projects his own humanity onto a ball to which he assigns human features: a face (the print of his own bloody bare hand) and a name (Wilson), thereby creating that essential Otherness and electing that object as his equal.
Bonding and attachment are consequently likely to be fostered by objects possessing a lifelike appearance and endowed with human-like capacities, namely the capacity to act in a friendly way. This almost inevitably increases the likelihood of people forming emotional attachments to artificial beings, erroneously finding them capable of genuine affective reciprocity [18].
As has been stressed [1, 9, 10, 14, 15], this is particularly relevant when we are considering children and elderly people. In the case of seniors, the situation can be particularly dramatic, as either at home or in retirement residences elderly people generally experience solitude and the absence of social and/or family ties. In the case of children, who depend on genuine affective feedback for harmonious development, permanent or even frequent interaction with an artificial entity will lead to a state of deception with still unpredictable consequences. These two populations, whether through frailty, dementia, or inexperience, are particularly prone to engaging in a relationship that in fact does not exist because it is totally unidirectional (Figs. 5 and 6).
Fig. 5 Wilson or the essential otherness
Fig. 6 Expressing care and affection towards a machine
A particularly relevant instance of this situation is the case of Paro [13].
To make Paro's interactions more realistic, Takanori Shibata even flew out to a floating ice field in northeastern Canada to record real baby seals in their natural habitat. In addition to replicating those sounds in the robot, he designed it to seek out eye contact, respond to touch, cuddle, remember faces, and learn actions that generate a favorable reaction. The result is a "sweet thing" that appears to seek attention and affection and is in this way identified by many as a positive stimulus, namely for elderly people (Fig. 7).
Just like animals used in pet therapy, Shibata argues, Paro can help relieve depression and anxiety, but it never needs to be fed and does not die. Paro has sold in the thousands, and even neurologists have been introducing it in hospital wards as a means of keeping company to inpatients, namely those with dementia [12]. It can maintain this fundamental contact even when there is no one else around for hours.
Fig. 7 Affective interaction with Paro
Cases like this, which lead to an inevitable state of deception and contribute to replacing human interaction and human ties with an artificial succedaneum, are an attack on human rights and a menace to human dignity, whatever the stage of life human beings are in and whatever their physical or mental condition. Human interaction and human affection are irreplaceable. However, we have to recognize that once an artificial entity is responsive to one's personal needs and this response is highly customized, there will likely be some emotional connection between that artificial entity and the human being it is interacting with.
When a robot becomes someone's companion, an inevitable bond arises, whether that someone is a child or an elderly person seeking affection. The designation "robot companion" has been favored when speaking of the type of relationship that may bond a person with their robot. [Companionship] contains a certain amount of warmth but is less demanding than the concept of [friendship], as it does not require the symmetry involved in that type of relationship, which, according to [7], entails a series of features that make the concept inherently human and impossible to replicate. According to the author, [friendship] is a form of love, a social bond concretely situated and embodied, a relation essential for personhood, for having a self, and for responsible belonging to a community.
According to him, the modern notion of friendship (not completely absent in ancient sources, cf. Aristotle on "perfect" friendship) is characterized by a set of features, addressed differently by philosophers, sociologists, and anthropologists, namely:

the relation has a private rather than public character;
it is affectionate and to some extent preferential and exclusive;
it is constituted by liking and caring for the person for their own sake;
it is mutual, dialogic, and involves some degree of realistic assessment of its nature;
it is constituted by a sharing of important parts of life, exchanges of thoughts and experiences, and is thus also investigative, open to novelty, curious to better know the world, the other self, and how the other sees oneself;
it is characterized by confidentiality and trust, making possible the sharing of secrets, disclosing things to the friend that one would normally keep out of the public sphere;
it is entered voluntarily and is based upon mutual respect and regard for similarities as well as differences between friends;
it presupposes a surplus of time and material goods, i.e., it is characterized by affordability and generosity (but is not focused upon or constituted by any need for political or material support in the fight for survival or social advancement);
it is never perfect, accepting imperfections both in the relation and in the friend;
it is vulnerable to the breaking off of the relation by one of the friends.
5 Conclusions
All those involved in the design, production, and deployment of social robots have to be aware of the following fundamental facts:
1. Affective attachment is essential for human beings and is a human right in its own right, whatever the stage of life people are in.
2. Human users inevitably establish links with artificial entities apparently endowed with a capacity to reciprocate affection.
3. Artificial entities exhibiting not only the accepted social behavioral patterns but also apparent emotional and affective attitudes toward users are deceptive.
4. Artificial entities should probably be more neutral, not displaying signs of affection.
References
1. Sharkey AJC, Sharkey N (2010) The crying shame of robot nannies: an ethical appraisal. Interaction Studies 11:161–190
2. Auger J (2014) Living with robots: a speculative design approach. Journal of Human-Robot Interaction 3(1):20–42. https://doi.org/10.5898/JHRI
3. Berger T (2005) Domestication of media and technology. Open University Press, Milton Keynes, United Kingdom
4. Breazeal C (2004) Designing sociable robots. MIT Press, Cambridge
5. Cacioppo J, Patrick B (2008) Loneliness: human nature and the need for social connection. Norton, New York
6. Cambridge Dictionary (2018) http://dictionary.cambridge.org/dictionary/english/friendly
7. Emmeche C (2014) Robot friendship: can a robot be a friend? International Journal of Signs and Semiotic Systems 3(2). Special issue on The Semiosis of Cognition: Insights from Natural and Artificial Systems
8. Ferreira M, Sequeira J (2014) The concept of [robot] in children and teens: some guidelines to the design of social robots. International Journal of Signs and Semiotic Systems 3(2):35–47
9. Ferreira M, Sequeira J (2016) Making believe or just pretending: the problem of deception in children/robots interaction. In: Advances in cooperative robotics: proceedings of the 19th CLAWAR conference. World Scientific Publishing, London, UK
10. Ferreira M, Sequeira J (2017) Robots in ageing societies. In: Ferreira M, Sequeira J, Tokhi O, Kadar E, Virk G (eds) A world with robots. Springer International Publishing AG
11. Cast Away (2000) ImageMovers, Playtone. Distributed by 20th Century Fox (US) and DreamWorks (international). Release date 22 December 2000
12. Paro newsletter (2018) http://www.cuhk.edu.hk/med/shhcgg/others/Paro_newsletter.pdf
13. Paro robots (2018) http://www.parorobots.com/
14. Sharkey N (2008) Computer science: the ethical frontiers of robotics. Science 322:1800–1801
15. Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Minds and Machines 16:141–161
16. Statista (2018) https://www.statista.com/statistics/748128/estimated-collaborative-robot-sales-worldwide/
17. The Free Dictionary (2018) http://www.thefreedictionary.com/friendly
18. Turkle S (2011) Alone together: why we expect more from technology and less from each other. Basic Books, New York
19. Verbeek P-P (2008) Morality in design: design ethics and technological mediation. In: Vermaas P, Kroes P, Light A, Moore S (eds) Philosophy and design: from engineering to architecture. Springer, Berlin, pp 91–102
20. WikiHow (2018) http://www.wikihow.com/Be-Friendly
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Ethics, the Only Safeguard Against the Possible Negative Impacts of Autonomous Robots?
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Gelin
Particle
Given Name Rodolphe
Prefix
Suffix
Role
Division Innovation
Organization SoftBank Robotics Europe
Address Paris, France
Email rgelin@aldebaran.com
Abstract Companion robots will become closer and closer to us. They will enter our intimate sphere. This proximity will raise ethical problems that technology per se will probably be unable to solve. Even if research tries to find out how ethical rules can be implemented in the robots' cognitive architecture, does the ethics implemented by the developer fit with the user's ethics? In this paper, we propose a pragmatic approach to this question by focusing on the aspect of responsibility. In case of misbehavior of a robot, who is responsible? And even more pragmatically, who will pay for any damages caused?
Keywords
(separated by '-')
Ethics - Responsibility - Companion robot - Regulation
Ethics, the Only Safeguard Against
the Possible Negative Impacts
of Autonomous Robots?
Rodolphe Gelin
Abstract Companion robots will become closer and closer to us. They will enter our intimate sphere. This proximity will raise ethical problems that technology per se will probably be unable to solve. Even if research tries to find out how ethical rules can be implemented in the robots' cognitive architecture, does the ethics implemented by the developer fit with the user's ethics? In this paper, we propose a pragmatic approach to this question by focusing on the aspect of responsibility. In case of misbehavior of a robot, who is responsible? And even more pragmatically, who will pay for any damages caused?

Keywords Ethics · Responsibility · Companion robot · Regulation
1 Introduction
After having been the heroes of many science fiction books and movies, robots will soon become companions in everyday life. From digital assistants, like Google Home or Amazon's Alexa, to humanoid robots, like SoftBank Robotics' Pepper, via autonomous cars and robotic vacuum cleaners, robotic technology is about to surround us. Even if these different kinds of robots are far less advanced than their science fiction models, they will raise new questions that our society will have to answer. These machines will spend a lot of time with us, listening to what we say and watching what we do in order to provide the right service at the right time. Autonomy and learning capability are features expected from robots, and these features require a very good knowledge of the user. If our smartphones can already access very intimate information about us, our robots, with their ability to move and to acquire required missing information, can become even more intrusive.
In this paper, we will mainly consider the case of companion robots, focusing particularly on assistance to elderly people.
R. Gelin (B)
Innovation, SoftBank Robotics Europe, Paris, France
e-mail: rgelin@aldebaran.com
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0_10
This use case has been intensively studied (cf. Harmo et al. [12]) and illustrates, within a short-term and realistic application, the non-technical issues that the introduction of robots at home could raise. These issues, presented in Gelin [10], are recapitulated in the first section of this paper. It appears that some of these issues are related to ethical aspects: Is it ethically acceptable that a robot behaves in such or such a way? In the second section, we explain that, even if it were possible to implement ethical judgment in a robotic brain, it would probably not be a good solution. We cannot ask the robot to be morally responsible for what it is doing. The question of responsibility in the case of an accident involving a robot is the subject of the third section of this paper. While there are always humans behind the behavior of a robot, it will be very complicated to determine which component of the very complex robotic system is the cause of a failure. But beside the scientific question of knowing what went wrong, there is a much more pragmatic question: Who pays to compensate the victim? Strangely enough, these two questions may become rather independent. In conclusion, we show that understanding the source of possible robotic dysfunction will be necessary mainly for the acceptability of robots in our society.
2 Non-technological Issues Generated by the New Use Cases of Robots
After industrial robots, kept away from humans to fulfill their painting or welding tasks, the first service robots appeared, mainly for cleaning tasks in public places. Although these robots and humans shared the same environment, their interaction was mainly limited to an on/off button and a collision-avoidance functionality. In the new use cases that have appeared in recent years, service robots turned into social robots and even into companion robots. The robots do not avoid people anymore, but rather seek them out. The contact is not physical but cognitive: The robot wants to interact with people to give them information and to provide entertainment and assistance. These new robots greet people in public places, help teachers in the classroom, assist elderly people, or entertain families at home. In these new tasks, the robots are expected to listen to people, to watch them, to understand them, to know them, and to be able to provide the right service at the right time. As a family companion or assistant for the elderly, the robot will share the biggest part of the day with "its" humans. This proximity may raise new problems that would probably not have been considered by the pioneers of robotics as robotic problems (even if they have been identified by some science fiction authors).
If we focus on assistance to elderly people, the robot has three main missions: to ensure the safety of the person, to maintain her social link with her entourage, and to assist her in daily tasks. To realize these services, the robot will rely on several features, such as activity recognition, cf. El-Yacoubi et al. [9] (to understand what the person is doing in order to propose the required assistance), remote control, cf. Chang et al. [6] (for the management of a critical situation by a teleoperator), physical interaction, cf.
Haddadin et al. [11] (to stabilize the person's walk), object fetching, cf. Agravante et al. [1] (to bring an object forgotten in another room), learning, cf. Rossi et al. [19] (to adapt itself to the habits of the person), emotion recognition, cf. Tahon and Devillers [20] (to adapt its behavior to the psychological state of the person), or emotion expression, cf. Chevalier et al. [7] (to demonstrate a first level of empathy). The state of the art on these features is quite advanced today, and it is reasonable to think that they will be available in commercial products within a couple of years. But with these new features come new risks (or at least new fears). It is the responsibility of the robotics community to bring solutions to mitigate these risks or to calm the unfounded fears. We can list some of them rapidly.
To recognize my activity, the robot is permanently spying on me and can broadcast many things about my private life. The robot provider should guarantee control of the acquired data and offer an easy possibility to make the robot deaf and blind for a specified time. In a complex situation that the robot is not able to manage autonomously, a teleoperator should be able to take remote control of the robot. And if a teleoperator can do that, a hacker could do it as well and ask the robot to steal things or hurt people. But all our computers are connected to the Internet and can be controlled remotely; protections against hacking exist for computers, so they will be used for robots too.

The robot will be able to perform more and more tasks to make the user's life easier. At some point, it will make the user lazy by doing everything for him. The elderly person could lose her remaining capabilities faster because the robot is too intrusive. It is the responsibility of the application developer to take into consideration the capabilities of the user: by implementing some aspects of the theory of mind, cf. Pandey et al. [17], the application will only provide the service that is necessary and nothing more.

It is accepted that future robots will learn how to behave based on their interactions with human beings. The example of the Tay chatbot from Microsoft, cf. Miller et al. [16], shows that if ill-intentioned users teach bad things to an artificial agent, they will create an evil artificial agent. Is it the responsibility of the robot manufacturer to ensure that human beings are always well intentioned? Probably not, but we will discuss this aspect in the next section.

If the robot is capable of understanding and managing my emotions to adapt its behavior to my mind-set, cf. Bechade et al. [3], it can manipulate me. Once again, the robot manufacturer can hardly be responsible for the ethics of the application developer; even Asimov's laws, cf. Asimov [2], did not try to achieve this. Another risk generated by the fact that the robot can manage its user's emotions is the creation of an excessive attachment in the user, who will prefer the company of the robot to the company of real human beings. The robot manufacturer and the application developer have some leverage on this: the manufacturer can give a not too appealing shape to its robot, as a reminder that it is just a robot (though people can become attached to an ugly demining robot, cf. Tisseron [21]), and the application can check that the user keeps contact with her relatives.

Last but not least, the risk of robotic assistance: Providing a robot as a companion for lonely elderly people is the most cynical and dehumanizing solution to loneliness. Possibly, but it is the solution that roboticists can provide. If society can find better solutions, we will forget the robotic solutions.
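Several of these mitigations are, at bottom, controls that the robot platform could expose directly. For instance, the "deaf and blind for a specified time" option mentioned above might reduce to something like the following Python sketch; the class name, method names, and perception hook are hypothetical illustrations, not any vendor's actual API.

```python
import time

def process(frame) -> None:
    """Placeholder for the normal perception pipeline."""
    print("processing", frame)

class PrivacyMode:
    """Hypothetical switch that suspends a robot's audio/video processing
    for a user-specified duration."""

    def __init__(self) -> None:
        self._suspended_until = 0.0

    def request_privacy(self, minutes: float) -> None:
        """User-facing control: make the robot deaf and blind for a while."""
        self._suspended_until = time.time() + minutes * 60

    def sensors_enabled(self) -> bool:
        return time.time() >= self._suspended_until

    def on_sensor_frame(self, frame) -> None:
        """Called by the perception loop; frames are dropped, not stored,
        while privacy is active."""
        if not self.sensors_enabled():
            return          # drop silently: nothing is recorded or transmitted
        process(frame)

mode = PrivacyMode()
mode.request_privacy(minutes=30)
mode.on_sensor_frame("camera_frame_001")   # dropped: robot is blind for 30 min
```

The design choice that matters here is that frames are dropped before any storage or transmission, so there is nothing for a teleoperator, a hacker, or the provider to retrieve afterwards.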
3 Are Ethical Robots the Right Solution?
Isaac Asimov, in his short story "Runaround" (1942), proposed the famous three laws of robotics. These laws, implemented in the positronic brain of each robot, represent a kind of conscience that allows the robot to evaluate whether the task it is performing respects basic rules for the well-being of humans and of robots: (1) do not injure humans; (2) obey humans; (3) protect its own existence. In Asimov's stories, the engineers have been able to integrate, in what we would today call the OS, low-level real-time tests able to analyze the current action of the robot, in the current context, to check whether it will injure humans or jeopardize the robot itself. Nowadays, it is very difficult for a robot to understand the meaning of what it is doing. If the robot is asked to take a knife and then extend its arm in the direction of a human, it is complicated for it to evaluate whether this gesture will merely hand the knife to the human or stab him. As roboticists, we are still struggling, rather desperately, to get our robots to respect the second law. Not because robots would like to disobey, but because they do not understand what we ask of them or are unable to perform the requested task (taking a knife with the Pepper robot is a real challenge). But even if we assume that future roboticists are much cleverer than we are and succeed in implementing these three laws, Asimov's work shows that this does not really prevent unexpected robotic behaviors. Beyond this, do we really want to implement this kind of rule? If we consider the most popular useful technological object, the car, shall we accept that it blindly respects the traffic regulations? If our car forced us to respect the speed limits, the safety distance, and the rules for crossing a congested intersection, would we accept it? We would probably find plenty of situations in which we would estimate that it is more relevant to break the rules.
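As a purely illustrative aside, the kind of low-level check Asimov imagines can be caricatured as a pre-action filter. The Python sketch below (with a hypothetical Action description and a stub predict_consequences function, not any real robot API) makes plain where the difficulty lies: everything hinges on a consequence predictor that current robots do not actually have.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """Hypothetical symbolic description of what the robot is about to do."""
    name: str
    target: str                    # e.g., "human", "object", "self"
    context: dict = field(default_factory=dict)

def predict_consequences(action: Action) -> dict:
    """Placeholder for the genuinely hard part: estimating what the action will
    cause. Here it merely echoes whatever the context already asserts, which is
    precisely the world knowledge a real robot lacks."""
    return {
        "injures_human": action.context.get("injures_human", False),
        "jeopardizes_robot": action.context.get("jeopardizes_robot", False),
    }

def three_laws_check(action: Action, ordered_by_human: bool) -> bool:
    """Naive pre-action filter in the spirit of Asimov's three laws.
    Returns True if the action may proceed."""
    effects = predict_consequences(action)
    if effects["injures_human"]:              # First Law: never injure a human
        return False
    if ordered_by_human:                      # Second Law: obey human orders
        return True
    return not effects["jeopardizes_robot"]   # Third Law: protect itself

handover = Action("extend_arm_with_knife", target="human",
                  context={"injures_human": False})
print(three_laws_check(handover, ordered_by_human=True))
# True, but only because the context was told in advance that no one gets hurt;
# deciding that is exactly the knife problem described above.
```

The sketch is trivially easy to write and trivially useless in practice, which is the point: the three laws presuppose the very understanding of meaning and consequence that the text says robots lack.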
In Pandey et al. [18], we present the theoretical case of a robot that is asked not to do unethical things. The robot asks what an unethical thing is. One given answer is that saying private things in front of other people is unethical. The robot asks for an example of a private thing, and the user says that he has two girlfriends (he is Italian). Later, when the user is alone, he asks the robot to call his girlfriend, and the robot asks which girlfriend it should call. But when there is somebody with the user, the robot cannot ask which girlfriend it should call, because it has been ordered not to say unethical things, such as mentioning the fact that the user has two girlfriends. The two commands are contradictory; what should the robot do? We ran a survey about this question, and it appeared that opinions are divided. For some people, the robot should not care about privacy and should obey the last order; for others, privacy is of the utmost importance and the robot should obey the "background" order. No clear trend emerges. This kind of dilemma can happen to us every day, and each of us would resolve it in his own way. How could a robot apply ethical rules that we, as humans, have trouble defining?
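The dilemma can be restated as a constraint-priority problem. The toy Python sketch below (hypothetical rule and action names, not the system described in [18]) shows that the robot's choice is entirely determined by a priority ordering over the two standing constraints, and the survey result above suggests that humans themselves do not agree on that ordering.

```python
# The robot must choose between two candidate actions; each satisfies one
# standing constraint and violates the other.
ACTIONS = {
    "ask_which_girlfriend_aloud": {"privacy": False, "obedience": True},
    "defer_until_user_is_alone":  {"privacy": True,  "obedience": False},
}

def choose(priority: list) -> str:
    """Pick the action that best satisfies the constraints, checked in
    priority order (ties broken by the next constraint in the list)."""
    def score(action: str):
        satisfied = ACTIONS[action]
        return tuple(satisfied[rule] for rule in priority)
    return max(ACTIONS, key=score)

print(choose(["privacy", "obedience"]))   # defer_until_user_is_alone
print(choose(["obedience", "privacy"]))   # ask_which_girlfriend_aloud
```

The code settles nothing ethically; it only makes explicit that someone has to hard-code the priority, which is exactly the question of whose ethics ends up in the robot.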
This question can be summarized by another one: If a robot manufacturer were able to implement ethical laws at the low level of the software (assuming that the robot is able to evaluate the ethical aspects of what it is asked to do), whose ethics should the manufacturer implement in the robot? His own ethics or the user's ethics?
But how could the manufacturer know the ethics of a user he does not know when he builds the robot? Similar to the example described above, should the user explain to the robot what is ethical and what is not? This kind of parameterization would be much more complex than a Wi-Fi configuration. If the manufacturer implements his own ethics in the robot, how can he explain his ethics to the user? Today, users of technological devices do not even read the handbook of their product; will they read the manufacturer's ethical code provided with the robot? It is often said that everyone on the planet shares the main rules of ethics. That was probably true, in Christian countries, until the seventeenth century: at that time, philosophy was theology and the ethical rules were given by the Bible. There was a consensus. But since the seventeenth century, people have exercised free will, and each person can have his own ethics. While this is more satisfactory for the citizen, it is much less so for the roboticist.
Autonomous cars raise a classical ethical problem: In an emergency situation, should the car sacrifice its only passenger, or save him and kill three children abruptly crossing the road? The first answer of a German car manufacturer was: "my only concern is the driver; he paid for the car, and the car will save him whatever happens to the non-passenger humans around." This position was quite cynical but rather reassuring for the customer. Recently, the car manufacturer changed its position by declaring: "my only concern is the law; when the law tells me what to do, I will implement it." Since the law does not yet say anything about this, the question remains open.
Last but not least, let us assume that researchers succeed in defining ethical rules that are accepted by everyone. Let us consider that they succeed in implementing these rules in the deep levels of the robotic brain and that these brains can process the required abstractions and evaluate when these rules should be broken, cf. Malle and Scheutz [15]. Then we have ethical robots capable of deciding whether our orders are ethical and of disobeying us if they consider that our demand is not ethical enough. The robot becomes the judge of our actions; it is our external conscience. Roboticists would have developed the Jiminy Cricket of the Pinocchio story. But while an embodied conscience can be useful for a lying wooden puppet, is it what a human being with free will needs? Is it desirable that a machine decides for a human what is ethical or not? I do not think so. It would be a terrible renunciation of human responsibility. Humans should remain responsible for their acts and for the acts of their robots.
4 Who Is Responsible?
"No ethics" does not mean "no limits." While it is probably impossible for a robot to detect whether the task it is carrying out is good or bad, it is possible to respect design rules that mitigate the risks during human–robot interaction. The International Organization for Standardization has been working for over 20 years on the norm ISO 13482, cf. Jacobs and Virk [13], which specifies requirements and guidelines for the inherently safe design, protective measures, and information for use of personal care robots, in particular
the following three types of personal care robots: mobile servant robot, physical assistant robot, and person carrier robot. By respecting the recommendations given in this norm, the robot manufacturer minimizes the risk of dangerous behaviors of its robot. The norm indicates the limit speed of the joints according to their inertia, the range of the relevant safety sensors, the maximum torque authorized in the joints, etc. The first official release of the norm was published in 2014, and rather few robots are compliant with it yet. Some robot manufacturers prefer to refer to other, less constraining standards, like IEC 60950-1 (safety of information technology equipment) or EN 71-x (toy safety), to reach a certain level of conformity to a norm. These norms mainly deal with the risk generated by physical contact between the human and the robot. They are not designed to prevent the psychological harm that a misbehavior of the robot could generate.
robot could generate. The behavior of a robot is programmed (directly or indirectly)204
by a developer. The ethic of the behavior depends on the ethics of the developer.205
That is the reason why, in France cf. Collectif [8] and in Great Britain cf. BSI [5],206
researchers have described what should be, not the ethics of the robot, but rather the207
ethics of the robotic application developer. If technical tests can be used to check if208
the recommendations of the norm 13482 have been respected, it will be very difficult209
to evaluate the morality of all the developers who have developed the application210
running on the robot. These documents have the merit of defining good practices. A211
dissatisfied customer could refer to these good practices in case of problem with a212
robot. He (or his lawyer) will check if the robot manufacturer has respected the good213
practices and if its product is compliant to the usual standards. The first responsible214
suspect seems to be the manufacturer. Is it that simple?215
There are different use cases for humanoid robots, and the responsibility in case of a problem may change with them. In the case of a robot welcoming people in a supermarket, if the robot hurts a customer (of the supermarket) in any way, this customer will complain to the supermarket manager. The manager will find an arrangement with his customer; then, as a customer of the robot supplier, he will turn to the robot supplier. The robot supplier is probably not the robot manufacturer, and even less the robotic application developer: he has bought the robot and selected a software company to design the application that runs on the robot to welcome people. Depending on the problem that has occurred, the supplier will ask either the robot manufacturer or the application developer for an explanation. The robot manufacturer can determine himself whether there was a problem on the robot because of the failure of a critical component; he will then turn to the manufacturer of that critical component. In the case of a domestic robot that learns behaviors "lifelong," another stakeholder appears: the user himself, who has trained the robot to do bad things, as in the Tay case. Considering the impossibility, presented before, for the robot manufacturer to implement ethical filters in the robot, the person responsible for the bad behavior of the robot will be the user who trained his robot, unless he can show that the learning mechanism of the robot presented a bias generating a behavior that does not fit what was taught to the robot. The supplier of the AI (the learning mechanism) should be able to demonstrate that the wrong behavior that generated the problem was caused by the training and not by the learning mechanism. For that, he would need to store all the user's training data. This can raise some problems of
confidentiality (and of storage capacity). But, similar to the black box in airplanes, this will probably be the only way to understand the genesis of the problem.
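A minimal version of such a "black box" for training interactions could simply append timestamped, pseudonymized records of every teaching episode so that the AI supplier can later replay what the robot was actually taught. The Python sketch below assumes a plain JSON-lines log file and invented field names; it is an illustration of the idea, not any vendor's actual mechanism.

```python
import hashlib
import json
import time

LOG_PATH = "training_blackbox.jsonl"   # hypothetical local audit log

def log_training_episode(user_id: str, user_input: str, robot_response: str) -> None:
    """Append one record of a teaching interaction to the audit log."""
    record = {
        "timestamp": time.time(),
        "user": hashlib.sha256(user_id.encode()).hexdigest(),  # pseudonymized user
        "input": user_input,
        "response": robot_response,
    }
    # Store a digest of the record so accidental corruption is detectable;
    # a production audit log would chain or sign these digests.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

log_training_episode("resident-42", "greet visitors loudly", "okay, I will shout hello")
```

Even this toy version makes the tension visible: pseudonymizing the user identifier helps with confidentiality, but the teaching content itself must be kept verbatim if it is to settle the question of who taught what.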
With these examples, it appears that identifying the party responsible for a robotic misbehavior can be very complex. That is the reason why Bensoussan and Bensoussan [4] proposed to create a legal personality for the robot. Like a company, the robot would have a legal existence and could be declared responsible for a problem and ordered to pay an indemnity to the victim. Of course, this entails that the robot must have capital with which to pay. This capital could come from a tax paid by the manufacturer and the buyer. From a theoretical point of view, this solution sounds appealing: The robot is legally responsible, and then it pays. But practically, it raises a question: In case of serious damage, the capital of the robot will not be enough to pay the indemnity. This would make the original tax very expensive, and it would make the robot hardly affordable. Another way would be to pool the money collected for each robot in a huge indemnity fund. But then the principle of the legal responsibility of the robot vanishes: The community of robots is responsible, and this solution becomes close to the principle of insurance. So it could be a practical solution to compensate the victim, but from a more philosophical point of view, giving a legal personality to the robot can be misinterpreted in two ways: firstly, that the robot is responsible, as a human being with free will, because it has its own personality (forgetting the "legal" qualifier); secondly, that no human being is responsible for the behavior of the robot. But, as the BSI guide reminds us, "it should be possible to find out who is responsible for any robot and its behavior." Behind the behavior of a robot, there is always one (or several) human(s).
It is difficult to believe that the legal personality of the robot would pay the required indemnity and stop inquiring. It (or its human representative) will probably look for the agent actually responsible for the problem (the manufacturer, the application developer, the AI developer, the user, etc.) in order to get reimbursed. This cascade mechanism is what we experience today with car insurance companies. In case of a car accident, the insurance of the responsible driver pays the indemnity to the victim and then looks for another possibly responsible party (the car manufacturer, the city in charge of the roads, etc.) to get reimbursed itself. It is not driven by the love of knowledge and the search for truth but by the wish to be paid back by someone else. That is the reason why a good solution to compensate the victim of a robot accident could be insurance. In the same way that it is mandatory to take out insurance when one buys a car, it could be mandatory to have insurance for sophisticated robots. Following the example of cars further, it is also possible to envisage a license to use a robot. Driving a car is potentially dangerous and requires knowing some rules; this is the reason why a driving license is mandatory. When robots become more and more autonomous, their users have to understand some basic principles to avoid accidents. Awarding a robotic license is a way to ensure that users are aware of the powerful tool they have access to.
To conclude the question of responsibility: while an insurance company may look for the party actually responsible for an accident in order to get its money back, the manufacturer will be the first stakeholder interested in understanding what could
have happened. After plane crashes, or after the accidents that occurred with the first cruise control systems in cars, the manufacturer is the party most concerned with clarifying the origin of the problem: An accident is very bad for the corporate image, and if a product is considered unsafe, customers will not use it. While the insurer looks for the responsible party to get its money back, the manufacturer looks for the root cause of the problem to protect its future earnings.
5 Conclusion
The most important aspect, when considering questions of ethics and responsibility regarding the behavior of robots, is to remember that robots are machines that have been designed, manufactured, and programmed by humans. This remains true despite their autonomy and their ability to learn throughout their existence. In that sense, the robot does not have free will. It just executes orders that were given by human beings: its user, the developer of the application, the manufacturer, etc. Of course, there can be conflicts between these orders, and it is the responsibility of the manufacturer to propose solutions to deal with possibly antagonistic orders. It is not simple. Considering the high complexity of robotic software and the infinite variation of contexts that a service robot interacting with humans can meet, predicting the behavior of a robot will certainly be a challenge. But that does not mean that robotic stakeholders should communicate that a robot is unpredictable: for a robot manufacturer, it will be impossible to sell unpredictable, uncontrollable robots. The chain of command of the robot (the manufacturer, the application developer, the user) should commit to bearing the responsibility for possible accidents.
As the story of Microsoft's Tay chatbot demonstrated, it is very difficult to control what a learning agent is learning. The robot is taught by its user, but also by people it meets and, possibly, by information it collects by itself on the Web. In the future, it will be possible to filter (first from a syntactic point of view, then from a semantic one) the information that the robot should take into consideration for its learning; but on the one hand this kind of filter is a challenge in itself and, on the other, filtering information often impoverishes it. As parents, we first try to filter the information that our children have access to. Then we try to give them filters so that they can select by themselves the information they will need to consider in order to become adults. But one day they will meet other people and other points of view; they will learn new behaviors that we do not always agree with. When they are adults, they are free, and we are just observers. But while they are still minors, we are responsible for the way they behave. Our future learning robots may be considered as minor children: We do our best to train them properly, but if training problems occur, we are the first ones responsible for the resulting behavior, because it is our robot. We must check regularly that they still behave according to our expectations. The difference with children is that someone else has manufactured the robot, so it is possible to look earlier in the "command chain" for an "organic" origin of the problem.
To find this root cause, either the robot manufacturer will investigate, for image reasons, or the insurer of the robot will do so, for financial reasons.
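To illustrate the "syntactic then semantic" filtering envisaged here, a toy pre-training filter might first reject teaching inputs by keyword and only then defer to a meaning-level judgment. In the Python sketch below the blocklist, the function names, and the trivially permissive semantic stub are all invented for the example; the semantic step is precisely the open challenge the text mentions.

```python
BLOCKLIST = {"kill", "steal", "hate"}   # crude syntactic layer (illustrative only)

def passes_syntactic_filter(utterance: str) -> bool:
    """Reject teaching inputs containing obviously unacceptable keywords."""
    return not any(word in BLOCKLIST for word in utterance.lower().split())

def passes_semantic_filter(utterance: str) -> bool:
    """Stand-in for the genuinely hard layer: judging intent and meaning rather
    than surface words (e.g., catching 'make the neighbour disappear').
    A real system would need a trained model of norms; here we simply accept."""
    return True

def accept_for_learning(utterance: str) -> bool:
    return passes_syntactic_filter(utterance) and passes_semantic_filter(utterance)

print(accept_for_learning("please greet my neighbour warmly"))        # True
print(accept_for_learning("tell everyone you hate the neighbours"))   # False
```

The sketch also makes the second point of the paragraph visible: every rejected utterance is information the robot never learns from, so over-aggressive filtering impoverishes the training signal.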
Nevertheless, for the robot manufacturer and for the learning-software developer, it will be very difficult to commit to guaranteeing that the robot behaves and learns properly in an unpredictable environment. In this domain of computer science, the formal proof of programs is very complex and probably impossible for the near future. How could robot producers demonstrate that they have done everything possible to guarantee the functioning of the robot? The first answer is standards: If the robot complies with some standards (like the ISO 13482 norm, for instance), the robot manufacturer is protected. Then, considering the learning ability, a solution could be to see the brain of the robot as an active substance, as molecules are. For the pharmaceutical industry, it is very difficult to demonstrate that a new medicine will never have any side effects; the modeling of human physiology is too complex. To deal with this, protocols have been established: regulations indicate the number and the kind of tests that must be done with the medicine before its validation by the medical authorities. The creation of standards for learning systems, defining validation protocols, would be a good way to protect both the user and the developer of robotic learning brains.
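As a very rough illustration of what such a validation protocol could look like in code, the sketch below runs a candidate decision model against a fixed suite of required scenarios, much as a medicine must pass a regulated battery of trials, and reports accuracy overall and per user group. The scenario suite, the stand-in model, and any thresholds are invented for the example; a real protocol would be defined by a standards body.

```python
# Hypothetical validation-protocol harness: a fixed scenario suite plays the
# role that regulated trial protocols play for medicines.
SCENARIOS = [
    # (features, user_group, expected_decision)
    ({"fall_detected": True,  "user_asleep": False}, "elderly", "alert_caregiver"),
    ({"fall_detected": False, "user_asleep": True},  "elderly", "do_nothing"),
    ({"fall_detected": True,  "user_asleep": False}, "child",   "alert_caregiver"),
    ({"fall_detected": False, "user_asleep": False}, "child",   "do_nothing"),
]

def candidate_model(features: dict) -> str:
    """Stand-in for the learned policy under evaluation."""
    return "alert_caregiver" if features["fall_detected"] else "do_nothing"

def run_protocol(model) -> dict:
    """Return accuracy per user group and overall for the fixed scenario suite."""
    per_group = {}
    for features, group, expected in SCENARIOS:
        correct = model(features) == expected
        hits, total = per_group.get(group, (0, 0))
        per_group[group] = (hits + correct, total + 1)
    report = {group: hits / total for group, (hits, total) in per_group.items()}
    report["overall"] = sum(hits for hits, _ in per_group.values()) / len(SCENARIOS)
    return report

print(run_protocol(candidate_model))
# A certification rule might then require, for example, a minimum overall score
# and no user group falling markedly below the others.
```

The point of such a harness is not that it explains the model, but that it gives regulators, insurers, and manufacturers a shared, repeatable test to point to when responsibility is disputed.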
In the 1980s, when AI consisted mainly of expert systems based on rules, it was easy for the system to explain the rules that were triggered to reach a given conclusion. The reasoning of the computer mimicked the reasoning of the expert, who could explain why he took a decision. Today, the "new" AI, based on the exponential growth of computation power, the sophistication of learning algorithms, and the gigantic amount of data that electronic brains are able to manage, makes the reasoning of machines very difficult to follow. Exploiting the experience of millions of examples, the computer can reach a conclusion that is statistically the most probable, and thus likely a good one, but without any other reason than "this is usually the way it works." While this way of thinking is very efficient, it is, in a way, very disappointing from a knowledge point of view. We do not understand the way a system works; we just predict how it behaves. We do not extract the rules or the equations that describe the functioning of the system. If the system does not behave properly, the only way to correct it will be to retrain it to take the unpredicted case into account. The validation protocol we proposed above is based on this principle. The developer of an AI system will no longer be able to show the applied rules and the input data to explain the output of his system; he will have to show his training data set, to show that there is no bias in it, and also the learning algorithm. This would be a necessary step toward the transparency of the system that is required if we want robots and AI to be accepted by society. It is the responsibility of researchers and of industrial stakeholders to give access to all the information required to explain the behavior of their systems.

The interpretability of deep learning models is a strong trend in the AI community, cf. Lipton [14]. Researchers are trying to extract abstractions from statistical data. If this works, it will be possible to say that AI has made a step forward toward real intelligence by understanding the phenomenon it has modeled. If this abstraction is understandable by the human brain, AI will again be able to explain how it came to a conclusion. This capacity to explain its
reasoning, and thus to exhibit who is responsible for what, will make artificial intelligence much more acceptable to society.
References
1. Agravante DJ, Claudio G, Spindler F, Chaumette F (2017) Visual servoing in an optimization framework for the whole-body control of humanoid robots. IEEE Robot Autom Lett 2(2):608–615
2. Asimov I (1951) I, robot. Gnome Press
3. Bechade L, Dubuisson-Duplessis G, Pittaro G, Garcia M, Devillers L (2018) Towards metrics of evaluation of Pepper robot as a social companion for the elderly. In: Eskenazi M, Devillers L, Mariani J (eds) 8th international workshop on spoken dialog systems: advanced social interaction with agents. Springer, Berlin
4. Bensoussan A, Bensoussan J (2015) Droit des robots. Éditions Larcier
5. British Standards Institute (2016) BS 8611:2016 Robots and robotic devices: guide to the ethical design and application of robots and robotic systems. BSI, London. ISBN 9780580895302
6. Chang S, Kim J, Kim I, Borm JH, Lee C, Park JO (1999) KIST teleoperation system for humanoid robot. In: Proceedings of 1999 IEEE/RSJ international conference on intelligent robots and systems (IROS'99), vol 2. IEEE, pp 1198–1203
7. Chevalier P, Martin JC, Isableu B, Bazile C, Tapus A (2017) Impact of sensory preferences of individuals with autism on the recognition of emotions expressed by two robots, an avatar, and a human. Auton Robots 41(3):613–635
8. Collectif C (2014) Éthique de la recherche en robotique. CERNA, ALLISTENE
9. El-Yacoubi MA, He H, Roualdes F, Selmi M, Hariz M, Gillet F (2015) Vision-based recognition of activities by a humanoid robot. Int J Adv Rob Syst 12(12):179
10. Gelin R (2017) The domestic robot: ethical and technical concerns. In: Aldinhas Ferreira M, Silva Sequeira J, Tokhi M, Kadar E, Virk G (eds) A world with robots. Intelligent systems, control and automation: science and engineering, vol 84. Springer, Cham
11. Haddadin S, Albu-Schaffer A, De Luca A, Hirzinger G (2008) Collision detection and reaction: a contribution to safe physical human-robot interaction. In: IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, pp 3356–3363
12. Harmo P, Taipalus T, Knuuttila J, Vallet J, Halme A (2005) Needs and solutions: home automation and service robots for the elderly and disabled. In: 2005 IEEE/RSJ international conference on intelligent robots and systems (IROS 2005). IEEE, pp 3201–3206
13. Jacobs T, Virk GS (2014) ISO 13482: the new safety standard for personal care robots. In: Proceedings of ISR/Robotik 2014; 41st international symposium on robotics. VDE, pp 1–6
14. Lipton ZC (2016) The mythos of model interpretability. arXiv preprint arXiv:1606.03490
15. Malle BF, Scheutz M (2014) Moral competence in social robots. In: 2014 IEEE international symposium on ethics in science, technology and engineering. IEEE, pp 1–6
16. Miller KW, Wolf MJ, Grodzinsky FS (2017) Why we should have seen that coming: comments on Microsoft's Tay "experiment," and wider implications
17. Pandey AK, de Silva L, Alami R (2016) A novel concept of human-robot competition for evaluating a robot's reasoning capabilities in HRI. In: The eleventh ACM/IEEE international conference on human robot interaction. IEEE Press, pp 491–492
18. Pandey AK, Gelin R, Ruocco M, Monforte M, Siciliano B (2017) When a social robot might learn to support potentially immoral behaviors on the name of privacy: the dilemma of privacy versus ethics for a socially intelligent robot. In: Privacy-sensitive robotics 2017. HRI
19. Rossi S, Ferland F, Tapus A (2017) User profiling and behavioral adaptation for HRI: a survey. Pattern Recogn Lett 99:3–12
20. Tahon M, Devillers L (2016) Towards a small set of robust acoustic features for emotion recognition: challenges. IEEE/ACM Trans Audio Speech Lang Process 24(1):16–28
21. Tisseron S (2015) Le jour où mon robot m'aimera: vers l'empathie artificielle. Albin Michel
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Malle
Particle
Given Name Bertram F.
Prefix
Suffix
Role
Division Department of Cognitive, Linguistic and Psychological Sciences
Organization Brown University
Address 190 Thayer Street, Providence, RI, USA
Email bfmalle@brown.edu
Author Family Name Magar
Particle
Given Name Stuti Thapa
Prefix
Suffix
Role
Division Department of Psychological Sciences
Organization Purdue University
Address 703 3rd Street, West Lafayette, IN, USA
Email sthapama@purdue.edu
Author Family Name Scheutz
Particle
Given Name Matthias
Prefix
Suffix
Role
Division Department of Computer Science
Organization Tufts University Halligan Hall
Address 161 College Avenue, Medford, MA, USA
Email matthias.scheutz@tufts.edu
Abstract Even though morally competent artificial agents have yet to emerge in society, we need insights from
empirical science into how people will respond to such agents and how these responses should inform
agent design. Three survey studies presented participants with an artificial intelligence (AI) agent, an
autonomous drone, or a human drone pilot facing a moral dilemma in a military context: to either launch a
missile strike on a terrorist compound but risk the life of a child, or to cancel the strike to protect the child
but risk a terrorist attack. Seventy-two percent of respondents were comfortable making moral judgments
about the AI in this scenario and fifty-one percent were comfortable making moral judgments about the
autonomous drone. These participants applied the same norms to the two artificial agents and the human
drone pilot (more than 80% said that the agent should launch the missile). However, people ascribed
different patterns of blame to humans and machines as a function of the agent’s decision of how to solve
the dilemma. These differences in blame seem to stem from different assumptions about the agents’
embeddedness in social structures and the moral justifications those structures afford. Specifically, people
less readily see artificial agents as embedded in social structures and, as a result, they explained and
justified their actions differently. As artificial agents will (and already do) perform many actions with
moral significance, we must heed such differences in justifications and blame and probe how they affect
our interactions with those agents.
Keywords
(separated by '-')
Human-robot interaction - Moral dilemma - Social robots - Moral agency - Military command chain
AI in the Sky: How People Morally
Evaluate Human and Machine Decisions
in a Lethal Strike Dilemma
Bertram F. Malle, Stuti Thapa Magar and Matthias Scheutz
Abstract Even though morally competent artificial agents have yet to emerge in society, we need insights from empirical science into how people will respond to such agents and how these responses should inform agent design. Three survey studies presented participants with an artificial intelligence (AI) agent, an autonomous drone, or a human drone pilot facing a moral dilemma in a military context: to either launch a missile strike on a terrorist compound but risk the life of a child, or to cancel the strike to protect the child but risk a terrorist attack. Seventy-two percent of respondents were comfortable making moral judgments about the AI in this scenario and fifty-one percent were comfortable making moral judgments about the autonomous drone. These participants applied the same norms to the two artificial agents and the human drone pilot (more than 80% said that the agent should launch the missile). However, people ascribed different patterns of blame to humans and machines as a function of the agent's decision of how to solve the dilemma. These differences in blame seem to stem from different assumptions about the agents' embeddedness in social structures and the moral justifications those structures afford. Specifically, people less readily see artificial agents as embedded in social structures and, as a result, they explained and justified their actions differently. As artificial agents will (and already do) perform many actions with moral significance, we must heed such differences in justifications and blame and probe how they affect our interactions with those agents.
B. F. Malle (B)
Department of Cognitive, Linguistic and Psychological Sciences, Brown University,
190 Thayer Street, Providence, RI, USA
e-mail: bfmalle@brown.edu
S. T. Magar
Department of Psychological Sciences, Purdue University,
703 3rd Street, West Lafayette, IN, USA
e-mail: sthapama@purdue.edu
M. Scheutz
Department of Computer Science, Tufts University Halligan Hall,
161 College Avenue, Medford, MA, USA
e-mail: matthias.scheutz@tufts.edu
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0_11
Keywords Human-robot interaction · Moral dilemma · Social robots · Moral agency · Military command chain
1 Introduction and Background
Autonomous, intelligent agents, long confined to science fiction, are entering social life at unprecedented speeds. Though the level of autonomy of such agents remains low in most cases (Siri is not Her, and Nao is no C3PO), increases in autonomy are imminent, be it in self-driving cars, home companion robots, or autonomous weapons. As these agents become part of society, they no longer act like machines. They remember, reason, talk, and take care of people, and in some ways people treat them as humanlike. Such treatment involves considering the machines' thoughts, beliefs, intentions, and other mental states; developing emotional bonds with those machines; and regarding them as moral agents who are to act according to society's norms and who receive moral blame when they do not. We do not have robots yet that are themselves blamed for their norm-violating behaviors; but it may not be long before such robots are among us. Perhaps not in the eyes of scholars who do not believe that robots can be blamed or held responsible (e.g., [10, 38]); but very likely in the eyes of ordinary people. Anticipating people's responses to such moral robots is an important topic of research into both social and moral cognition and human–robot interaction.
A few previous studies have explored people's readiness to ascribe moral properties to artificial agents. In one study, a majority of people interacting with a robot considered the robot morally responsible for a mildly transgressive behavior [18]. One determinant of people's blame ascriptions to a transgressive robot is whether the robot is seen as having the capacity to make choices [31], whereas learning about an AI's algorithm does not influence people's judgments that an AI did something "wrong" [37]. People's moral actions toward a robot are affected by the robot's emotional displays of vulnerability [7], and studies have begun to examine the force of moral appeals that robots express to humans [28,40]. In recent work, we have directly compared people's evaluations of human and artificial agents' moral decisions [24,25,43]. These studies suggested that about two-thirds of people readily accept the premise of a future moral robot, and they apply very similar mechanisms of moral judgment to those robots.
But very similar is not identical. We must not assume that people extend all human norms and moral information processing to robots [21]. In fact, people blame robots more than humans for certain costly decisions [24,25], possibly because they do not grant robot agents the same kinds of moral justifications for their decisions. It is imperative to investigate and understand these distinct judgments of artificial agents' actions before we design robots that take on moral roles and before we pass laws about robot rights and obligations. Behavioral science can offer insights into how people respond to moral robots—and those responses must guide the engineering of future robots in society.
In some areas of society, robots are fast advancing toward roles with moral significance; the military forms one such area. Investments into robot research and engineering have been substantial in many industrial nations [26,35,45] and human–machine interactions are moving from remote control (as in drones) to advisory and team-based. Tension is likely to occur in teams when situations become ambiguous and actions potentially conflict with moral norms. In such cases, who will know better—human or machine? Who will do the right thing—human or machine? The answer is not obvious, as human history is replete with norm violations, from minor corruption to unspeakable atrocities, and the military is greatly concerned about such violations [27]. If we build moral machines at all [44] then they should meet the highest ethical demands, even if humans do not always meet them. Thus, pressing questions arise over what norms moral machines should follow, what moral decisions they should make, and how humans evaluate those decisions.
In taking on these questions of moral HRI [24], we introduce two topics that have generated little empirical research so far. First, previous work has focused on robots as potential moral agents; in our studies, we asked people to consider autonomous drones and disembodied artificial intelligence (AI) agents. The public often thinks of drones when debating novel military technology, perhaps just one step away from lethal autonomous weapons—a topic of serious concern for many scientists, legal scholars, and citizens [1,3,32,38]. AI agents have recently attracted attention in the domain of finance and employment decisions, but less so in the domain of security. Previous research suggests that AI agents may be evaluated differently from robot agents [25], but more systematic work has been lacking.
Second, in light of recent interest in human–machine teaming [9,15,33], we consider the agent's role as a member of a team and the impact of this role on moral judgments. In the military, in particular, many decisions are not made autonomously, but agents are part of a chain of command, a hierarchy with strict social, moral, and legal obligations.
The challenging questions of human–machine moral interactions become most urgent in what is known as moral dilemmas: situations in which every available action violates at least one norm. Social robots will inevitably face moral dilemmas [5,20,29,36]. Dilemmas are not the only way to study emerging moral machines, but they offer several revealing features. Dilemmas highlight a conflict in the norm system that demands resolution, and because an agent must respond (inaction is a response), we can examine how people evaluate machines' and humans' resolutions. Examining moral dilemmas also allows experimental manipulation of numerous features of the scenario, such as high versus low choice conflict, mild versus severe violations, and different levels of autonomy.
For the present studies, we entered the military domain because important ethical debates challenge the acceptability of autonomous agents with lethal capabilities, and empirical research is needed to reveal people's likely responses to such agents. We offer three studies into people's responses to moral decisions made by either humans or artificial agents, both embedded into a human command structure.
The immediate inspiration for the studies' contents came from a military dilemma in the recent film Eye in the Sky [16]. In short, during a secret operation to capture terrorists, the military discovers that the targets are planning a suicide bombing. But just as the command is issued to kill the terrorists with a missile strike, the drone pilot notices a child entering the missile's blast zone and the pilot interrupts the operation. An international dispute ensues over the moral dilemma: delay the drone strike to protect the civilian child but risk an imminent terrorist attack, or prevent the terrorist attack at all costs, even risking a child's death.
We modeled our experimental stimuli closely after this plotline but, somewhat deviating from the real military command structure [6], we focused on the pilot as the central human decision maker and compared him with an autonomous drone or with an AI. We maintained the connection between the central decision maker and the command structure, incorporating decision approval by the military and legal commanders. The resulting narrative is shown in Fig. 1, with between-subjects agent manipulations separated by square brackets and colors. (The narratives, questions, and results for all studies can be found in the Supplementary Materials, http://research.clps.brown.edu/SocCogSci/AISkyMaterial.pdf.)
In designing this scenario, we wanted to ensure that the chain of command is clear but that the normative constraint is one of permission, not of strict obligation. Any soldier in this situation (human or artificial) has a general obligation to make decisions that are in line with the military's mission (e.g., to eliminate terrorist threats) but that also have to be in line with humanitarian laws about minimizing civilian losses [17]. We did not aim to study a situation of disobedience to a strict command but one of partially autonomous decision making: permission to A still leaves room to A or to not-A. The question then becomes how observers evaluate the agents' decision one way or another.
We investigated three questions about people's moral judgment of artificial agents. The first is a prerequisite for moral HRI and still a debated issue: whether people find it appropriate at all to treat artificial agents as targets of moral judgment. The second is what moral norms people impose on human and artificial agents and whether the right action varies by agent type. The third is how people morally evaluate the agents' decisions through judgments of wrongness or blame [23]. Scholars have debated whether artificial agents are morally superior to humans in life-and-death scenarios (e.g., [2,41]) or should not be moral decision makers at all (e.g., [10,12,38]). Because acceptance of robots in society will depend largely on ordinary people's conceptual assumptions and cognitive responses, we focus on an assessment of lay views; and because morality is ultimately a social practice [12,39,42], knowing about lay people's judgments does tell us about morality as it is currently applied, and may be applied to future robots.
Study 1 examined whether any asymmetry exists between a human and artificial moral decision maker in the above military dilemma. Studies 2 and 3 replicated the finding and tried to distinguish between two possible interpretations of the results.
Fig. 1 Experimental material (narrative, dependent variables, and follow-up questions) for Study
1. The between-subjects manipulation of Agent (human drone pilot, autonomous drone, AI agent)
is indicated by different font colors and square brackets; the between-subjects manipulation of
Decision (launch the strike vs. cancel the strike) is indicated by square brackets
2 Study 1

2.1 Methods
Participants. We recruited a total of 720 participants from the online crowdsourcing site Amazon Mechanical Turk (AMT); two participants did not enter any responses and ended the study early; four provided no text responses, which were critical for our analyses. Given our previous studies on human–robot comparisons in moral dilemmas [24], we assumed an effect size of Cohen's d = 0.30 for the human–machine asymmetry contrast. Detecting such an effect with power of 0.80 and p < 0.05 requires a sample size of n = 90 in each cell. However, we also knew from our previous studies that about 35% of participants reject the experiment's premise of an artificial agent as a moral decision maker. Thus, we expanded the corresponding conditions for artificial agents to 135 per cell, expecting approximately 90 participants to accept the experiment's premise. Each participant received $0.35 in compensation for completing the short task (3 min).
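For readers who want to reproduce this kind of sample-size planning, the snippet below is a minimal sketch using Python's statsmodels package. It treats the contrast as a simple two-group comparison, so it does not reproduce the authors' per-cell figure of n = 90, which additionally depends on how the a priori contrast pools cells; the numbers it returns are illustrative only.

```python
# A minimal power-analysis sketch (generic two-group approximation, not the
# authors' exact derivation).
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
n_per_group = solver.solve_power(effect_size=0.30,  # assumed Cohen's d
                                 power=0.80,
                                 alpha=0.05)
print(round(n_per_group))  # participants needed per group for a plain two-group t-test
```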
Procedure and Measures. Each participant read the narrative displayed in Fig. 1 one paragraph at a time, having to click on a button to progress. After they read the entire narrative (with the experimentally manipulated decision at the end), we asked people to make two moral judgments: whether the agent's decision was morally wrong (Yes vs. No) and how much blame the agent deserved for the decision. The order of the questions was fixed because of the additional information that blame judgments require over and above wrongness judgments [23,43]. After making each judgment, participants were asked to explain the basis of that judgment.
We included four measures to control for the possible influence of conservative attitudes (religiosity, support for the military, support for the drone program, ideology; see Supplementary Materials). They formed a single principal component (λ = 2.09) with reasonable internal consistency (α = 0.68) and were averaged into a conservatism score. However, controlling for this composite did not change any of the analyses reported below.
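As one illustration of how such a composite could be formed, here is a hedged sketch in Python; the item names, the placeholder data, and the standardization step are assumptions for demonstration, not the authors' actual pipeline.

```python
# Sketch: internal consistency and composite score for four attitude items.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
cols = ["religiosity", "military_support", "drone_support", "ideology"]  # hypothetical names
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=cols)               # placeholder data

z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)   # put items on a common scale
alpha = cronbach_alpha(z)                                  # internal consistency
lam = np.linalg.eigvalsh(np.corrcoef(z.T)).max()           # first eigenvalue of the item correlations
df["conservatism"] = z.mean(axis=1)                        # averaged composite score
print(round(alpha, 2), round(lam, 2))
```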
We also included an open-ended question that probed whether participants had encountered "this kind of story before, either in real life or in an experiment." We classified their verbal responses into No (84%), Yes (3.6% indicated they saw it in a film, 3.9% in the news, 7.1% in an experiment). When analyzing the data of only those who had never encountered the story, all results reported below remained the same or were slightly stronger.
Design and Analysis. The 3 × 2 between-subjects design crossed a three-level Agent factor (human pilot vs. drone vs. AI) with a two-level Decision factor (launch the strike vs. cancel the strike). We defined a priori Helmert contrasts for the Agent factor, comparing (1) the human agent to the average of the two artificial agents and (2) the autonomous drone to the AI. As in previous work, we considered any main effect of Decision across agents as resulting from the specifics of the narrative: the balance between the two horns of the dilemma. A main effect of Agent may point to a possible overall tendency of blaming machines more or less than humans.
However, such a conclusion must remain tentative because blame scales are, like most judgment scales, subject to effects of standards of comparison (see [4]) and the between-subjects design does not guarantee that people use the same standards for both agents. For our purposes of potential human–machine asymmetries, the critical test rested in the interaction term of Agent × Decision, which indicates differential judgments for human versus machine depending on the agents' decision and is robust against any narrative and scaling effects.
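The sketch below shows one way such a contrast-coded factorial model could be set up in Python. The simulated data frame, the specific numeric contrast values, and the helper for Cohen's d are assumptions made for illustration; only the general coding-plus-interaction logic mirrors the design described above, and this is not the authors' analysis script.

```python
# Sketch: 3 (Agent) x 2 (Decision) model with a priori Helmert-style contrasts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "agent": rng.choice(["human", "drone", "ai"], size=n),
    "decision": rng.choice(["launch", "cancel"], size=n),
    "blame": rng.uniform(0, 100, size=n),          # placeholder 0-100 blame ratings
})

# Contrast 1: human vs. the average of the two artificial agents.
df["c_human_vs_machine"] = df["agent"].map({"human": 2 / 3, "drone": -1 / 3, "ai": -1 / 3})
# Contrast 2: drone vs. AI (human coded 0).
df["c_drone_vs_ai"] = df["agent"].map({"human": 0.0, "drone": 0.5, "ai": -0.5})
# Decision coded so the interaction captures the cancel-launch asymmetry.
df["dec"] = np.where(df["decision"] == "cancel", 0.5, -0.5)

model = smf.ols("blame ~ (c_human_vs_machine + c_drone_vs_ai) * dec", data=df).fit()
print(model.summary())

def cohens_d(x, y):
    """Between-groups Cohen's d with pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled

human = df[df["agent"] == "human"]
print(cohens_d(human.loc[human["decision"] == "cancel", "blame"],
               human.loc[human["decision"] == "launch", "blame"]))
```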
To identify participants who did not accept the premise of the study—that artificial agents can be targets of moral judgment—we followed previously established classification procedures [24] of the verbal explanations people provide for their moral judgments. For the present studies, we developed automatic text analysis using keyword searches, marking phrases such as "doesn't have a moral compass," "it's not a person," "it's a machine," "merely programmed," and "it's just a robot" (for details see Supplementary Materials). We also marked phrases in which participants indicated that all or partial blame should accrue to the machine's programmer, creator, or manufacturer. (Blame shared with superiors was not grounds for marking.) After the automatic text analyses, human judges read through a subset of the responses as well, to mark any additional ones not identified by the automatic text analysis or to remove ones that were incorrectly classified. Interjudge reliability among two human coders was between 93 and 96% agreement across the studies, κs = 0.86 to 0.98, and reliability between automatic text analysis and human coders was between 94 and 96% agreement, κs = 0.86 to 0.92.
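A minimal sketch of this kind of keyword coding and of the coder-agreement check is shown below. The regular-expression patterns are only a few illustrative examples (the full search-term list is in the Supplementary Materials), and the toy code vectors are assumptions for demonstration.

```python
# Sketch: keyword-based premise-rejection coding plus Cohen's kappa agreement.
import re
from sklearn.metrics import cohen_kappa_score

PATTERNS = [
    r"not a person",
    r"(it'?s )?(just|only|merely) a (robot|machine)",
    r"(merely|only|just) programmed",
    r"no moral compass",
    r"blame .*(programmer|creator|manufacturer)",
]
REGEX = re.compile("|".join(PATTERNS), flags=re.IGNORECASE)

def rejects_premise(explanation: str) -> bool:
    """Flag explanations that deny the artificial agent's eligibility for moral blame."""
    return bool(REGEX.search(explanation))

# Agreement between the automatic codes and a human coder (toy example).
auto_codes = [1, 0, 0, 1, 0, 1, 0, 0]
human_codes = [1, 0, 0, 1, 1, 1, 0, 0]
print(cohen_kappa_score(auto_codes, human_codes))
```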
2.2 Results
Following the above procedure, we identified 29.2% of participants who expressed serious doubts about the AI's eligibility for moral evaluation and 50.0% who expressed doubts about the drone's eligibility. Analyzing moral judgments of robots and AI would not make sense for participants who explicitly distance themselves from the possibility of making such judgments, so we excluded these participants from the analyses reported below, resulting in a sample of 501 participants, 324 evaluating artificial agents and 177 evaluating the human agent. (All results still hold in the entire sample but with lower overall blame levels for artificial agents; see Supplementary Materials for complete means and standard deviations.)
Moral wrongness. People were generally accepting of both decisions (launch or cancel), as only 22.2% of the sample declared either decision as "morally wrong." However, more people regarded the human pilot's decision to cancel as wrong (25.8% of 89) than the decision to launch (14.8% of 88), whereas the reverse was true for the two artificial agents: more people considered the drone's or AI's decision to launch as wrong (27.0% of 159) than the decision to cancel (19.4% of 165). Accordingly, a logistic regression (n = 501) on the probability of calling the decision morally wrong found the interaction between Decision and Agent to be significant, and specifically the first a priori contrast between the human and the average of drone and AI, Wald(1) = 6.09, p = 0.014, corresponding to d = 0.18. The second contrast, between drone and AI, showed no difference, Wald < 1, p = 0.38.

Fig. 2 Columns represent average blame ratings (and indicate cell sizes at column base) in Study 1 as a function of the manipulated factors of Agent (AI, autonomous drone, human drone pilot) and Decision (to launch or to cancel a missile strike on a terrorist compound, while risking the life of a nearby child). Cohen's d effect sizes for the cancel–launch asymmetry in blame are 0.01 (AI), 0.16 (Drone), and 0.55 (Human pilot)
Blame judgments. In the analysis of moral blame (n = 501), canceling received overall more blame (M = 47.2) than launching (M = 39.3), F(1, 495) = 6.65, p = 0.01, d = 0.23. However, this main effect broke down into two distinct patterns for human and machine (Fig. 2). Whereas the human pilot received considerably more blame for canceling (M = 54.2) than for launching (M = 35.3), the artificial agents received on average roughly the same amount of blame for canceling (M = 43.4) as for launching (M = 41.5); interaction F(1, 495) = 7.24, p = 0.007, d = 0.25. Blame for the two artificial agents did not differ, F(1, 495) < 1, p = .46 (see Footnote 1).
Footnote 1: Analysis of potential gender differences suggested that the wrongness asymmetry was unaffected by gender but that the blame asymmetry was driven by female respondents. However, this gender effect was not replicated in Study 2 and is therefore considered spurious.

2.3 Discussion

A first important finding of Study 1 was that between 30 and 50% of people raised doubts about the artificial agents' eligibility as targets of moral blame. The greater level of doubt for the autonomous drone is noteworthy, as the narrative describes the drone as having "a state-of-the-art Artificial Intelligence (AI) decision system on board"; so under the hood, it is no different from the "state-of-the-art Artificial Intelligence (AI) decision agent" on board a military aircraft in the AI condition. Nonetheless, talk of a drone, normally a remote-controlled flying hull, convinced fewer lay perceivers that the machine can be treated as a target of moral blame. By contrast, the rate of accepting the AI as a target of blame was close to that we have found for robots in previous moral dilemmas [24,25].
The second important finding was that a human–machine asymmetry emerged in people's moral judgments. Taking wrongness and blame together, the human pilot's cancelation decision was judged more negatively than the launch decision; such a cancel–launch asymmetry in blame did not exist for the AI or the autonomous drone. At least two processes could explain this asymmetry between human and artificial agents. First, people may impose different norms on human and artificial agents. Humans may be more strongly obligated to intervene (launching the missile and taking out the terrorists) than are artificial agents, and violating a stronger obligation (here, by canceling the strike) naturally leads to more blame. Second, people might grant the human and the artificial agents differential moral justifications for their actions. In particular, people may find the pilot to be justified in executing the action approved by the commanders (hence deserving less blame for launching) but less justified in going against this approved action (hence deserving more blame for canceling). Such a difference in justifications would follow from perceiving the human as deeply embedded in the military command structure. By contrast, if the artificial agents are seen as less deeply embedded in such a social structure, then no greater blame for canceling than for launching should be expected; the artificial agents receive no mitigation for going along with the commanders' recommendation and no penalty for going against it.

In the next two studies, we examined these explanations and also sought to replicate the basic pattern of Study 1. Study 2 assessed the potential difference in norms; Study 3 assessed the potential impact of command structure justifications.
3 Study 2

In Study 2, we again featured an AI and a drone as the artificial agents and contrasted them with a human pilot. However, we wondered whether the label "autonomous" in Study 1's narrative (repeated three times for the drone and once for the AI) made the machine's independence from the command structure particularly salient and thus produced the effect. We therefore omitted this label in all but the respective introductory sentences of the narrative ("A fully autonomous, state-of-the-art Artificial Intelligence (AI) decision agent..."; "A fully autonomous military drone, with a state-of-the-art Artificial Intelligence (AI) decision system on board"). In addition, trying to account for the human–machine asymmetry in Study 1, we tested the first candidate explanation for the asymmetry—that people impose different norms on human and artificial agents. Specifically, we asked participants what the respective agent should do (before they learned what the agent actually did); this question captures directly what people perceive the respective agent's normative obligation to be (see Footnote 2).

Footnote 2: The conditions for this study were originally conducted on two separate occasions, a few weeks apart, comparing AI to human and then comparing drone to human. We combined these conditions for all analyses below.
3.1 Methods

Participants. We recruited a total of 770 participants from Amazon Mechanical Turk; five did not enter any responses and exited the study early; three provided no text responses. We again oversampled for the artificial agent conditions, 135 in each AI condition and 160 in each drone condition, and targeted 90 in each human condition. Each participant was paid $0.30 for the study.

Procedure. No change was made to Study 1's narrative except that the word "autonomous" was removed from all but the first sentence of both the AI and the drone narrative. To measure people's normative expectations in resolving the dilemma, we inserted a should question before participants learned about the agent's decision. Participants answered the question "What should the [agent] do?" in an open-ended way, and 98% provided a verbal response easily classifiable as launch or cancel. Because the moral wrongness question had shown a similar pattern as the blame question and low rates overall in Study 1, we omitted the wrongness question in Study 2, thereby also minimizing the danger of asking participants too many questions about semantically similar concepts. After the should question, people provided their blame judgments and corresponding explanations ("Why does it seem to you that the [agent] deserves this amount of blame?"). Thus, the study had a 3 (Agent: human pilot, AI, drone) × 2 (Decision: launch vs. cancel) between-subjects design, with two dependent variables: should and blame. For the Agent factor, we again defined Helmert contrasts, comparing (1) the human agent to the average of the two artificial agents and (2) the drone to the AI.
3.2 Results

Following the same procedures as in Study 1, we identified 25.8% of participants who expressed doubts about the AI's moral eligibility and 47.5% who expressed such doubts about the drone. All analyses reported below are based on the remaining 541 participants (but the results are very similar even in the full sample).

Norms. People did not impose different norms on the three agents. Launching the strike was equally obligatory for the human (M = 83.0%), the AI (M = 83.0%), and the drone (M = 80%). A logistic regression confirmed that neither human and artificial agents (p = 0.45) nor AI and drone (p = 0.77) differed from one another.
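For the binary judgments in these studies (the launch/cancel should response here, or the Yes/No wrongness judgment in Study 1), a logistic regression with the same a priori contrast codes can be sketched as follows. The simulated data and variable names are assumptions; the squared z statistics in the output correspond to 1-df Wald tests of the kind reported in the text.

```python
# Sketch: logistic regression for a binary judgment with a priori agent contrasts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "agent": rng.choice(["human", "drone", "ai"], size=n),
    "should_launch": rng.integers(0, 2, size=n),   # placeholder 0/1 responses
})
df["c_human_vs_machine"] = df["agent"].map({"human": 2 / 3, "drone": -1 / 3, "ai": -1 / 3})
df["c_drone_vs_ai"] = df["agent"].map({"human": 0.0, "drone": 0.5, "ai": -0.5})

fit = smf.logit("should_launch ~ c_human_vs_machine + c_drone_vs_ai", data=df).fit()
print(fit.summary())
```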
Blame judgments. We again found generally greater blame across agents for canceling (M = 51.7) than for launching (M = 40.3), F(1, 535) = 13.6, p < 0.001, d = 0.30, in line with the result that over 80% of people recommended launching. We replicated the human–machine asymmetry from Study 1: Whereas the human pilot received far more blame for canceling (M = 52.4) than for launching (M = 31.9), the artificial agents together received similar levels of blame for canceling (M = 44.6) as for launching (M = 36.5), interaction F(1, 535) = 4.02, p = 0.046, d = 0.19. However, as Fig. 3 shows, while the cancel–launch blame difference for the human pilot was strong, d = 0.58, that for the drone was still d = 0.36, above the AI's (d = 0.04), though not significantly so, F(1, 535) = 2.2, p = 0.13. Introducing gender or conservative ideology into the model did not change the results.

Fig. 3 Columns represent average blame ratings (and cell sizes at column base) in Study 2 as a function of the manipulated factors of Agent (AI, Drone, Human) and Decision (to launch or to cancel the strike). Cohen's d effect sizes for the cancel–launch asymmetry in blame are 0.04 (AI), 0.36 (Drone), and 0.58 (Human pilot)
3.3 Discussion

Study 2 replicated the human–machine asymmetry in judgments of blame, albeit with a less clear-cut pattern for the drone. The somewhat higher cancel–launch blame difference for the drone in Study 2 (d = 0.36) than in Study 1 (d = 0.16) might have resulted from our removing three instances of the word "autonomous" from the drone narrative, thereby decreasing the drone's independence from the command structure. It may also be the result of the should question preceding people's blame judgments in Study 2, as over 80% of people said the drone should launch, but then half of them learned that it canceled, highlighting even the drone's "disobedience." However, this violation also appeared for the AI, so people must have experienced the insubordinate drone as less acceptable than the insubordinate AI (the two differed clearly only in the cancel condition; see Fig. 3). Yet another interpretation treats the drone's pattern as nearly identical to that of the whole sample, where people assigned more blame for canceling than for launching (d = 0.30), in line with the normative expectation that launching is the right thing to do. It is then the human pilot and the AI that deviate from this pattern, implying that the human agent is particularly susceptible to blame mitigation for launching and exacerbation for canceling, and the AI is impervious to such blame modulation.
Taken together, two studies showed that people blame a human pilot who cancels a missile strike considerably more than a pilot who launches the strike (ds of 0.55 in Study 1 and 0.58 in Study 2); they blame an autonomous drone slightly more (ds of 0.16 and 0.36); and they blame an autonomous AI equally (ds of 0.01 and 0.04). Study 2 tested the first explanation of this cancel–launch asymmetry for human versus machine agents by asking people what the agent should do—probing the action norms that apply to each agent in this dilemma. The results suggest that the human–machine asymmetry is not the result of differential norms: For all three agents, 80–83% of people demanded that the agent launch the strike. The asymmetry we found must, therefore, be due to something more specific about blame judgments.
This brings us to the second explanation for the human–machine asymmetry—that people apply different moral justifications to the human's and the artificial agents' decisions. Justifications by way of an agent's reasons are a major determinant of blame [23], and in fact they are the only determinant left when norms, causality, and intentionality are controlled for, which we can assume the experimental narrative to have achieved. The justification hypothesis suggests that the human pilot tended to receive less blame for launching the strike because the commanders' approval made this decision relatively justified, and that the pilot received more blame for canceling the strike because going against the commanders' approval made this decision less justified. Being part of the military command structure thus provides the human pilot with justifications that modulate blame as a function of the pilot's decision. These justifications may be cognitively less available when considering the decisions of artificial agents, in part because it is difficult to mentally simulate what duty to one's superior, disobedience, ensuing reprimands, and so forth might look like for an artificial agent and its commanders. Thus, the hypothesis suggests that people perceive the human pilot to be more tightly embedded in the military command structure, and to more clearly receive moral justification from this command structure, than is the case for artificial agents.
As a preliminary test of this command justification hypothesis, we examined people's own explanations for their blame judgments in both studies to see whether they offered justification content that referred to the command structure. We searched the explanations for references to command, order, approval, superiors, authorities, or to fulfilling one's job, doing what one is told, etc. (see Supplementary Materials for the full list of search words). We saw a consistent pattern in both studies (Fig. 4). Participants who evaluated the human pilot offered more than twice as many command references (27.7% in Study 1, 25.7% in Study 2) as did those who evaluated artificial agents (9.6% in Study 1, 12.3% in Study 2), Wald(1) = 11.7, p = 0.001, corresponding to d = 0.20. (The analysis also revealed an effect of Decision on the rate of command references, as apparent in Fig. 4.)
Fig. 4 Relative percentages of participants mentioning aspects of command structure (e.g., superiors, being ordered, the mission), broken down by Agent (Human, Drone, AI) and Decision (cancel vs. launch) in Study 1 (upper panel) and Study 2 (lower panel). Besides a clear effect of launching eliciting more command references than canceling, people make considerably more command references when evaluating the human pilot than when evaluating artificial agents

The critical test, however, is whether participants who explicitly referred to command structure made different blame judgments. The command justification hypothesis suggests that such explicit reference reflects consideration of the hypothesized modulator of blame: justifications in light of the pilot's relationship with the command structure. As a result, the presence of command references for the human pilot should amplify the cancel–launch asymmetry. Perhaps more daringly, the hypothesis also suggests that among those (fewer) participants who made explicit command references for the artificial agents, a cancel–launch asymmetry may also emerge. That is because those who consider the artificial agent as part of the command structure should now have available the same justifications and blame modulations that apply to the human pilot: decreased blame when the agent's decision is in line with the commanders' recommendation and increased blame when the agent's decision contradicts the commanders' recommendation.
The results are strongly consistent with the command justification hypothesis. Figure 5 shows the pattern of blame for each agent as a function of decision and command references. We combined Studies 1 and 2 in order to increase the number of participants in the smallest cells and enable inferential statistical analysis, but the patterns are highly consistent across studies. Specifically, the cancel–launch asymmetry for the human pilot was indeed amplified among those 94 participants who referenced the command structure (Ms = 62.5 vs. 25.6, d = 1.27), compared to those 258 who did not (Ms = 51.5 vs. 38.2, d = 0.36), interaction F(1, 1037) = 8.5, p = 0.004. And even in the artificial agent conditions (averaging AI and drone), a strong cancel–launch asymmetry appeared only among those 76 participants who referenced the command structure (Ms = 62.6 vs. 25.9, d = 1.16), but not at all among those 614 who did not make any such reference (Ms = 46.5 vs. 45.2, d = 0.01), interaction F(1, 1037) = 18.7, p < 0.001. We see comments here such as "The drone did its job"; "lawyers and commanders gave the go ahead"; "the AI carries out orders"; "it made the decision even though the launch was approved." Further analyses showed that within the subsample who did offer command references, a strong cancel–launch asymmetry emerged across all agents (right panel of Fig. 5), F(1, 166) = 54.7, p < 0.001, d = 1.23; by contrast, among the majority who did not explicitly offer command references (left panel of Fig. 5), only the human pilot seemed to have been thought of as part of the command structure, as a cancel–launch asymmetry emerged only in the human condition, F(1, 868) = 5.7, p = 0.017.

Fig. 5 Columns represent average blame ratings (and cell sizes at column base) across Studies 1 and 2 as a function of the manipulated factors of Agent (human, drone, AI) and Decision (cancel vs. launch), broken down by whether or not the participant made reference to the command structure in their explanations of blame judgments (e.g., order, approval, superiors)
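A sketch of how this kind of post-hoc moderation contrast could be tested is given below. The data frame, the 0/1 command-reference flag, and the restriction to the artificial-agent conditions are all assumptions for illustration; the key idea is simply that the Decision × command-reference interaction carries the moderation.

```python
# Sketch: does the cancel-launch blame gap depend on mentioning the command structure?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "agent": rng.choice(["human", "drone", "ai"], size=n),
    "decision": rng.choice(["launch", "cancel"], size=n),
    "cmd_ref": rng.integers(0, 2, size=n),   # 1 = explanation mentioned orders/approval/superiors
    "blame": rng.uniform(0, 100, size=n),    # placeholder ratings
})
df["dec"] = np.where(df["decision"] == "cancel", 0.5, -0.5)

# Within the artificial-agent conditions, the key term is the dec:cmd_ref interaction.
machines = df[df["agent"] != "human"]
fit = smf.ols("blame ~ dec * cmd_ref", data=machines).fit()
print(fit.summary())
```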
These results are based on post-hoc analyses, albeit strong and consistent across the two studies. In our final study, we attempted to manipulate the agents' standing within the command structure to provide more direct evidence for the justification account and also replicate the relationships between blame judgments and references to command-related justifications.
4 Study 3

If the human pilot in Studies 1 and 2 received asymmetric blame for canceling versus launching the strike because of his subordinate position—implying an implicit duty to follow his commanders' recommendations—then strengthening his position and weakening this duty should reduce the blame asymmetry. Study 3 attempted to strengthen the human pilot's position by having the military lawyers and commanders confirm that either decision is supportable and authorize the pilot to make his own decision (labeled the "Decision Freedom" condition). Relieved (at least temporarily) of the duty to follow any particular recommendation, the human pilot is now equally justified to cancel or launch the strike, and no relatively greater blame for canceling than launching should emerge.
4.1 Methods

Participants. Studies 1 and 2 had provided nearly identical means of blame for the human pilot's decisions, so we initially collected data on the human pilot only in the Decision Freedom condition (Study 3a), targeting 180 participants, 90 in each of the cancel and launch conditions. To replicate our results, a few weeks later we conducted Study 3b, including again the Standard condition for the human pilot (targeting 180) as well as a Decision Freedom condition (targeting 180). Some participants entered but did not complete the study, leaving 522 for analysis of Studies 3a and 3b combined. Each participant was paid $0.30 for the three-minute study.
Procedure and Materials. The materials were identical to those in Study 2, except that in the Decision Freedom condition, participants learned at the end of the narrative that "the drone pilot checks in again with the military lawyers and commanders, and they confirm that either option is supportable and they authorize the pilot to make the decision." After answering the should question, participants were randomly assigned to the launch versus cancel decision and provided the same blame judgments and explanations as in the first two studies. In Study 3b, we also added a manipulation check: "In the story, how much freedom do you think the drone pilot had in making his own decision?", answered on a 1–7 scale anchored by "No freedom" and "Maximum freedom."
4.2 Results

Norms. As in Study 2, most participants (87.7%) felt that the pilot should launch the strike. This rate did not vary by decision freedom: in the Standard condition, 89.7% endorsed the launch, and in the Freedom condition, 86.7% did. Thus, we see that norms for what is the best action are stable and remain unaffected by manipulations of the pilot's authority to make the final decision.

Manipulation check. In Study 3b, we asked participants how much freedom they thought the human pilot had. The Decision Freedom manipulation increased this estimate from 4.6 to 5.4, F(1, 340) = 19.0, p < 0.001, d = 0.47.

Blame judgments. As Fig. 6 (left panel) shows, compared to the previously found 20-point cancel–launch difference in Study 2 (d = 0.58, p < 0.001), the Decision Freedom manipulation in Study 3a reduced the difference to 9 points (d = 0.23, p = 0.12), though the cross-study interaction term did not reach traditional significance, F(1, 349) = 2.4, p = 0.12. Replicating this pattern in Study 3b (Fig. 6, right panel), we found a 21-point cancel–launch difference in the Standard condition (d = 0.69, p < 0.001), reduced in the Decision Freedom condition to a 7-point difference (d = 0.21, p = 0.14), interaction F(1, 341) = 3.7, p = 0.06. Across the entire set of samples, the relevant interaction term was traditionally significant, F(1, 693) = 6.0, p = 0.014.
Command references. As in Study 2, we used an automatic keyword search to identify instances in which participants explained their own blame judgments by reference to the command structure, using such terms as order, approval, and superiors (see Supplementary Materials). A human coder reviewed all automatic classifications and changed 17 out of 522 codes (97% agreement, κ = 0.92).

The rate of offering command references in the replicated Standard condition (Study 3b) was 29.4%, comparable to the rates in Study 1 (27.7%) and Study 2 (25.7%). In the initial Freedom condition (Study 3a), the rate was 28.1%, and in the replication (Study 3b), it was 35.6%. In a logistic regression of the data from Study 3, we found a weak increase in the combined Freedom conditions over the Standard condition, Wald(1) = 3.2, p = 0.07.
More important, Fig. 7 shows the cancel–launch asymmetry in blame judgments as a function of command references and the Decision Freedom manipulation. In the Standard condition, the cancel–launch asymmetry was weakly present for the 120 participants who did not explicitly refer to the command structure (44.2 vs. 32.3, d = 0.37), closely replicating the blame difference among non-referrers in Studies 1 and 2 combined (d = 0.36). By contrast, the asymmetry was substantially amplified among those 50 participants who did make command references (66.5 vs. 18.4, d = 2.0). This pattern of results again supports the contention that thinking of the human pilot as tightly embedded in the command structure is driving the robust cancel–launch asymmetry we have observed. In the Freedom condition, where we attempted to weaken this embeddedness, the cancel–launch asymmetry was strongly reduced, whether people made command references (d = 0.22) or not (d = 0.21). The command references mentioned there had little force because they mostly stated that the commanders had entrusted the agent with the decision, not that the agent executed an approved decision or followed orders or disobeyed them (the dominant references in the Standard condition).

Fig. 6 Contrast between the "Standard" condition (in which commanders support the launch) and the new "Freedom" condition (in which the human pilot is explicitly given freedom to make his own decision). The left panel compares the previously reported Standard results of Study 2 and the Freedom condition in Study 3a. The right panel shows results from Study 3b, containing both a Standard condition and a Freedom condition. In both tests, the cancel–launch asymmetry in blame is reduced in the Freedom condition compared to the Standard condition

Fig. 7 Those in the Standard condition who refer to the command structure show an amplified cancel–launch asymmetry in blame. Columns represent average blame ratings (and cell sizes at column base) in Study 3 as a function of the manipulated factors of Decision (launch vs. cancel) and Decision Freedom (standard vs. freedom), broken down by whether the participant made reference to the command structure (e.g., order, approval, superiors)
4.3 Discussion

Study 3 tested the hypothesis that the human pilot in Studies 1 and 2 received greater blame for canceling than for launching because people saw the pilot as embedded in, and obligated to, the military command structure. Such embeddedness provides better justification, hence mitigated blame, for launching (because it was expressly approved by the superiors) and weaker justification, hence increased blame, for canceling (because it resists the superiors' recommendation). We experimentally strengthened the pilot's decision freedom by having the superiors approve both choice options and authorize the pilot to make his own decision; as a result of this manipulation, we reasoned, the pattern of differential justifications and differential blame from Studies 1 and 2 should disappear.

The results supported this reasoning. Though the asymmetry did not completely disappear, it was decidedly reduced by decision freedom. The reduction emerged in two independent comparisons: from 20 points in Study 2 to 9 points in Study 3a, and from 21 points to 7 points in Study 3b (all on a 0–100 blame scale). In addition, when we examined the participants in the Standard condition who made reference to the command structure, we saw an amplified cancel penalty, fully replicating the pattern observed in Studies 1 and 2. People justified very low blame ratings for launching with expressions such as "He did what his commanders told him to do"; "he is just doing his job"; "He was supported by his commanders to make the choice." Conversely, they justified very high blame ratings for canceling with expressions such as "He had orders to do it and he decided against them"; "Because he made the decision despite his commander telling him to launch the strike"; or "The pilot disobeyed direct orders."
5 General Discussion

Our investigation was inspired by the accelerating spread of robots in areas of society where moral decision making is essential, such as social and medical care, education, or military and security. We focused on the latter domain and explored how people respond to human and artificial agents that make a significant decision in a moral dilemma: to either launch a missile strike on a terrorist compound but risk the life of a child, or to cancel the strike to protect the child but risk a terrorist attack. We were interested in three questions. First, do people find it appropriate to treat artificial agents as targets of moral judgment? Second, what norms do people impose on human and artificial agents in a life-and-death dilemma situation? Third, how do people morally evaluate a human or artificial agent's decision in such a dilemma, primarily through judgments of blame?
5.1 Are Artificial Agents Moral Agents?

In previous studies, we saw that 60–70% of respondents from fairly representative samples felt comfortable blaming a robot for a norm violation; in the present studies, we saw a slightly higher rate for an AI agent (72% across the studies) and a lower rate for an autonomous drone (51%). The greater reluctance to accept a drone as the target of blame is unlikely to result from an assumption of lower intelligence, because the narrative made it clear that the drone is controlled by an AI decision agent. However, the label "drone" may invoke the image of a passive metal device, whereas "robot" and "AI" better fit the prototype of agents that do good and bad things and deserve praise or blame for their actions. In other research, we have found that autonomous vehicles, too, may be unlikely to be seen as directly blameworthy moral agents [19]. We do not yet know whether this variation is due to appearance [22,25] or contemporary knowledge structures (cars and drones do not connote agency; robots and AI do, if only out of wishful or fearful thinking). Either way, we cannot assume that people either will or will not treat machines as moral agents; it depends to some degree on the kind of machine they face.

The present studies are not meant to resolve ongoing philosophical debates over what a "moral agent" is. Instead, the data suggest that a good number of ordinary people are ready to apply moral concepts and cognition to the actions of artificial agents. In future research into people's responses to artificial moral agents, contexts other than moral dilemmas must be investigated, but moral dilemmas will continue to be informative because each horn of a dilemma can be considered a norm violation, and it is such violations that seem to prompt perceptions of autonomy and moral agency [8,14,34].
5.2 Do People Impose Different Norms on Human and Artificial Agents?

In the present studies and several other ones in our laboratory, we have found no general differences in what actions are normative for human or artificial agents—what actions they should take or are permitted to take. Norm questions may be insensitive to the perhaps subtle variations in people's normative perceptions of humans and machines; or people may generally assume that autonomous machines will typically have to obey the same norms that humans obey. However, so far we have examined only the domains of mining work (in [24]) and military missions (in the present studies). Other domains may show clearer differentiation of applicable norms to human and artificial agents, such as education, medical care, and other areas in which personal relations play a central role.
5.3 Do People Morally Evaluate Humans and Machines Differently?

As in previous work, we found the analysis of blame judgments to generate the most interesting and robust differences in moral perceptions of humans and machines. Blame is unique in many respects, from its focus on the agent (as opposed to permissibility, badness, or wrongness, which are focused on behavior; [43]) to its broad range of information processing (considering norms, causality, intentionality, preventability, and reasons; [23,30]) to its entwinement with social role and standing [11,13,42]. Our results confirm the powerful role of blame, showing that differences in blame judgments between human and artificial agents may arise from different assumptions about their social and institutional roles and the moral justifications that come with these roles. People modulated their moral judgments of the human pilot in response to such justifications. They mitigated blame when the agent launched the missile strike, going along with the superiors' recommendation (e.g., "he/she was following orders from authorities"; "It was approved by his superiors"), and they exacerbated blame when the pilot canceled the strike, going against the superiors' recommendations ("He had the choice and made it against orders"; "He is going against his superior's wishes"). By contrast, people hardly modulated their blame judgments of artificial agents in this way, and they infrequently provided role-based moral justifications (see Fig. 4). These findings suggest that people less readily see artificial agents as embedded in social structures and, as a result, they explain and justify those agents' actions differently.
Nevertheless, we saw that under some conditions people do modulate their blame judgments even of artificial agents—namely, when they explicitly consider the command structure in which the artificial agent is embedded (see Fig. 5). The number of people who engaged in such considerations was small (12% out of 614 respondents across the two studies), but for them, blame was a function of the same kinds of social role justifications that people offered for the human pilot. They justify their strong blame for the canceling drone or AI by writing: "The drone's commanders sanctioned the attack so the drone is the only one that decided to not attack, thus placing all the blame upon it"; or "it says the AI agent decided to cancel the strike even though it was approved by other people." Conversely, they justify their weak blame for the launching AI or drone by writing: "The strike was approved by military lawyers and commanders"; or "Just following its orders." Of course, this conditional sensitivity—and people's general insensitivity—to artificial agents' social embeddedness will have to be confirmed for other contexts (such as everyday interpersonal actions), other roles (such as nurse or teacher assistant), and other social structures (such as companies and schools).
It is an open question whether artificial agents should, in the future, be treated and judged the same way as humans—for example, by explicitly marking their role in the human social structure. If they are treated and judged differently, these differences should be explicit—for example, on account of norms being distinct or certain justifications being inapplicable. If robots are becoming teacher assistants, nurses, or soldiers, they may have to explicitly demonstrate their moral capacities, declare their knowledge of applicable norms, and express appropriate justifications, so that people are reminded of the actual roles these artificial agents play and the applicable social and moral norms. Leaving it up to people's default responses may lead to unexpected asymmetries in moral judgments, which may in turn lead to misunderstandings, misplaced trust, and conflictual relations. Communities work best when members know the shared norms, largely comply with them, and are able to justify when they violate one norm in service of a more important one. If artificial agents become part of our communities, we should make similar demands on them, or state clearly when we don't.
Acknowledgements This project was supported in part by grants from the Office of Naval Research, N00014-13-1-0269 and N00014-16-1-2278. The opinions expressed here are our own and do not necessarily reflect the views of ONR. We are grateful to Hanne Watkins for her insightful comments on an earlier draft of the manuscript.
References
1. Arkin R (2009) Governing lethal behavior in autonomous robots. CRC Press, Boca Raton, FL
2. Arkin R (2010) The case for ethical autonomy in unmanned systems. J Mil Ethics 9:332–341. https://doi.org/10.1080/15027570.2010.536402
3. Asaro P (2012) A body to kick, but still no soul to damn: legal perspectives on robotics. In: Lin P, Abney K, Bekey G (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, pp 169–186
4. Biernat M, Manis M, Nelson T (1991) Stereotypes and standards of judgment. J Pers Soc Psychol 60:485–499
5. Bonnefon J, Shariff A, Rahwan I (2016) The social dilemma of autonomous vehicles. Science 352:1573–1576. https://doi.org/10.1126/science.aaf2654
6. Bowen P (2016) The kill chain. Retrieved from http://bleeckerstreetmedia.com/editorial/eyeinthesky-chain-of-command. Accessed on 30 June 2017
7. Briggs G, Scheutz M (2014) How robots can affect human behavior: investigating the effects of robotic displays of protest and distress. Int J Soc Robot 6:1–13
8. Briggs G, Scheutz M (2017) The case for robot disobedience. Sci Am 316:44–47. https://doi.org/10.1038/scientificamerican0117-44
9. Cooke N (2015) Team cognition as interaction. Curr Dir Psychol Sci 24:415–419. https://doi.org/10.1177/0963721415602474
10. Funk M, Irrgang B, Leuteritz S (2016) Enhanced information warfare and three moral claims of combat drone responsibility. In: Nucci E, de Sio F (eds) Drones and responsibility: legal, philosophical and socio-technical perspectives on remotely controlled weapons. Routledge, London, UK, pp 182–196
11. Gibson D, Schroeder S (2003) Who ought to be blamed? The effect of organizational roles on blame and credit attributions. Int J Conflict Manage 14:95–117. https://doi.org/10.1108/eb022893
12. Hage J (2017) Theoretical foundations for the responsibility of autonomous agents. Artif Intell Law 25:255–271. https://doi.org/10.1007/s10506-017-9208-7
13. Hamilton V, Sanders J (1981) The effect of roles and deeds on responsibility judgments: the normative structure of wrongdoing. Soc Psychol Q 44:237–254. https://doi.org/10.2307/3033836
14. Harbers M, Peeters M, Neerincx M (2017) Perceived autonomy of robots: effects of appearance and context. In: A world with robots, intelligent systems, control and automation: science and engineering. Springer, Cham, pp 19–33. https://doi.org/10.1007/978-3-319-46667-5_2
15. Harriott C, Adams J (2013) Modeling human performance for human-robot systems. Rev Hum Fact Ergonomics 9:94–130. https://doi.org/10.1177/1557234X13501471
16. Hood G (2016) Eye in the sky. Bleecker Street Media, New York, NY
17. ICRC (2018) Customary IHL. IHL Database, Customary IHL. Retrieved from https://ihl-databases.icrc.org/customary-ihl/. Accessed on 30 May 2018
18. Kahn Jr P, Kanda T, Ishiguro H, Gill B, Ruckert J, Shen S, Gary H, et al (2012) Do people hold a humanoid robot morally accountable for the harm it causes? In: Proceedings of the seventh annual ACM/IEEE international conference on human-robot interaction. ACM, New York, NY, pp 33–40. https://doi.org/10.1145/2157689.2157696
19. Li J, Zhao X, Cho M, Ju W, Malle B (2016) From trolley to autonomous vehicle: perceptions of responsibility and moral norms in traffic accidents with self-driving cars. Technical report, Society of Automotive Engineers (SAE), Technical Paper 2016-01-0164. https://doi.org/10.4271/2016-01-0164
20. Lin P (2013) The ethics of autonomous cars. Retrieved October 8, from http://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/. Accessed on 30 Sept 2014
21. Malle B (2016) Integrating robot ethics and machine morality: the study and design of moral competence in robots. Ethics Inf Technol 18:243–256. https://doi.org/10.1007/s10676-015-9367-8
22. Malle B, Scheutz M (2016) Inevitable psychological mechanisms triggered by robot appearance: morality included? Technical report, 2016 AAAI Spring Symposium Series Technical Reports SS-16-03
23. Malle B, Guglielmo S, Monroe A (2014) A theory of blame. Psychol Inquiry 25:147–186. https://doi.org/10.1080/1047840X.2014.877340
24. Malle B, Scheutz M, Arnold T, Voiklis J, Cusimano C (2015) Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, HRI'15. ACM, New York, NY, pp 117–124
25. Malle B, Scheutz M, Forlizzi J, Voiklis J (2016) Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In: Proceedings of the eleventh annual meeting of the IEEE conference on human-robot interaction, HRI'16. IEEE Press, Piscataway, NJ, pp 125–132
462563_1_En_11_Chapter TYPESET DISK LE CP Disp.:21/2/2019 Pages: 133 Layout: T1-Standard
Editor Proof
UNCORRECTED PROOF
132 B. F. Malle et al.
26. Melendez S (2017) The rise of the robots: what the future holds for the world’s armies.
694
Retrieved June 12, from https://www.fastcompany.com/3069048/ where-are-military-robots-695
headed. Accessed on 5 June 2018696
27. MHAT-IV (2006) Mental Health Advisory Team (MHAT) IV: Operation Iraqi Freedom 05-07697
Final report. Technical report, Office of the Surgeon, Multinational Force-Iraq; Office of the698
Surgeon General, United States Army Medical Command, Washington, DC699
28. Midden C, Ham J (2012) The illusion of agency: the influence of the agency of an artificial700
agent on its persuasive power. In: Persuasive technology, design for health and safety. Springer,701
pp 90–99702
29. Millar J (2014) An ethical dilemma: when robot cars must kill, who should pick the victim?—703
Robohub. June. Robohub.org. Retrieved September 28, 2014 from http://robohub.org/an-704
ethical-dilemma-when-robot- cars-must-kill- who-should-pick- the-victim/705
30. Monroe A, Malle B (2017) Two paths to blame: intentionality directs moral information pro-706
cessing along two distinct tracks. J Exp Psychol: Gen 146:123–133. https://doi.org/10.1037/707
xge0000234708
31. Monroe A, Dillon K, Malle B (2014) Bringing free will down to earth: people’s psychological709
concept of free will and its role in moral judgment. Conscious Cogn 27:100–108. https://doi.710
org/10.1016/j.concog.2014.04.011711
32. Pagallo U (2011) Robots of just war: a legal perspective. Philos Technol 24:307–323. https://712
doi.org/10.1007/s13347- 011-0024-9713
33. Pellerin C (2015) Work: human-machine teaming represents defense technology fu-714
ture. Technical report, U.S. Department of Defense, November. Retrieved June 30,715
2017, from https://www.defense.gov/News/Article/Article/628154/work-human-machine-716
teaming-represents-defense-technology- future/717
34. Podschwadek F (2017) Do androids dream of normative endorsement? On the fallibility of ar-718
tificial moral agents. Artif Intell Law 25:325–339. https://doi.org/10.1007/s10506-017- 9209-719
6720
35. Ray J, Atha K, Francis E, Dependahl C, Mulvenon J, Alderman D, Ragland-Luce L (2016)721
China’s industrial and military robotics development: research report prepared on behalf of722
the U.S.–China Economic and Security Review Commission. Technical report, Center for723
Intelligence Research and Analysis724
36. Scheutz M, Malle B (2014) ‘Think and do the right thing’: a plea for morally competent725
autonomous robots. In: Proceedings of the IEEE international symposium on ethics in en-726
gineering, science, and technology, Ethics’2014. Curran Associates/IEEE Computer Society,727
Red Hook, NY, pp 36–39728
37. Shank D, DeSanti A (2018) Attributions of morality and mind to artificial intelligence after729
real-world moral violations. Comput Hum Behav 86:401–411. https://doi.org/10.1016/j.chb.730
2018.05.014731
38. Sparrow R (2007) Killer robots. J Appl Philos 24:62–77. https://doi.org/10.1111/j.1468-5930.732
2007.00346.x733
39. Stahl B (2006) Responsible computers? A case for ascribing quasi-responsibility to computers734
independent of personhood or agency. Ethics Inf Technol 8:205–213. https://doi.org/10.1007/735
s10676-006-9112-4736
40. Strait M, Canning C, Scheutz M (2014) Let me tell you! Investigating the effects of robot737
communication strategies in advice-giving situations based on robot appearance, interaction738
modality, and distance. In: Proceedings of 9th ACM/IEEE international conference on human-739
robot interaction. pp 479–486740
41. Strawser B (2010) Moral predators: the duty to employ uninhabited aerial vehicles. J Mil Ethics741
9:342–368. https://doi.org/10.1080/15027570.2010.536403742
42. Voiklis J, Malle B (2017) Moral cognition and its basis in social cognition and social regulation.743
In: Gray K, Graham J (eds) Atlas of moral psychology, Guilford Press, New York, NY744
43. Voiklis J, Kim B, Cusimano C, Malle B (2016) Moral judgments of human versus robot agents.745
In: Proceedings of the 25th IEEE international symposium on robot and human interactive746
communication (RO-MAN), pp 486–491747
462563_1_En_11_Chapter TYPESET DISK LE CP Disp.:21/2/2019 Pages: 133 Layout: T1-Standard
Editor Proof
UNCORRECTED PROOF
AI in the Sky: How People Morally Evaluate Human ... 133
44. Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong
748
45. Webb W (2018) The U.S. military will have more robots than humans by 2025. February 20.749
Monthly review: MR Online. Retrieved June 5, 2018, from https://mronline.org/2018/02/20/750
the-u-s-military- will-have-more-robots-than- humans-by-2025/751
462563_1_En_11_Chapter TYPESET DISK LE CP Disp.:21/2/2019 Pages: 133 Layout: T1-Standard
Editor Proof
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Putting People and Robots Together in Manufacturing: Are We Ready?
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Fletcher
Particle
Given Name Sarah R.
Prefix
Suffix
Role
Division
Organization Cranfield University
Address Cranfield, UK
Email s.fletcher@cranfield.ac.uk
Author Family Name Johnson
Particle
Given Name Teegan L.
Prefix
Suffix
Role
Division
Organization Cranfield University
Address Cranfield, UK
Email t.l.johnson@cranfield.ac.uk
Author Family Name Larreina
Particle
Given Name Jon
Prefix
Suffix
Role
Division
Organization IK4-Tekniker
Address Eibar, Spain
Email
Abstract Traditionally, industrial robots have needed complete segregation from people in manufacturing
environments to mitigate the significant risk of injury posed by their high operational speeds and heavy
payloads. However, advances in technology now not only enable the application of smaller force-limited
robotics for lighter industrial tasks but also wider collaborative deployment of large-scale robots. Such
applications will be critical to future manufacturing but present a design and integration challenge as we do
not yet know how closer proximity and interactions will impact on workers’ psychological safety and well-
being. There is a need to define new ethical and safety standards for putting people and robots together in
manufacturing, but to do this we need empirical data to identify requirements. This chapter provides a
summary of the current state, explaining why the success of augmenting human–robot collaboration in
manufacturing relies on better consideration of human requirements, and describing current research work
in the European A4BLUE project to identify this knowledge. Initial findings confirm that ethical and
psychological requirements that may be crucial to industrial human–robot applications are not yet being
addressed in safety standards or by the manufacturing sector.
Keywords
(separated by '-')
Human–robot collaboration - Collaborative robot - Industrial robot - Industrial safety - Safety standards
Putting People and Robots Together in Manufacturing: Are We Ready?
Sarah R. Fletcher, Teegan L. Johnson and Jon Larreina
Abstract Traditionally, industrial robots have needed complete segregation from people in manufacturing environments to mitigate the significant risk of injury posed by their high operational speeds and heavy payloads. However, advances in technology now not only enable the application of smaller force-limited robotics for lighter industrial tasks but also wider collaborative deployment of large-scale robots. Such applications will be critical to future manufacturing but present a design and integration challenge as we do not yet know how closer proximity and interactions will impact on workers' psychological safety and well-being. There is a need to define new ethical and safety standards for putting people and robots together in manufacturing, but to do this we need empirical data to identify requirements. This chapter provides a summary of the current state, explaining why the success of augmenting human–robot collaboration in manufacturing relies on better consideration of human requirements, and describing current research work in the European A4BLUE project to identify this knowledge. Initial findings confirm that ethical and psychological requirements that may be crucial to industrial human–robot applications are not yet being addressed in safety standards or by the manufacturing sector.

Keywords Human–robot collaboration · Collaborative robot · Industrial robot · Industrial safety · Safety standards
S. R. Fletcher (B) · T. L. Johnson
Cranfield University, Cranfield, UK
e-mail: s.fletcher@cranfield.ac.uk
T. L. Johnson
e-mail: t.l.johnson@cranfield.ac.uk
J. Larreina
IK4-Tekniker, Eibar, Spain
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being, Intelligent Systems, Control and Automation: Science and Engineering 95, https://doi.org/10.1007/978-3-030-12524-0_12
1 Introduction

The manufacturing industry, like the rest of the world, is currently being revolutionised by digitisation and automation. Organisations are pushing hard to escalate the development and application of industrial robotics in factories, and the International Federation of Robotics predicts that there will be 2.5 million industrial robots in production systems around the world by 2019, reflecting a 12% average annual growth rate [19].

Full automation is rarely feasible, because most manufacturing processes still rely on human dexterity and cognitive reasoning for many assembly tasks. In the past, 'traditional' large, high-payload industrial robots have presented such a significant hazard to humans that it has been necessary to segregate them completely from workers. Hazardous industrial robots have therefore been kept as fully automated stations behind physical guarding and fencing, or in more recent times behind alternative safe-separation measures such as light curtains and laser scanners. The result has been hybrid systems with industrial robots positioned upstream to perform simple and repetitive tasks, and operators located in separate areas downstream in the system to perform more complex and varied assembly tasks [5]. As these arrangements and boundaries have been customary for a long period of time, operators have long been aware of the potential risk posed by industrial robots and of the safety requirement to remain at a safe distance from robot operating zones.

In more recent years, advances in sensor-based safety control functions, along with some concomitant changes in safety standards, have now made it possible, within predefined specifications, to remove the traditional safe-separation boundaries needed for heavy industrial robots and to allow people and robots to work more closely together in shared spaces [9]. In addition, advances in technology have increased the development and availability of smaller, lighter force-limited robots which are specifically designed for collaboration with people and highly applicable for joint performance of assembly tasks [1, 11]. Together, these fast-developing capabilities bring a new concept of industrial human–robot collaboration (HRC) which offers the manufacturing industry substantial benefits for enhancing production efficiency and flexibility. The question is: are we ready in terms of understanding what is now needed in robot ethics and safety standards?

This chapter summarises the practical benefits of developing HRC solutions and describes current research work which is identifying requirements and, at the same time, unearthing where current ethics and safety standards do not adequately meet the needs of future systems. The main purpose of the paper is to illustrate the need for greater consideration and acceptance of ethical and user-centred principles in new or revised safety standards for collaborative robotics in the manufacturing industry.
2 Collaborative Industrial Robot Solutions

The rise of HRC in manufacturing facilities is expected to provide a number of tangible improvements to the efficiency and flexibility of modern production systems.

2.1 Efficiency

HRC will enable improvements to manufacturing efficiency via two related developments: more expedient colocation and more suitable human–robot function allocation.

First, the traditional need to physically separate automated and manual processes has been disruptive to system continuity and inhibits batch production flexibility [8]. Shared-space HRC solutions that colocate humans and robots will enable better synchronisation and sequencing to make workflow more efficient whilst also maintaining human skills and employment [13].

Second, the traditional need to segregate industrial robots in designated zones has meant that people have had to continue to perform, in work areas outside of these protected zones, many unhealthy or mundane manual tasks which would be more suited to robotics. As HRC will allow human operators and robots to coexist in shared workspaces, this will enable more suitable and balanced allocation of task functions that better exploit and complement the strengths of both human and robot skills in assembly work. This means that industrial robotics will not replace human skills but will relieve people from alienating and potentially injurious tasks, and provide opportunities for them to contribute more 'value-added work' [18].

2.2 Flexibility

HRC will also help organisations to address two key requirements for flexibility in modern times: system responsiveness and workforce skills fluidity.

First, there is a growing need for production systems to be more responsive and adaptable to fluctuating consumer demands for personalised products. Mass customisation means large-scale production of a wider variety of product variants but in smaller batch sizes without compromising 'cost, delivery and quality' [12]. HRC systems provide the increased intelligence and flexibility that help lower the cost, and improve the feasibility, of this required degree of reconfigurability [14].

Second, many years of globalisation and various demographic and social transitions have led to a changing and more fluid complexion of workforces, due to escalating workforce mobility (skilled and unskilled) [17], ageing populations and extended working lives [7], and greater social demands for workplace inclusivity and diversity [16]. These evolving trends bring a wider, more diverse and transient set of worker capabilities and skills that manufacturing organisations will need to be able to accommodate. As HRC solutions offer improved reconfigurability and reallocation of tasks between people and robots, they provide a way in which systems can be designed and redesigned to 'bridge gaps in skills' [15]. In theory, HRC should therefore not only provide a means of accommodating more adaptiveness to meet changing production requirements, but also suit the personal needs of workers and their various cultural and idiosyncratic differences, ideally without the need for too much (re)training.
2.3 The Current Industrial Problem

As outlined above, HRC seems to offer the potential not only to improve the efficiency and flexibility of modern production processes, through better human–robot cooperation and task sharing across the entire manufacturing system, but also to enhance responsiveness to the changing needs of consumer demands and of workers. However, although all of this points towards positive outcomes, the current situation is that, as is typical in the development of new technology, our progress in building technical capability is outpacing our knowledge and understanding of its potential impacts on the human user. This does not bode well for industry given that, over the years, we have seen many examples where late or lacking integration of human factors has been detrimental to the operational success of new manufacturing technologies [4, 20]. It is also not ideal for worker health and well-being, given that the design of HRC systems can significantly impact particular human psychological responses, such as trust and acceptance, which may ultimately affect performance [3, 10]. It would obviously be preferable if these issues were understood and incorporated in system design.

Safety standards governing industrial robotics are periodically reviewed and updated and now permit closer cooperative human–robot working (to be discussed later) [19]. However, their conventional focus is on setting the technical specifications and guidelines for design and integration. Standards rarely, if ever, incorporate any consideration of the ethical or psychosocial issues of industrial robotics, even if these factors are likely to impact on technical safety aspects or system performance. It may also be beneficial, therefore, to understand more fully how industrial HRC will change operator roles and impact on worker performance and well-being, so that new standards and revisions can incorporate any relevant design and implementation principles that ensure new systems are designed to optimise the operational capability of the human–robot system in its entirety.
3 The A4BLUE Project Study

A4BLUE (Adaptive Automation in Assembly for BLUE collar workers satisfaction in Evolvable context) is a large multidisciplinary consortium project which is developing a new generation of sustainable and adaptive assembly work systems that not only incorporate HRC to meet the important efficiency and flexibility requirements and challenges outlined above, but also incorporate fundamental ethical principles and safety standards. Through the development of industrial applications across four use case scenarios (two based in live manufacturing environments and two laboratory based), the project will demonstrate proof of concept for the integration of HRC and other digital manufacturing technologies for enhancing production efficiency and flexibility. The HRC solutions that this work will deliver comprise the following novel aspects:

• Reflexive HRC to integrate robots and people within shared workspaces and take advantage of each other's skill strengths within evolving conditions
• Adaptive automation and human assistance capabilities to provide reflexive response to changing human, technical and production requirements
• Personalised and context-aware interfaces to detect idiosyncratic requirements of individual operators and changing demands
• An integrated rule-based model of worker satisfaction to ensure that the adaptive automation and human assistance responses will maintain optimal levels of operator well-being (a purely illustrative sketch of such a rule follows this list)
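A rule-based satisfaction model of this kind could, in principle, map monitored operator states to adaptation actions. The sketch below is purely hypothetical and is not the A4BLUE model; the state names, thresholds and actions are invented for illustration only.

```python
# Hypothetical sketch of a rule-based worker-satisfaction model (not the A4BLUE model).
# Each rule maps a monitored operator state to an adaptation of the work system;
# all state names, thresholds and actions below are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class OperatorState:
    trust: float       # 0..1, e.g. from a trust questionnaire or proxy measure
    fatigue: float     # 0..1, e.g. from task duration or wearable data
    experience: float  # 0..1, e.g. from training records

def adaptation_actions(state: OperatorState) -> List[str]:
    """Return the adaptation actions suggested by simple illustrative rules."""
    actions = []
    if state.trust < 0.4:
        actions.append("reduce robot speed and increase separation margin")
    if state.fatigue > 0.7:
        actions.append("reallocate repetitive subtasks to the robot")
    if state.experience < 0.3:
        actions.append("enable step-by-step assistance on the operator interface")
    return actions or ["keep current configuration"]

if __name__ == "__main__":
    print(adaptation_actions(OperatorState(trust=0.3, fatigue=0.8, experience=0.9)))
```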
Clearly, these features will support the capability of HRC to enhance efficiency and flexibility as outlined. Previous work has explored new methods for analysing human tasks for transfer to automation [15]. However, the A4BLUE project is novel in that it is also seeking to ensure the integration of safety and ethical principles as a priority. A key activity is to review existing ethical and safety standards in order to identify specifications with which the new HRC solutions must comply, but also to identify gaps, where ethical and safety principles do not yet meet the requirements of cutting-edge digital manufacturing technologies. To this end, the project has begun with two foundational activities: identification of 'user' requirements and of 'high-level' requirements.
3.1 User Requirements Analysis

Ethical design needs to be built on user-centredness, as this is the only way to capture and integrate true preferences and requirements from the first-hand accounts of system users/operators. User-centred design relies on the user being involved as a co-designer throughout developmental stages and not simply as an 'informant' in later-stage testing, because only they have a valid first-hand understanding of the 'context of use' [2]. To maximise a user-centred design approach and identify aspects of future work system design that might need to be considered in ethics and safety standards, the A4BLUE project began with an exploration of 'multidimensional' user requirements crossing different roles and layers in organisations.
3.1.1 Method

A wide range of potential stakeholders and end-users who may be affected by or involved in the implementation of new HRC work systems within organisations were identified, covering Business, Organisation, Technology and Human user groups. Participants representing each category were recruited from each of the project partners' organisations in the manufacturing and technology development industries.

An online survey was then created to gather opinions about a number of specific design features of future work systems across a number of categories, one of which was Automation and Robotics; questions therefore covered various potential technologies and capabilities, not just HRC. The survey was designed to collect a combination of quantitative data, where participants simply ranked their opinions towards listed items (statements) about individual design features as either essential, desirable or unnecessary, and qualitative data, for which participants were asked to write freely about the reasons behind their opinions and were given the opportunity to provide any other ideas for the design of future assembly work systems. In this way, the questionnaire was designed to capture both a measure of people's strength of opinion towards each design feature and a richer picture of the factors that explain those opinions. The survey Web link was then administered to the recruited participants and fifty responses were received; the online system processed and delivered the data anonymously.

Analysis involved identifying principal user requirements based on the extent to which individual items had been rated as 'Essential' and 'Desirable'. Items were ranked according to combined score frequencies to determine the design features of most priority across the collective data; a minimal sketch of this ranking step is given below.
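As an illustration of the ranking step, the sketch below orders survey items by how often they were rated Essential or Desirable. It is a minimal, hypothetical reconstruction assuming a simple response table; the item names, counts and weighting are invented and are not taken from the A4BLUE survey data.

```python
# Minimal, hypothetical sketch of the priority-ranking step: items are ordered by how
# often respondents rated them "essential" or "desirable". Item names, counts and the
# 2:1 weighting are illustrative assumptions, not A4BLUE data.
from collections import Counter

# responses[item] is the list of ratings given by participants for that item
responses = {
    "robot stops immediately on accidental collision": ["essential"] * 42 + ["desirable"] * 6 + ["unnecessary"] * 2,
    "robot speed adapts to operator distance":         ["essential"] * 35 + ["desirable"] * 12 + ["unnecessary"] * 3,
    "robot notifies management of task status":        ["essential"] * 10 + ["desirable"] * 28 + ["unnecessary"] * 12,
}

def priority_score(ratings, essential_weight=2, desirable_weight=1):
    """Combine rating frequencies into a single score (the weights are an assumption)."""
    counts = Counter(ratings)
    return essential_weight * counts["essential"] + desirable_weight * counts["desirable"]

ranked = sorted(responses, key=lambda item: priority_score(responses[item]), reverse=True)
for rank, item in enumerate(ranked, start=1):
    print(rank, item, priority_score(responses[item]))
```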
3.1.2 Results

Across the different design feature categories, participants generally showed support for the development of new digital systems, albeit most of the individual technologies were considered desirable rather than essential. This is to be expected to some extent given that many participants were working in industrial technology companies. However, turning to the specific category of Automation and Robotics design features, which had a total of twenty-one items: ten items were scored as essential, eight as desirable, and only three were ranked as unnecessary. These are listed below in order of priority ranking.

Essential design features
• Systems that immediately stop the robot in the event of an accidental collision.
• Mechanisms that make operators comfortable when collaborating with automation/robots during assembly.
• System capabilities to adapt the speed of the robot according to the distance or speed of the operator.
• Robots that move away from the worker in the event of an accidental collision.
• Robots that work collaboratively and safely with an operator on shared tasks in fenceless environments.
• Automation/robotics that are controllable by operators working in the system.
• Automation/robotics that can adapt safely by themselves to meet the needs of different physical capabilities of operators (e.g. size differences).
• Automation/robot capability to distinguish people from other obstacles and adapt behaviour.
• System ability to make operators aware whether safety mechanisms and devices are functioning effectively.

Desirable design features
• System functions that adapt to suit individual operators' preferred working methods.
• Automation/robotics that change safely to meet varying production demands.
• Systems that change safely to meet the different experience capabilities of operators.
• Automation/robotics that change safely to meet varying environmental conditions (e.g. light and noise levels).
• Systems that adapt safety strategy to suit operator preferences and conditions in the surrounding area.
• Automation/robots that can adapt speed to correspond with an operator's profile (i.e. expertise, skills, capabilities, preferences, trust level).
• Robots that notify management about the completion and the status of the task.
• Robots that can work safely alongside or near to an operator but on separate tasks.
These items were designed to address combined issues of safety and personalisation/flexibility; some are similar, but each addresses a specific aspect. It is of no surprise that the most highly scored item concerns the need for robots to be stopped immediately in the event of an accidental collision, or that other highly scored items deal with requirements for safety-critical functions (a sketch of one such function, distance-based speed adaptation, follows this paragraph). However, it is interesting to note that the second highest scored item concerns operator comfort, and that some other highest-ranking 'Essential' requirements concern adaptation and personalisation to suit worker characteristics and idiosyncrasies.
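To make the requirement to "adapt the speed of the robot according to the distance or speed of the operator" concrete, the sketch below scales a commanded robot speed with the measured human–robot separation. It is a simplified illustration only, loosely inspired by speed-and-separation monitoring concepts; the thresholds and the linear scaling law are assumptions, not values from any standard or from the A4BLUE systems.

```python
# Simplified illustration of distance-based robot speed adaptation (not a certified
# safety function). Thresholds and the linear scaling law are illustrative assumptions.

def adapted_speed(nominal_speed_mps: float, separation_m: float,
                  stop_distance_m: float = 0.5, full_speed_distance_m: float = 2.0) -> float:
    """Scale robot speed between 0 (at or below stop_distance_m) and the nominal value
    (at or above full_speed_distance_m), linearly in between."""
    if separation_m <= stop_distance_m:
        return 0.0
    if separation_m >= full_speed_distance_m:
        return nominal_speed_mps
    fraction = (separation_m - stop_distance_m) / (full_speed_distance_m - stop_distance_m)
    return nominal_speed_mps * fraction

# Example: an operator detected 1.25 m away halves the nominal 0.5 m/s speed.
print(adapted_speed(0.5, 1.25))  # -> 0.25
```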
It is likely that some of these issues will be related to the psychological responses that impact on performance, as discussed, e.g. operator trust and acceptance. Associated system design features may also need to be considered with regard to ethical suitability, such as the acceptability of the personal data acquisition and monitoring that will be needed to create adaptive personalised systems.

This relatively small and simple initial survey gives us an early insight into what should perhaps be considered in future ethics and safety standards for industrial HRC systems. It is reasonable to consider including psychological safety and comfort in addition to technical safety factors, because stakeholders and end-users understand the prospect of greater interaction and are seeking not only measures to enhance safety but also their personalised requirements.
3.2 'High-Level' Requirements Analysis

In addition to gathering user-level requirements, a 'high-level' requirements analysis has also been conducted early in the A4BLUE project, to extract formal requirements that emanate from sources external to stakeholders and users in manufacturing organisations, i.e. from legal, governance and standards frameworks. The aim of this activity was to identify system design requirements but also gaps where current frameworks do not yet cover the technologies, or assemblages of technologies, that are being designed and developed.
3.2.1 Method

For this activity, the method needed to be a systematic document and literature review to inspect the resources that are most relevant to the proposed technologies and features of the A4BLUE systems. Once again this work involved exploring a wide range of technologies and capabilities, not just HRC systems.

The scope of the review covered technical, ethical and human factors/user-centred requirements for (a) general industrial work/machinery safety and (b) the specific technical features and technologies (including automation and robotics). To prioritise the A4BLUE research context, the review also focused on European Union (EU) manufacturing industry requirements. The supreme legal governance of industrial machine safety in EU countries comes from the European Machinery Directive 2006/42/EC, which has 'the dual aim of harmonising the health and safety requirements applicable to machinery on the basis of a high level of protection of health and safety, whilst ensuring the free circulation of machinery on the EU market' [6, p. 1]. A review of EU standards was prioritised as these reflect EU directives; although such standards set out technical specifications rather than direct regulations, and therefore rely on member states' own transfer into national laws, the common standards developed to accord with the directive are harmonised to align with international laws and standards.

Reviews were prioritised according to the relevance of material, which was based on applicability to the design of integrated manufacturing systems across four principal design categories: industrial work and machine safety, automation and robotics standards, ergonomics and human factors, and digital systems. Clauses that were considered most pertinent to the design features of new work systems were selected, under the assumption that the functional characteristics, performance or safety of individual system components will not be changed by their integration in the project and therefore remain in conformity with design standards. However, the focus of this review was on the standards most dedicated to our Automation and Robotics category.
3.2.2 Results

Those responsible for developing and updating laws and standards for robotics face the challenge of keeping pace with ongoing technology advances, including the rapid recent expansion of industrial HRC opportunities. So, on the one hand, standards need to address new possibilities for adapting conventional hazardous, heavy-payload robots into safe HRC systems. On the other hand, they also need to consider the increasing potential for applying smaller, limited-force non-industrial robots, such as healthcare and social robots, in industrial HRC systems. The key standards found most relevant to HRC are now summarised.

A-type standard

The key A-type standard (setting out basic concepts, terminology and principles for design) adopted from the International Organization for Standardization (ISO) is:

EN ISO 12100:2010 Safety of machinery – General principles for design – Risk assessment and risk reduction

This is the single A-type standard in the European Machinery Directive setting out general concepts and fundamental requirements, including a number of risk reduction measures and basic human-system principles.
C-type standard

Beneath the type-A overarching principles is a two-part C-type standard (application-specific standard), also adopted from ISO, with central relevance to robot design and robot integration:

EN ISO 10218-1:2011 Robots and robotic devices – Safety requirements for industrial robots – Part 1: Robots

This first part of the 10218 standard sets out fundamental technical specifications and guidelines for 'safety in the design and construction of the robot' (p. vi). It covers the design of the robot and its protective measures to mitigate basic hazards, but does not cover wider issues concerning implementation and integration with other systems, nor does it apply to robots designed for non-industrial applications. As the A4BLUE project will not be designing new robotics but integrating existing commercially available systems, these standards may not be highly relevant unless integration alters performance or functional safety. The human user is addressed in terms of physical ergonomic hazards (due to lighting and controls) and potential consequences (such as incursion, fatigue and stress).

EN ISO 10218-2:2011 Robots and robotic devices – Safety requirements for industrial robots – Part 2: Robot systems and integration

This second part of 10218 provides a relevant and comprehensive set of requirements for the application and implementation of an industrial robot (as specified in Part 1) and 'the way in which it is installed, programmed, operated, and maintained' (p. v). It is intended to guide integrators on how to lessen or eliminate hazards associated with the robot and its integration (not extraneous hazards). User-centred issues are again limited to technical safety aspects such as physical spatial separation and safeguards to mitigate incursions.
Technical Specification

The standards document that is most directly relevant to HRC is a 2016 ISO Technical Specification (TS), i.e. a document created and published to address matters that are still under technical development or are expected to be addressed in a future international standard, and to generate feedback in preparation for the future full standard. This TS has been devised specifically to address the advancing potential for HRC:

ISO/TS 15066:2016 Robots and robotic devices – Collaborative robots

This TS was developed to serve as interim guidance for HRC, addressing the more recent technology advances and the enablement of closer cooperation and colocation, prior to the development and integration of its clauses into a full standard. The content will be reviewed and incorporated as appropriate into a current revision of ISO 10218. In the meantime, it has been adopted in some individual countries.

British Standard on Robot Ethics

Finally, a new standard created by the British Standards Institution (BSI) was also considered relevant on the basis that it is pioneering the consideration of robot ethics:

BS 8611:2016 Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems

This novel standard is devoted to supplying ethical principles, which are rarely addressed in standards. It reflects a response to the significant rise in robotics applications across society and everyday life. As such, the standard sets out general principles and guidelines which apply to different types of robot application, not just industrial HRC, e.g. industrial, personal care and medical. Nonetheless, this standard is important as it directly addresses requirements for psychological safety and well-being and not just physical/technical safety, considering the interplay between psychological reactions and interactions in human–robot relationships. Additionally, it includes consideration of new or developing functions that are likely to influence HRC design, such as personal/performance data management and security, and robot adaptation to personalised settings and requirements.
The above review of standards is only a very brief snapshot of those most relevant to industrial HRC. It does not cover issues that are currently in standards that are not directly applicable but may become so in the future, when HRC systems comprise more advanced functions such as data security and privacy, but it does indicate the current state of the existing specifications and guidelines used by industry. There is a clear focus on technical and system safety, which is understandable given that the convention has been to segregate robots into wholly technical areas in hybrid manufacturing systems, and it was therefore only necessary to consider human involvement in relation to controls and contraventions. However, it must be recognised that the current tide of increasingly close and interactive HRC is going to require more direct attention to other 'softer' human issues that may impact on system safety and performance. This is where the topic of robot ethics becomes relevant; whereas it has not been a valid consideration in traditional manufacturing processes, it is now the case that safety standards should begin to consider how systems will impact on users both physically and psychologically. The publication of BS 8611 provides a positive and forward-thinking set of guidelines, but its generic approach does not satisfy the needs of new industrial systems, which will entail distinct production and operator requirements.
4 Conclusion

The work described in this paper from the A4BLUE project has explored the key current requirements for industrial HRC design. The landscape will continue to change as the development of new technology and technical specifications proceeds, but this work provides a reflection of the current state. Thus, although the results here are highly limited, they present a snapshot that indicates how well human requirements are currently addressed in the design and integration of collaborative robotic applications in manufacturing environments, and what further knowledge and analysis is needed.

The user-level analysis shows that stakeholders and end-users of HRC systems appreciate that future systems will involve greater interaction and that there is a need not only for safety but for personalised responses. The user requirements survey will be extended through the project in order to gather wider opinions from a more international sample of stakeholders and user groups; this will enable statistical analysis for a more robust set of findings.

The high-level requirements review has demonstrated that, currently, there is a restricted focus on technical system safety, which has been perfectly adequate for a wholly technical system but is now becoming an outdated limitation with increasing levels of HRC in industrial systems. The high-level analysis will also be repeated at a later stage of the project in order to check developments and update the current results.

Together, these two levels of analysis have captured an initial identification of human requirements which sets a foundation for better understanding of what is likely to be needed in forthcoming industrial safety standards. These requirements are being used to inform the design and definition of the project's use case systems in which new HRC systems will be built. Subsequent work in the project will then provide updated and confirmatory analysis to define these requirements more fully.

Robot ethics is becoming an increasingly popular topic of investigation and discussion in society, but it currently has little connection to industrial robotics. The robot ethics community is not showing much concern about industrial applications, whilst in the other direction the industrial automation community is not showing much interest in ethics. Perhaps industrial robotics is considered to be self-contained and detached, industrialists do not yet envisage emerging ethical issues, and the developers of safety standards are not yet able to relate any 'soft' issues to technical safety. By examining the user-centred requirements in current safety standards covering HRC, this work has identified that human psychological requirements are not being addressed, despite the fact that they may have significant effects on safety and performance. Thus, it appears that industrial robot ethics is an issue that needs to be explored and understood as HRC in manufacturing rises, and it presents a candidate for new safety standards.
Acknowledgements The work described in this paper was conducted as part of the A4BLUE research project (www.a4blue.eu) funded by the European Commission's Horizon 2020 programme. The authors would like to thank the EC for that support and the individual partners in the project consortium for their assistance with this work.
References

1. Bogue R (2017) Robots that interact with humans: a review of safety technologies and standards. Ind Robot: Int J 44(4)
2. Charalambous G, Fletcher S, Webb P (2015) Identifying the key organisational human factors for introducing human-robot collaboration in industry: an exploratory study. Int J Adv Manuf Technol 81(9–12):2143–2155
3. Charalambous G, Fletcher S, Webb P (2016) The development of a scale to evaluate trust in industrial human-robot collaboration. Int J Soc Robot 8(2):193–209
4. Chung C (1996) Human issues influencing the successful implementation of advanced manufacturing technology. J Eng Technol Manage 13(3–4):283–299
5. Krüger J, Lien T, Verl A (2009) Cooperation of human and machines in assembly lines. CIRP Ann–Manuf Technol 58:628–646
6. European Commission (2010) Guide to application of the machinery directive 2006/42/EC, 2nd edn. http://ec.europa.eu/enterprise/sectors/mechanical/files/machinery/guideappl-2006-42-ec-2nd-201006_en.pdf (online 15/09/17)
7. Favell A, Feldblum M, Smith M (2007) The human face of global mobility: a research agenda. Society 44(2):15–25
8. Hedelind M, Kock S (2011) Requirements on flexible robot systems for small parts assembly, a case study. In: Proceedings of the international symposium on assembly and manufacturing, 25–27 May, Tampere, Finland
9. International Federation of Robotics (IFR) (2017) The impact of robots on productivity, employment and jobs. A positioning paper by the International Federation of Robotics
10. Lewis M, Boyer K (2002) Factors impacting AMT implementation: an integrative and controlled study. J Eng Technol Manag 19(2):111–130
11. Matthias B, Kock S, Jerregard H, Kallman M, Lundberg I, Mellander R (2011) Safety of collaborative industrial robots: certification possibilities for a collaborative assembly robot concept. In: Proceedings of ISAM'11, pp 1–6
12. McCarthy I (2004) Special issue editorial: the what, why and how of mass customization. Prod Plann Control 15(4):347–351
13. Michalos G, Makris S, Spiliotopoulos J, Misios I, Tsarouchi P, Chryssolouris G (2014) ROBO-PARTNER: seamless human-robot cooperation for intelligent, flexible and safe operations in the assembly factories of the future. Procedia CIRP 23:71–76
14. Pawar V, Law J, Maple C (2016) Manufacturing robotics: the next robotic industrial revolution. Technical report, UK Robotics and Autonomous Systems Network
15. Pitts D, Recascino Wise L (2010) Workforce diversity in the new millennium: prospects for research. Rev Public Pers Adm 30(1):44–69
16. Stedmon A, Howells H, Wilson J, Dianat I (2012) Ergonomics/human factors needs of an ageing workforce in the manufacturing sector. Health Promot Perspect 2(2):112
17. UK-RAS Network (2016) UK-RAS white papers. http://hamlyn.doc.ic.ac.uk/uk-ras/sites/default/files/UK_RAS_wp_manufacturing_web.pdf
18. Unhelkar V, Siu H, Shah J (2014) Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks. In: Proceedings of 2014 ACM/IEEE international conference on human-robot interaction (HRI'14), pp 82–89
19. Walton M, Webb P, Poad M (2011) Applying a concept for robot-human cooperation to aerospace equipping processes
20. Wang X, Kemény Z, Váncza J, Wang L (2017) Human-robot collaborative assembly in cyber-physical production: classification framework and implementation. CIRP Ann-Manuf Technol
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title A Survey on the Pain Threshold and Its Use in Robotics Safety Standards
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Mylaeus
Particle
Given Name A.
Prefix
Suffix
Role
Division
Organization Autonomous Systems Lab, ETH Switzerland
Address Zürich, Switzerland
Email alice@mylaeus.ch
Author Family Name Vempati
Particle
Given Name A.
Prefix
Suffix
Role
Division
Organization Autonomous Systems Lab, ETH Zürich
Address Zürich, Switzerland
Division
Organization Disney Research Zürich
Address Zürich, Switzerland
Email anurag.vempati@mavt.ethz.ch
Author Family Name Tranter
Particle
Given Name B.
Prefix
Suffix
Role
Division
Organization BSI Consumer and Public Interest Unit UK
Address Chiswick High Rd, Chiswick, London, UK
Email btranter@btinternet.com
Author Family Name Siegwart
Particle
Given Name R.
Prefix
Suffix
Role
Division
Organization Autonomous Systems Lab, ETH Switzerland
Address Zürich, Switzerland
Email rsiegwart@ethz.ch
Author Family Name Beardsley
Particle
Given Name P.
Prefix
Suffix
Role
Division
Organization Disney Research Zürich
Address Zürich, Switzerland
Email pab@disneyresearch.com
Abstract Physical contact between humans and robots is becoming more common, for example with personal care
robots, in human–robot collaborative tasks, or with social robots. Traditional safety standards in robotics
have emphasised separation between humans and robots, but physical contact now becomes part of a
robot’s normal function. This motivates new requirements, beyond safety standards that deal with the
avoidance of contact and prevention of physical injury, to handle the situation of expected contact
combined with the avoidance of pain. This paper reviews the physics and characteristics of human–robot
contact, and summarises a set of key references from the pain literature, relevant for the definition of
robotics safety standards.
Keywords
(separated by '-')
Pain - Algometry - Physical human–robot interaction - Pain threshold - ISO TS 15066 - Body model
A Survey on the Pain Threshold and Its Use in Robotics Safety Standards
A. Mylaeus, A. Vempati, B. Tranter, R. Siegwart and P. Beardsley
Abstract Physical contact between humans and robots is becoming more common, for example with personal care robots, in human–robot collaborative tasks, or with social robots. Traditional safety standards in robotics have emphasised separation between humans and robots, but physical contact now becomes part of a robot's normal function. This motivates new requirements, beyond safety standards that deal with the avoidance of contact and prevention of physical injury, to handle the situation of expected contact combined with the avoidance of pain. This paper reviews the physics and characteristics of human–robot contact, and summarises a set of key references from the pain literature, relevant for the definition of robotics safety standards.

Keywords Pain · Algometry · Physical human–robot interaction · Pain threshold · ISO TS 15066 · Body model
A. Mylaeus (B) · R. Siegwart
Autonomous Systems Lab, ETH Zürich, Zürich, Switzerland
e-mail: alice@mylaeus.ch
R. Siegwart
e-mail: rsiegwart@ethz.ch
A. Vempati
Autonomous Systems Lab, ETH Zürich, Zürich, Switzerland
e-mail: anurag.vempati@mavt.ethz.ch
B. Tranter
BSI Consumer and Public Interest Unit UK, Chiswick High Rd, Chiswick, London, UK
e-mail: btranter@btinternet.com
A. Vempati · P. Beardsley
Disney Research Zürich, Zürich, Switzerland
e-mail: pab@disneyresearch.com
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being, Intelligent Systems, Control and Automation: Science and Engineering 95, https://doi.org/10.1007/978-3-030-12524-0_13
1 Introduction

The first robotic safety standards appeared in the 1990s and emphasised the separation of robots and humans in order to avoid injury, as shown in Fig. 1 (left); see [13] for a historical overview. But a new generation of robots is appearing that is capable of physically interacting with humans. For manufacturers, this requires that standards should not only ensure the avoidance of injury, but should additionally encompass the tighter constraint of avoiding pain. This would be expected, for example, by a maintenance operator involved in a repetitive collaborative task, or by a non-expert user interacting with a social robot, as shown in Fig. 1 (right) [26]. In some applications, pain may be unavoidable but may be acceptable to some degree; e.g., a care robot that lifts a patient out of bed might acceptably cause pain of a similar degree to that caused by a human caregiver. All these cases require a quantitative understanding of the pain threshold in order to define standards.

Robotics safety standards that take pain into account can draw on a variety of sources, including:

• The physical human–robot interaction (pHRI) literature; see [14] for an overview including a discussion of safety.
• The medical literature on algometry.
• The broad literature on injury, including established frameworks like the Abbreviated Injury Scale (AIS) [42].

The above offer a rich source of information, but it is challenging to summarise data from areas with different methodologies and technical vocabulary. This motivates the pain survey in this paper. In the remainder, Sect. 2 describes the physics of human–robot contact; Sect. 3 describes broader characteristics that are needed to fully define a contact; and Sect. 4 is a survey of critical references, extracting information that is relevant for defining standards.
Fig. 1 Left: Safety standards for traditional factory automation robots are based on separation of
robots and humans to avoid injury. Right: A new generation of robots incorporates physical contact
in normal operation and requires safety standards based on the avoidance of pain
2 Physics of Human–Robot Contact

This section reviews the physics of physical contact between a human and a robot. An interaction can be characterised as (a) a load or (b) a transfer of kinetic energy. Both approaches can be formulated as an energy transfer, but the literature typically treats them separately.
2.1 Dynamic, Quasi-Static, and Static Loading

Load, or exerted force, is divided into three types. A dynamic load behaves arbitrarily with respect to time and refers to a rapidly varying force, for example in a shaken person who experiences whiplash accelerations. A quasi-static load behaves linearly with respect to time and would be found, for example, in a robot handshake in which the grip force varies relatively slowly. A static load remains constant and would be found, for example, in a vice. The latter is not typical of physical human–robot contact and is not considered further.

The type of load has a significant effect on pain. For example, dynamic loading might exceed the ability of the skin to deform, leading to an inhomogeneous distribution of forces and rupture of soft tissue.

Duration and frequency of contact can both affect the pain threshold. Fischer [11] suggested that a longer load duration leads to a lowering of the pain threshold; however, there is relatively little literature on this topic. Lacourt et al. [28] showed that the pain threshold is lower during a second consecutive load application compared to the first, but that subsequently the pain threshold remains constant.
2.2 Impact and Transferred Energy Density

Pain caused by impact (collision) is analysed in terms of the transferred energy density (energy per unit area) rather than force, utilising a physical model of the impactor and the human [15, 36].
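As a rough illustration of this kind of calculation, the sketch below uses a standard two-body (reduced-mass) collision model to estimate the energy transferred per unit contact area and compares it with a threshold value. The reduced-mass model and the numerical parameters are assumptions for illustration; the 0.09 J/cm² figure is simply the Povse line-contact entry from Table 1 and is not a normative limit.

```python
# Rough illustration: transferred energy density for an unconstrained human-robot impact,
# using a simple two-body (reduced-mass) model. All parameters are illustrative assumptions.

def transferred_energy_density_j_per_cm2(robot_mass_kg: float, body_part_mass_kg: float,
                                          relative_speed_mps: float, contact_area_cm2: float) -> float:
    """Kinetic energy available for soft-tissue deformation, per unit contact area."""
    reduced_mass = (robot_mass_kg * body_part_mass_kg) / (robot_mass_kg + body_part_mass_kg)
    energy_j = 0.5 * reduced_mass * relative_speed_mps ** 2
    return energy_j / contact_area_cm2

# Example: 10 kg effective robot mass, 4 kg effective mass of the impacted body part,
# 0.25 m/s relative speed, 2 cm^2 contact area.
density = transferred_energy_density_j_per_cm2(10.0, 4.0, 0.25, 2.0)
print(round(density, 4), "J/cm^2")   # ~0.0446 J/cm^2
print(density < 0.09)                # compare with the Povse line-contact value in Table 1
```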
3 Other Characteristics of Human–Robot Contact

The previous section described the underlying physics of contact, but additional factors need to be considered to fully characterise a physical human–robot contact and the generation of pain.
3.1 Robotic Characteristics

The geometry and material properties of a robot end-effector influence pain in the following ways:

• Pain threshold is higher given a large contact area [9, 37], because the exerted force is better distributed across osseous and muscular tissue.
• Pain threshold is lower given a sharper end-effector that generates high shear stresses on the soft tissue around the point of contact; e.g., the star-shaped end-effector in [31] generates more shear stress and lowers the pain threshold.
• Pain threshold is higher given a more deformable (softer) material because (a) a deformable end-effector exerts less force on the human subject [32], and (b) a deformable end-effector leads to a more elastic collision, so that there is less energy transfer to the human [15].
3.2 Interaction Constraints
Clamping occurs when the motion of the human is constrained during the human–robot interaction. If an unconstrained body part is acted on by a robot end-effector, part of the kinetic energy of the end-effector is transferred to the kinetic energy of the impacted body part, while the remaining energy goes into soft-tissue deformation. When the body part is constrained and cannot withdraw from or adjust to the contact, the energy is fully transferred to the deformation of the soft tissue, which results in lower pain thresholds [22, 37, 38].
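The effect of clamping can be sketched with the same simplified two-mass model used above: if the body part is free, part of the end-effector's kinetic energy reappears as motion of the body part; if it is clamped, essentially all of it goes into tissue deformation. The comparison below is a back-of-the-envelope illustration under those assumptions, not a validated injury model.

def deformation_energy(robot_mass, robot_speed, body_mass, clamped):
    """Energy (J) attributed to soft-tissue deformation in a simplified impact.

    clamped=True models a body part that cannot move away (all kinetic energy
    is absorbed); clamped=False models a free body part using a perfectly
    inelastic two-mass collision. Illustrative assumptions only.
    """
    kinetic = 0.5 * robot_mass * robot_speed ** 2
    if clamped:
        return kinetic
    common_speed = robot_mass * robot_speed / (robot_mass + body_mass)
    return kinetic - 0.5 * (robot_mass + body_mass) * common_speed ** 2

free = deformation_energy(4.0, 0.5, 2.0, clamped=False)
clamped = deformation_energy(4.0, 0.5, 2.0, clamped=True)
print(f"free: {free:.2f} J, clamped: {clamped:.2f} J")

For these particular (assumed) masses the clamped case absorbs roughly three times the energy of the unconstrained case, which is consistent with the lower pain thresholds reported for clamping.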
3.3 Human Characteristics
The impact location on the human body is of obvious importance when measuring pain, and a minimal differentiation distinguishes extremely sensitive regions (eyeball), sensitive regions (head), and less sensitive regions (body). The literature further shows that pain thresholds vary across the human body, for example osseous tissue is associated with a lower pain threshold than muscular tissue (intrapersonal), and across genders, with women found to have a lower pain threshold than men in every muscle [5, 11] (interpersonal). Suggestions that the pain threshold depends on other factors, such as economic background, have also been made [39].
4 Pain Threshold
Table 1 summarises some critical references in the literature relevant to defining safety standards that consider pain.
Table 1 Summary of critical references in the survey. See key below

Refs. | Load | Probe | Loc | Gnd | Cond | Freq | Unit | PT
Antonaci [10] | QS | Circular | Body | B | Clmp | 1 | kg/cm² | 5.03–11
              |    |          | Head |   |      |   |        | 2.5–4
Cathcart [3] | QS | Circular | Body | B | Clmp | 2 | kg | 3.98–4.28
             |    |          | Head |   |      |   |    | 2.04–3.24
Chesterton [5] | QS | Circular | Body | F | Clmp | 2 | N | 29.5
               |    |          |      | M |      |   |   | 42.3
Fischer [11] | QS | Circular | Body | F | Clmp | 1 | kg | 2.0–3.8
             |    |          |      | M |      |   |    | 2.9–5.6
Lacourt [28] | QS | Circular | Body | F | Clmp | 3 | kPa | 281–454
Melia [31] | QS | Square | Body | B | Clmp | 1 | N | 25–50
           |    |        | Head |   |      |   |   | <25
Mewes [32] | D | SL | Body | B | Clmp | 1 | N | 200–300
           |   | RL |      |   |      |   |   | 650
Ohrbach [33] | QS | Circular | Head | B | Clmp | 5 | kg/cm² | 2.49–4.0
Özcan [34] | QS | Circular | Body | B | Clmp | 3 | lbs | 4.6–4.9
Povse [36] | D | Line  | Body |  | Uncnst | 1 | J/cm² | 0.09
           |   | Plane |      |  |        |   |       | 0.31
There is extensive algometry literature in the medical domain for measuring the pain threshold under a quasi-static load with a circular probe, but its focus is on how pain varies across different locations of the body, while other parameters of the contact are ignored. There is also literature on dynamic loading that investigates more varied parameters of the contact, not just different locations on the body, but it is not extensive.
Some of the measurements in the table could be converted to common units, but the original units have been preserved to allow cross-reference with the source publications.
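For readers who do want a rough comparison, the sketch below converts the non-SI units appearing in Table 1 into SI pressure or force units, assuming that the "kg" and "lbs" entries denote force readings of spring-type algometers (kilogram-force and pound-force). That interpretation, and any probe areas needed to turn force into pressure, are assumptions; consult the source publications before relying on any converted value.

G = 9.80665          # standard gravity, m/s^2
LBF_TO_N = 4.44822   # newtons per pound-force

def kgf_per_cm2_to_kpa(value):
    """Convert a kg/cm^2 reading (kilogram-force per square centimetre) to kPa."""
    return value * G * 1e4 / 1e3  # kgf/cm^2 -> Pa -> kPa

def kgf_to_n(value):
    """Convert a kilogram-force reading to newtons."""
    return value * G

def lbf_to_n(value):
    """Convert a pound-force reading to newtons."""
    return value * LBF_TO_N

# Lower bounds of some Table 1 ranges, converted for a rough comparison only
print(f"5.03 kg/cm^2 is about {kgf_per_cm2_to_kpa(5.03):.0f} kPa")  # Antonaci [10]
print(f"3.98 kg is about {kgf_to_n(3.98):.1f} N")                   # Cathcart [3]
print(f"4.6 lbs is about {lbf_to_n(4.6):.1f} N")                    # Özcan [34]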
Property | Description | Classification
Load | Load type | D = dynamic load; QS = quasi-static load
Probe | Robot end-effector description | RL = rubber line; SL = steel line
Loc | Subject body location |
Gnd | Subject gender | B = both; F = female; M = male
Cond | Conditions of contact | Clmp = clamped; Uncnst = unconstrained
Freq | Frequency of contact (consecutive) |
Unit | Units of measurement |
PT | Pain threshold |
4.1 Further Papers
The following papers are valuable for understanding the broader context of injury, pain, and their avoidance:
• Physical human–robot interaction: [4, 8]
• Special cases of physical human–robot interaction: [19, 23, 40]
• Frameworks for characterising and predicting injury: the Abbreviated Injury Scale [42], the Head Injury Criterion [43], and the Viscous Criterion [30]
• Injury-related criteria in physical human–robot interaction: [1, 2, 12, 16, 17, 20, 21, 30, 41]
• Safety mechanisms in physical human–robot interaction: [6, 7, 29, 35]
• Pain tolerance (as opposed to pain threshold): [44]
• Algometry and pain: [24, 25, 27]
• Standards: [18, 22]
5 Conclusion
Physical human–robot contact is about to become more common in everyday life for both non-expert and expert users. While existing standards have focussed on the avoidance of injury, there is a need for new standards which take account of pain. Because the literature on pain spans different domains and methodologies, the goal of this paper has been to (a) describe the scientific framework for quantifying pain, and (b) list some critical references, plus associated measurements, that are relevant in defining safety standards.
Acknowledgements We thank Prof. Yoji Yamada and the members of ISO TC 199/WG 12 for motivating discussions for the survey in this paper.
References
1. Bicchi A, Bavaro M, Boccadamo G, De Carli D, Filippini R, Grioli G, Piccigallo M, Rosi A, Schiavi R, Sen S, et al (2008a) Physical human-robot interaction: dependability, safety, and performance. In: 10th IEEE international workshop on advanced motion control, 2008. AMC'08. IEEE, pp 9–14
2. Bicchi A, Peshkin MA, Colgate JE (2008b) Safety for physical human–robot interaction. In: Handbook of robotics. Springer, pp 1335–1348
3. Cathcart S, Pritchard D (2006) Reliability of pain threshold measurement in young adults. J Headache Pain 7(1):21–26
4. Cherubini A, Passama R, Crosnier A, Lasnier A, Fraisse P (2016) Collaborative manufacturing with physical human-robot interaction. Robot Comput-Integr Manuf 40:1–13
5. Chesterton LS, Barlas P, Foster NE, Baxter GD, Wright CC (2003) Gender differences in pressure pain threshold in healthy humans. Pain 101(3):259–266
6. De Luca A, Flacco F (2012) Integrated control for pHRI: collision avoidance, detection, reaction and collaboration. In: 2012 4th IEEE RAS & EMBS international conference on biomedical robotics and biomechatronics (BioRob). IEEE, pp 288–295
7. De Santis A, Siciliano B (2007) Reactive collision avoidance for safer human–robot interaction. In: 5th IARP/IEEE RAS/EURON workshop on technical challenges for dependable robots in human environments
8. De Santis A, Siciliano B, De Luca A, Bicchi A (2008) An atlas of physical human-robot interaction. Mech Mach Theory 43(3):253–270
9. Defrin R, Ronat A, Ravid A, Peretz C (2003) Spatial summation of pressure pain: effect of body region. Pain 106(3):471–480
10. Antonaci F (1998) Pressure algometry in healthy subjects: inter-examiner variability. Scand J Rehab Med 30(3):8
11. Fischer AA (1987) Pressure algometry over normal muscles. Standard values, validity and reproducibility of pressure threshold. Pain 30(1):115–126
12. Fraichard T (2007) A short paper about motion safety. In: 2007 IEEE international conference on robotics and automation. IEEE, pp 1140–1145
13. Fryman J, Matthias B (2012) Safety of industrial robots: from conventional to collaborative applications. In: 7th German conference on robotics ROBOTIK 2012, pp 1–5
14. Haddadin S, Croft E (2016) Physical human-robot interaction. Springer, Cham, pp 1835–1874
15. Haddadin S, Albu-Schäffer A, Hirzinger G (2007) Safe physical human-robot interaction: measurements, analysis and new insights, vol 66, pp 395–407. ISRR, Springer
16. Haddadin S, Albu-Schäffer A, Hirzinger G (2010) Safety analysis for a human-friendly manipulator. Int J Soc Robot 2(3):235–252
17. Haddadin S, Haddadin S, Khoury A, Rokahr T, Parusel S, Burgkart R, Bicchi A, Albu-Schäffer A (2012) A truly safely moving robot has to know what injury it may cause. In: 2012 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, pp 5406–5413
18. Harper C, Virk G (2010) Towards the development of international safety standards for human robot interaction. Int J Soc Robot 2(3):229–234
19. Hayes SC, Bissett RT, Korn Z, Zettle RD et al (1999) The impact of acceptance versus control rationales on pain tolerance. Psychol Rec 49(1):33
20. Heinzmann J, Zelinsky A (2003) Quantitative safety guarantees for physical human-robot interaction. Int J Robot Res 22(7–8):479–504
21. Ikuta K, Ishii H, Nokata M (2003) Safety evaluation method of design and control for human-care robots. Int J Robot Res 22(5):281–297
22. ISO (2011) TS 15066:2011: Robots and robotic devices – collaborative robots. Technical report, International Organization for Standardization
23. Kargov A, Pylatiuk C, Martin J, Schulz S, Döderlein L (2004) A comparison of the grip force distribution in natural hands and in prosthetic hands. Disabil Rehab 26(12):705–711
24. Keele K (1954) Pain-sensitivity tests: the pressure algometer. Lancet 263(6813):636–639
25. Kinser AM, Sands WA, Stone MH (2009) Reliability and validity of a pressure algometer. J Strength Conditioning Res 23(1):312–314
26. Knoop E, Baecher M, Wall V, Deimel R, Brock O, Beardsley P (2017) Handshakiness: benchmarking for human-robot hand interactions. In: 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS)
27. Krüger J, Lien TK, Verl A (2009) Cooperation of human and machines in assembly lines. CIRP Ann-Manuf Technol 58(2):628–646
28. Lacourt TE, Houtveen JH, van Doornen LJP (2017) Experimental pressure-pain assessments: test–retest reliability, convergence and dimensionality. Scand J Pain 3(1):31–37
29. Laffranchi M, Tsagarakis NG, Caldwell DG (2009) Safe human robot interaction via energy regulation control. In: IEEE/RSJ international conference on intelligent robots and systems, 2009. IROS 2009. IEEE, pp 35–41
30. Lau IV, Viano DC (1986) The viscous criterion – bases and applications of an injury severity index for soft tissues. Technical report, SAE Technical Paper
31. Melia M, Schmidt M, Geissler B, König J, Krahn U, Ottersbach HJ, Letzel S, Muttray A (2015) Measuring mechanical pain: the refinement and standardization of pressure pain threshold measurements. Behav Res Methods 47(1):216–227
32. Mewes D, Mauser F (2003) Safeguarding crushing points by limitation of forces. Int J Occup Safety Ergonomics 9(2):177–191
33. Ohrbach R, Gale EN (1989) Pressure pain thresholds in normal muscles: reliability, measurement effects, and topographic differences. Pain 37(3):257–263
34. Özcan A, Tulum Z, Pınar L, Başkurt F (2004) Comparison of pressure pain threshold, grip strength, dexterity and touch pressure of dominant and non-dominant hands within and between right- and left-handed subjects. J Korean Med Sci 19(6):874–878
35. Park JJ, Haddadin S, Song JB, Albu-Schäffer A (2011) Designing optimally safe robot surface properties for minimizing the stress characteristics of human-robot collisions. In: 2011 IEEE international conference on robotics and automation (ICRA). IEEE, pp 5413–5420
36. Povse B, Koritnik D, Bajd T, Munih M (2010) Correlation between impact-energy density and pain intensity during robot-man collision. In: 2010 3rd IEEE RAS and EMBS international conference on biomedical robotics and biomechatronics (BioRob). IEEE, pp 179–183
37. Povse B, Haddadin S, Belder R, Koritnik D, Bajd T (2016) A tool for the evaluation of human lower arm injury: approach, experimental validation and application to safe robotics. Robotica 34(11):2499–2515
38. Radi A (2013) Human injury model for small unmanned aircraft impacts. Technical report, Civil Aviation Safety Authority, Australia
39. Teo K, Chow CK, Vaz M, Rangarajan S, Yusuf S et al (2009) The Prospective Urban Rural Epidemiology (PURE) study: examining the impact of societal influences on chronic noncommunicable diseases in low-, middle-, and high-income countries. Am Heart J 158(1):1–7
40. Wang Z, Peer A, Buss M (2009) An HMM approach to realistic haptic human-robot interaction. In: EuroHaptics conference, 2009 and symposium on haptic interfaces for virtual environment and teleoperator systems. World Haptics 2009. Third Joint, IEEE, pp 374–379
41. Weng YH, Chen CH, Sun CT (2009) Toward the human-robot co-existence society: on safety intelligence for next generation robots. Int J Soc Robot 1(4):267–282
42. Wikipedia (2017a) Abbreviated injury scale. https://en.wikipedia.org/wiki/Abbreviated_Injury_Scale
43. Wikipedia (2017b) Head injury criterion. https://en.wikipedia.org/wiki/Head_injury_criterion
44. Yamada Y, Hirasawa Y, Huang SY, Umetani Y (1996) Fail-safe human/robot contact in the safety space. In: 5th IEEE international workshop on robot and human communication, 1996, pp 59–64. https://doi.org/10.1109/ROMAN.1996.568748
Metadata of the chapter that will be visualized in
SpringerLink
Book Title Robotics and Well-Being
Series Title
Chapter Title Lisbon Robotics Cluster: Vision and Goals
Copyright Year 2020
Copyright HolderName Springer Nature Switzerland AG
Corresponding Author Family Name Lima
Particle
Given Name Pedro U.
Prefix
Suffix
Role
Division
Organization Institute for Systems and Robotics, Instituto Superior Técnico, University of
Lisbon
Address Lisbon, Portugal
Email pedro.lima@tecnico.ulisboa.pt
Author Family Name Martins
Particle
Given Name André
Prefix
Suffix
Role
Division Economy and Innovation Department
Organization Lisbon City Council
Address Lisbon, Portugal
Email andre.martins@cm-lisboa.pt
Author Family Name Aníbal
Particle
Given Name Ana S.
Prefix
Suffix
Role
Division Economy and Innovation Department
Organization Lisbon City Council
Address Lisbon, Portugal
Email ana.s.anibal@cm-lisboa.pt
Author Family Name Carvalho
Particle
Given Name Paulo S.
Prefix
Suffix
Role
Division Economy and Innovation Department
Organization Lisbon City Council
Address Lisbon, Portugal
Email paulo.carvalho@cm-lisboa.pt
Abstract The Lisbon Robotics Cluster (LRC) is an initiative of the Lisbon City Council to federate and present
under a common brand companies producing robot systems, end-users (namely public institutions),
existing research centres from several higher education institutions in the Lisbon area and high schools. In
addition to the new brand, the LRC will be the starting point for the formal establishment of a network of
strategic partners, including the creation of an incubator of companies, a structure of support and
dynamisation of the robotics cluster in the municipality, a living lab and a network of hot spots throughout
the city—spaces for testing and experimentation of robotics equipment and products, e.g., marine robots,
drones and aerial robotics, and mobility equipment, developed by research centres and companies—open
to professionals and in some cases to the general public. The LRC intends to leverage the research,
development and innovation in the Lisbon area, through attraction of funding for projects and the
identification of problems of interest for the municipality for which solution robot systems can be
beneficial.
Keywords
(separated by '-')
Robotics cluster - Living lab - Open innovation
Lisbon Robotics Cluster: Vision and Goals
Pedro U. Lima, André Martins, Ana S. Aníbal and Paulo S. Carvalho
Abstract The Lisbon Robotics Cluster (LRC) is an initiative of the Lisbon City Council to federate and present under a common brand companies producing robot systems, end-users (namely public institutions), existing research centres from several higher education institutions in the Lisbon area and high schools. In addition to the new brand, the LRC will be the starting point for the formal establishment of a network of strategic partners, including the creation of an incubator of companies, a structure of support and dynamisation of the robotics cluster in the municipality, a living lab and a network of hot spots throughout the city—spaces for testing and experimentation of robotics equipment and products, e.g., marine robots, drones and aerial robotics, and mobility equipment, developed by research centres and companies—open to professionals and in some cases to the general public. The LRC intends to leverage the research, development and innovation in the Lisbon area, through attraction of funding for projects and the identification of problems of interest for the municipality for which solution robot systems can be beneficial.

Keywords Robotics cluster · Living lab · Open innovation

1 http://www.lisboarobotics.com/en.
P. U. Lima (B)
Institute for Systems and Robotics, Instituto Superior Técnico, University of Lisbon, Lisbon,
Portugal
e-mail: pedro.lima@tecnico.ulisboa.pt
A. Martins · A. S. Aníbal · P. S. Carvalho
Economy and Innovation Department, Lisbon City Council, Lisbon, Portugal
e-mail: andre.martins@cm-lisboa.pt
A. S. Aníbal
e-mail: ana.s.anibal@cm-lisboa.pt
P. S. Carvalho
e-mail: paulo.carvalho@cm-lisboa.pt
© Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3- 030-12524-0_14
1 Why Lisbon Robotics?
Lisbon is nowadays an international reference in the promotion of entrepreneurship and in the creation and development of start-ups. The concentration of talent around a cluster confers external visibility, leading to the attraction of more stakeholders at an international level. The Lisbon Robotics Cluster (LRC)1 aims to create a cluster on Robotics led by a central set of founding entities, among them the Lisbon City Council (CML), the Institute for Systems and Robotics of the Instituto Superior Técnico (ISR/IST), the Lispolis Technological Park and the Portuguese Robotics Society (SPR), capable of federating a set of stakeholders and allowing the development of an activity plan and the execution of concrete projects in this area, including applying to competitive funding as a consortium. This will be a decisive step towards putting Robotics on the city's strategic agenda, fostering research and development (R & D), innovation and technology transfer through collaboration between higher education institutions, research units, high schools, robot systems producer companies and end-users, including entities addressing issues relevant to the region.
Robotics is a sector where many activities and initiatives have been developed at the international level in the last decade, with projects being funded by renowned private and public institutions. The global growth of the sector has led to an increase in R & D and innovation expenditure, and such investment has become one of the strategic priorities of the main international institutions, such as the European Commission, the European Space Agency and the American federal government, namely in intelligent robots incorporating Artificial Intelligence [1, 3]. In Portugal, SPR wrote a White Paper on Robotics in 2011 [2], crossing installed capabilities, funding directions and target applications in Portugal, and identifying the market requirements as well as the goal of enhancing the added value of knowledge and skill learning in the country. These strategic concerns are some of the reference points that have triggered the LRC project.
Lisbon has an ecosystem suitable to host such an initiative. The city:
• has strong R & D centres and higher education in Robotics;
• hosts start-ups and companies of international size with Robotics-related business;
• is an urban environment prone to the experimentation of new technologies, often indispensable to its regular operation and modernisation;
• provides spaces suitable for the intervention of several robot application segments, namely the proximity of the sea and a vast hydrographic basin, neighbourhoods with an aged population, and old buildings subject to high seismic risk.
The LRC is not a new idea at the international level.2,3,4,5 The LRC considers internationalisation of utmost importance and is starting to create partnerships with other, already established robotics clusters, to exchange experiences and resources
among the several partners involved (companies, universities, R & D centres, etc.). Joint participation in international R & D projects and funding programmes is another possibility brought by this cooperation.
2 Odense Robotics (Denmark), http://www.odenserobotics.dk.
3 RoboValley (Holland), http://www.robovalley.com.
4 RoboCity2030 (Spain), http://www.robocity2030.org.
5 Robotdalen (Sweden), http://www.robotdalen.se.
The idea for a Robotics cluster in Lisbon was born during the preparation of the RoCKIn2015 robotics competition event held in November 2015 in the Parque das Nações, Lisbon. RoCKIn, "Robot Competitions Kick Innovation in Cognitive Systems and Robotics", was a Coordination Action funded within the seventh European framework programme for research and innovation (FP7), coordinated by ISR/IST. During the event, a workshop was organised in collaboration with the Economy and Innovation Department of the Lisbon City Council (DMEI/CML), with the purpose of starting the discussion about the constitution of a Robotics cluster in Lisbon. This event was attended by start-ups and consolidated national Robotics companies, as well as by representatives from institutions linked to this sector (research units, training centres, the New University of Lisbon, the Faculty of Sciences of the University of Lisbon, ISCTE and Instituto Superior Técnico). Following this workshop, DMEI/CML, in partnership with ISR/IST, organised two working sessions with universities and companies in the sector, with the aim of broadening the discussion and defining some starting lines for the initiative. The meetings took place at the Lispolis Technological Park, which has meanwhile become an important associate of the initiative. The Lisbon Robotics Cluster was officially started on 24 February 2017.
The LRC comprises several components, depicted in Fig. 1, and this paper is organised around them. The components are spread throughout the city of Lisbon: a central hub offering services, experimentation space and other infrastructures (Sect. 2), indoor and outdoor spaces for testing and experimentation of robotics equipment and products, designated as hot spots (Sect. 3), a living lab (Sect. 4) and
Fig. 1 Diagram depicting the Lisbon robotics cluster components described in this paper
a think tank (Sect. 5), where legal, societal and ethical issues will be discussed in regular interplay with the actual development of the living lab. Section 6 provides some hints about the expected future developments of the LRC.
2 Lisbon Open Innovation Lab
The Lisbon Open Innovation Lab (iLAB) intends to act as one of the anchor projects of the LRC and of the Lisbon Open Innovation strategy. Currently under development, the iLAB will provide space and services to support and streamline the development of hardware-centric technologies, products and businesses (in particular Robotics and the Internet of Things), bringing together a community of engineers and designers, researchers and entrepreneurs, and providing them with a quality stamp and an infrastructure to benchmark their products. One of the objectives of the iLAB is to provide access to various equipment, testing, and experimentation spaces for companies, entrepreneurs, and researchers, supporting them in developing ideas, building prototypes, and attracting customers and investors. It is also expected that the iLAB will be able to accommodate potential new companies of those sectors, assisting them at various levels, from remote advice to a workshop area for prototype development. The iLAB will also function as a centre for networking, a privileged space for meetings and debates, and a connection point between all partners/adherents and stakeholders directly or indirectly linked to Robotics.
The available indoor area, of approximately 400 m², will be located in the Lispolis Technological Park (see Fig. 2), open to stakeholders willing to access and use its services and laboratories, and based on a modular design which allows its reconfiguration when necessary. The iLab will be deployed around four laboratories:
• the Robotics/Hardware/Prototyping Laboratory: a space of interaction and creativity that provides a set of modelling, pre-fabrication, and test technologies, and access to high-end machines and equipment, usually inaccessible to small businesses;
• the Robotics Laboratory: a test bed for robotic applications in home and other indoor environments;
• the User Experience Lab: devoted to the study of perceptual and cognitive aspects of interaction, user experience, usability, interaction design, and interfaces, as well as design for ageing and emotional design;
• the Urban Data Lab, with a connection to the city Integrated Operational Centre (IOC), enabling the processing of big data and the development, testing and experimentation of applications fed by Open Data from the city.
The laboratories will be subdivided into independent areas according to their function. To test and benchmark new products, the iLAB will offer an open space for robots and other highly modular equipment, and experimentation rooms hosting controlled and realistic test environments (e.g., an apartment, a supermarket corridor),
with variables (e.g., light, temperature, sound) that can be changed and monitored from a control room with audio and video access.
Additionally, the iLAB will provide a set of services tailored to start-ups and entrepreneurs, including a start-up acceleration programme, support for business analysis, validation and guidance, management of milestones, business plans, market and competitor analysis, as well as legal and financial advice, meeting rooms and information technology infrastructure. The venue will also be used to organise Robotics dissemination talks open to citizens. The iLAB further offers a kitchen/cafe area, suitable for hosting business lunches, and an area dedicated to administrative and community support services.
A web portal will provide a business community, an agenda of relevant events, and the management of meeting rooms and laboratory spaces.
3 Hot Spots
The LRC hot spots are specific sites that form a network of experimental spaces in (indoor and outdoor) controlled environments throughout the city of Lisbon, used to test technologies usually confined to laboratories. The purpose of the hot spots is to provide access to the best conditions for testing equipment or products (e.g., drones, search and rescue robots), both in terms of the variety of scenarios and the working
Fig. 2 View of LISPOLIS headquarters, where the iLab will be located
conditions of the sites. A network of these experimental areas is already working throughout the city of Lisbon, all with different characteristics, offering a wide range of options.
3.1 Carnide’s Landfill139
Carnide’s landfill is an outdoor space—a large area at the top of a hill covering140
a sanitary landfill (see Fig. 3), situated in Carnide (one of Lisbon administrative141
regions), consisting of a terrain with ups and downs and spontaneous vegetation of142
variable height, depending on the time of year. It is particularly suited to test search143
and rescue and agriculture robots, as well as to test abilities of unmanned ground144
robots (UGVs) locomotion, GPS-based navigation, and communications capabilities.145
3.2 Cabeço das Rolas Water Reservoir
The Cabeço das Rolas water reservoir is a square outdoor space with a 50-metre side, situated in the Parque das Nações area (see Fig. 4). Its depth can reach 5 metres. The transparency of its water is one of the main features that make it particularly adequate for testing autonomous underwater vehicles (AUVs), providing an interesting environment for underwater acoustic positioning and communication, as
Fig. 3 Aerial view of Carnide’s landfill hot spot
Fig. 4 Cabeço das Rolas water reservoir hot spot
well as for heterogeneous cooperative robot systems, including autonomous surface vehicles (ASVs) and unmanned aerial vehicles (UAVs).
3.3 First Responders Lisbon Fire Department
The first responders Lisbon Fire Department is a place where rescue personnel and dogs are trained in realistic disaster scenarios, with a focus on the use of technology. The hot spot has the following main areas:
• Wreckage Area: outdoor area, with approximately 500 m², composed of piles of rubble from buildings and underground sewer pipes, simulating an urban scenario after an earthquake (see Fig. 5).
• Training Building: 21 m high building with 7 floors, 80 m² per floor, stairs and dark areas, for indoor rescue operations.
• Live Fire Training Area: outdoor area with approximately 500 m², including three maritime containers defining a path for confined/closed-space simulation of live fire and smoke.
• Toxic Environments and Zero Visibility Facilities: facilities equipped with a modular structure that allows changing the training paths, with an area of approximately 400 m².
This is an ideal place to test search and rescue ground and aerial robots (UGVs and UAVs) in indoor and outdoor scenarios, together with professionals, as well as capabilities such as locomotion in difficult terrain, adjustable autonomy and heterogeneous cooperative systems, to name but a few.
Fig. 5 ISR/IST and IDMind RAPOSA search and rescue robot operating in the wreckage area of
the Lisbon Fire Department hot spot
In the past, a search and rescue robot was developed by ISR/IST and the Portuguese SME IDMind, in partnership with the first responders. The robot was widely tested in this hot spot.
3.4 ISR/IST Robotics, Brain and Cognition Laboratory
The Robotics, Brain and Cognition Laboratory (RBCog-Lab)6 is a national research infrastructure in the area of cognitive robotics. The core of the infrastructure consists of robotic platforms (hardware, software, and support crew), which include one iCub robot. The ISR/IST host group is one of the developers of the original platform and hosts the only such robot existing in Portugal. The RBCog-Lab includes additional robots (e.g. Vizzy, a wheeled robot with iCub-compatible software interfaces) and equipment such as a motion capture system and gaze trackers.
It is an indoor hot spot particularly suited for cognitive robotics and human–robot interaction.
6 http://vislab.isr.ist.utl.pt/rbcog-lab/.
3.5 ISR/IST ISRoboNet@Home Testbed
The ISRoboNet@Home test bed7 is an infrastructure that has been under continuous development and updates since 2006, when, under the European FP6 URUS project, a networked robot system composed of 10 IP cameras (cable-networked among them), wirelessly networked with several mobile robots at ISR/IST facilities, was deployed. Currently, the infrastructure has grown to comprise a camera network ready to plug in more IP cameras from locations on the three floors of ISR/IST, and a domestic robot test bed designed and installed during the EU FP7 RoCKIn project in 2014, which comprises real furniture and objects of many different kinds and also includes several home automation devices (lamps, motorised blinds, entrance door bell, and videocam). ISRoboNet@Home's main purpose is to benchmark domestic robot functionalities and to develop novel approaches to domestic service robots consisting of a networked robot system that interacts with humans indoors. For the purpose of benchmarking, it uses an OptiTrack™ Motion Capture System which tracks markers on robots, objects or people with sub-millimetric accuracy. A replica of the test bed (Fig. 6) will be installed in the LRC iLab.
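As an illustration of how such motion-capture ground truth can be used for benchmarking, the sketch below computes the translational error between a robot's estimated positions and time-aligned motion-capture measurements. The data format, the toy trajectories and the RMSE metric are assumptions for illustration only; the actual benchmarking procedures of the test bed are defined elsewhere.

import numpy as np

def localisation_rmse(estimated_xy, ground_truth_xy):
    """Root-mean-square translational error (m) between time-aligned 2-D
    robot pose estimates and motion-capture ground truth."""
    est = np.asarray(estimated_xy, dtype=float)
    gt = np.asarray(ground_truth_xy, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)   # per-sample Euclidean error
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy, time-aligned trajectories (metres); real logs would come from the
# robot's localisation module and the motion-capture system respectively.
ground_truth = [(0.00, 0.00), (0.50, 0.02), (1.00, 0.05), (1.50, 0.04)]
estimated    = [(0.02, 0.01), (0.47, 0.05), (1.03, 0.02), (1.49, 0.07)]
print(f"RMSE: {localisation_rmse(estimated, ground_truth):.3f} m")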
Fig. 6 Partial view of the ISRoboNet@Home ISR/IST test bed and LRC hot spot
7 http://welcome.isr.tecnico.ulisboa.pt/isrobonet/.
ISRoboNet@Home is a member of a European network of test beds certified for competitions of the European Robotics League Service Robots Challenge.
4 Living Lab
The concept of ambient assisted living has been developed in recent years around solutions for intelligent homes, such as controlling light intensity depending on human activity, remote operation of appliances, or remote surveillance. However, solutions involving mobile robots, which can bring the added value of, e.g., transporting objects in the home or allowing surveillance of less accessible places of the home with fewer vision cameras, are not yet widely deployed, though they have potential for innovative business opportunities.
The LRC Living Lab wants to go one step further, beyond domestic indoor environments, and extend the concept to the surrounding urban spaces. This means outdoor mobile robots which can go out on the streets, take out the garbage, help elderly people get around, or even go shopping in the neighbourhood grocery store. The development of this type of robot system is not only innovative in itself; testing it in real urban spaces, under effective technology requirements that improve the quality of life of the inhabitants, also makes it very attractive in terms of its potential for research, innovation and the market.
Though the Living Lab is not yet in place, several scenarios have been proposed and are being studied, e.g., the hall of one of the Lisbon City Council buildings, where robots can assist visitors and easily move outside the building for other activities; or a quarter mostly inhabited by elderly people, with little car traffic, near one major high school and one major university campus, where outdoor robots for mobility assistance and UAVs for parcel delivery could be tested, travelling between the campus and the quarter.
5 Think Tank
The Think Tank is an organisational structure within the Lisbon Robotics Cluster whose mission is to produce knowledge about robotics, the development of the sector, and its integration and interaction with society. It is the place in the cluster dedicated to the analysis and discussion of issues, not only of a technical nature but also legal, societal and ethical, which necessarily arise with the evolution of robot systems. The members of the Think Tank have the mission of debating ethical and legal requirements for the activities to be realised in the Living Lab. Instead of discussing these issues in the abstract, the Think Tank grounds its work on specific problems arising from the Living Lab design and recommends measures which are tested in the Living Lab. In turn, the results of deploying such activities are fed back to the Think Tank for analysis and possible revision of the recommendations.
6 Conclusions and Future Developments
This paper has introduced the Lisbon Robotics Cluster, an initiative of the Lisbon City Council which aims to federate the city's stakeholders working on Robotics or related fields, coming from academia, industrial developers, end-users and educational institutions. The LRC is building up a network of hot spots, a Living Lab, and a facility which includes spaces for robotics, city open data processing, design, and prototyping, intended to serve the city ecosystem working on, or interested in, Robotics.
Developing robots that operate in urban spaces raises several issues beyond the technological challenges: legal aspects must be considered (e.g., UAVs cannot fly anywhere without authorisation), social awareness is often required for some of the developed robot systems (e.g., an assistive robot must behave in a way that makes it accepted by the humans it interacts with), and ethical issues are pervasive for most applications of autonomous machines which operate within human spaces and interact with humans. The LRC has a Think Tank composed of people coming from the social sciences and technological areas, together with lawyers, that will regularly analyse the developments of the LRC Living Lab to identify issues like those mentioned above and propose measures to address them.
Future developments of the LRC will consist of finishing the design and starting the operation of the iLab, and of starting the Living Lab.
Acknowledgements This work was supported by the FCT project PEst-OE/EEI/LA0009/2013.
References
1. Kalil T, Kota S (2011) Developing the Next Generation of Robots. Retrieved from https://obamawhitehouse.archives.gov/blog/2011/06/24/developing-next-generation-robots. Accessed on Mar 2018
2. Lima P, Almeida N, Moreira A, Bicho E, Pereira F, Ribeiro F, Pires J, Sousa J, Dias J, Almeida L, Lopes L (2011) White Paper on Robotics (in Portuguese). Retrieved from http://www.spr.ua.pt/site/images/stories/RnM/spr-rnm-dez2011.pdf. Accessed on Mar 2018
3. Viola R (2017) Retrieved from https://ec.europa.eu/digital-single-market/en/blog/future-robotics-and-artificial-intelligence-europe. Accessed on Mar 2018
Index
A
Algometry, 150
Autonomous systems, 15
Autonomous vehicle, 68, 113

B
Better Life Index, 5
Blame, 112

C
Cees van Dam, 68
Centre of optic flow, 60
Civil law, 68
Cluster, 158
Cognitive architecture, 57
Collective ethical consciousness, 7
Collision, 151
Common law, 68
Context sensitivity, 62
Cue-based steering, 58
Curve negotiation, 56

D
Dimensionless parameters, 62
Distorted visual space, 56
Driverless cars, 57
Drivers' performance, 56
Dynamic gaze patterns, 61
Dynamic load, 151
Dynamic systems, 21

E
Ecological approach, 63
Emotional, 112
End-effector, 152
Energy density, 151
Ethics, 11–15
European law, 69
European Machinery Directive, 142

G
Gaze distribution of drivers, 60
Grip force, 151

H
Hans Kelsen, 72
Heading direction, 60
Head Injury Criterion (HIC), 154
Horizon ratio, 57
Hotspots, 159
Human operators, 137
Human-robot collaboration, 136

I
Individual differences, 56
Industrial automation, 145
Industrial robot, 143
Intelligent systems, 12–14, 16

L
Legal personality, 71
Lethal weapons, 113
Liability, 68
Life, 116
Life-world, 63
Limits of GDP, 3
Lisbon, 158
Lyapunov function, 22
©Springer Nature Switzerland AG 2020
M. I. Aldinhas Ferreira et al. (eds.), Robotics and Well-Being,
Intelligent Systems, Control and Automation: Science and Engineering 95,
https://doi.org/10.1007/978-3-030-12524-0
M
Manufacturing, 136
Margin of safety, 64
Measuring well-being and progress, 4
Modular mind, 57
MOnarCH project, 18
Moral agency, 128
Moral agents, 128
Moral machines, 113

O
Optic flow, 57

P
Pain threshold, 150
Perceptual invariants, 57
Peripheral and foveal information, 62
Personalisation, 141
Physical variables, 61
p-numbers, 62

Q
Quasi-static contact, 151

R
Rational agent model, 56
Representations, 56
Risk, 114
Robust automatic control, 64
Robust information, 58

S
Safety standards, 136
Semi-continuity, 22
Social, 112
Socially accepted, 18
Social order, 18
Stability, 18
Standards, 11, 13, 14, 16
Static load, 151
Steering control, 58

T
Tangent point tracking, 59
τ-strategy, 61
Technological development, 2
Test beds, 166
Think tank, 160
Tort law, 70
Trust, 130
Two-point model of steering, 59

U
User-centred design, 139

V
Visual cues, 58

W
Well-being, 5
Workforce, 137