AI and Ethics
https://doi.org/10.1007/s43681-021-00128-2
COMMENTARY
Safety criticism and ethical dilemma of autonomous vehicles
Wei Li¹ · Yi Huang¹ · Shichao Wang¹ · Xuecai Xu²
Received: 29 October 2021 / Accepted: 12 December 2021
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021
Abstract
With the rapid commercialization of autonomous vehicles (AVs), opportunities and challenges co-exist. In spring 2018, the first pedestrian fatality caused by an Uber driverless vehicle drew global attention, and two critical issues emerged from it: safety criticism and ethical dilemma. In this work, the safety quarrel and safety liability are investigated, and the ethical dilemma that AVs confront in unavoidable collisions is discussed. Two reasons for the formation of the ethical dilemma are presented, followed by a discussion of how AVs, unlike human drivers, make ethical selections removed from the stress of real collision scenes. To address these issues, countermeasures are put forward from the theories of deontology and utilitarianism and from machine learning approaches. Further discussion provides three possible solutions to safety liability and ethical dilemma, respectively.
Keywords Autonomous vehicles · Safety criticism · Safety liability · Ethical dilemma · Ethical rules
1 Introduction
In recent years, autonomous vehicle (AV) technology has been a focal point of the artificial intelligence field and the whole automotive industry, driven by billions in business opportunities and market potential [1]. Currently, a great number of countries, regions, international automakers, and new Internet companies are exploring and investing in the AV industry and its technologies [6]. Indeed, the utilization of AVs promises a series of advantages, e.g., alleviating traffic congestion, reducing traffic crashes and fatalities [16, 20], and mitigating the environmental burden. However, in spring 2018, the first pedestrian fatality caused by an Uber driverless vehicle in Tempe, Arizona, attracted global attention, and doubts about whether driverless cars are safe, and who is responsible for their faults, arose again.
This event highlights two critical issues: safety criticism and ethical dilemma. When traffic crashes or fatalities occur, it is necessary to investigate how to determine injury severity levels and who is responsible for the crash: the manufacturer, the owner, or the governmental administration department. Nowadays, AV R&D and on-site testing are growing vigorously all over the world, and commercialization is expected in the near future. Faced with this pressing situation, it is of significance to tackle the safety criticism and ethical dilemma of AVs and to propose new thoughts and solutions. Therefore, the purpose of this work is to investigate the safety criticism and ethical dilemma of AVs, so that corresponding countermeasures can be taken before AVs are commercialized.
The contribution of this work is twofold: based on safety criticism, the safety quarrel and safety liability are investigated; and by analyzing how the ethical dilemma forms, concrete countermeasures are proposed. Liability assignment, a contractarianism alternative, and a personalized mode of humanitarian ethics are presented to solve the practical issues, providing potential insights into the safety and ethics of AVs.
* Xuecai Xu
xuecai_xu@hust.edu.cn
Wei Li
wei21wei@hust.edu.cn
Yi Huang
m202175147@hust.edu.cn
Shichao Wang
m202175143@hust.edu.cn
1 School of Education, Huazhong University of Science and Technology, Wuhan 430074, China
2 School of Civil and Hydraulic Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
2 Safety criticisms for AVs
Since the driverless concept was presented by General Motors, it has been a hot spot. Vehicles equipped with automated driving systems are described as "autonomous," "driverless," or "self-driving," but according to the US National Highway Traffic Safety Administration (NHTSA) and SAE International (formerly the Society of Automotive Engineers), vehicle automation spans six levels: level 0, no automation; level 1, driver assistance; level 2, partial automation; level 3, conditional automation; level 4, high automation; and level 5, full automation [6, 11]. The AVs discussed here refer to levels 4 and 5, whose main feature is that the system controls everything without requiring a human response. Currently, the technologies used by most public or private enterprises, with few exceptions, have not achieved truly autonomous driving; it is therefore understandable that disagreement and criticism about the safety of AVs exist.
2.1 Safety quarrel
2.1.1 AVs will be safe enough to avoid crashes
As a matter of fact, in consumers' minds, AVs with all their high technology should be safe enough to avoid all crashes. However, as revealed by the pedestrian fatality involving Uber's self-driving vehicle, the system did not detect the pedestrian until a distance of 114 m (traveling at 43 mph, about 19 m/s, roughly 6 s before impact), although radars, cameras, and sensors of all types had been installed to provide a 360° virtual view of the environment surrounding the vehicle. The tragedy may have been caused by software malfunction, hardware failure, or system errors. It is apparent that current AV technologies are not yet perfect enough to avoid crashes; hence, it is hard to claim that AVs are safe enough to avoid crashes.
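As a rough sanity check on the figures above (taking the reported 43 mph and 114 m at face value; this is an illustrative back-of-the-envelope calculation, not from the crash report), the detection distance does correspond to about 6 s of travel time:

```python
# Back-of-the-envelope check of the reported Uber detection figures.
# Assumed inputs: speed 43 mph and detection distance 114 m, as reported above.

MPH_TO_MS = 0.44704                    # one mile per hour in metres per second

speed_ms = 43 * MPH_TO_MS              # ~19.2 m/s
detection_distance_m = 114             # distance at which the pedestrian was detected

time_to_impact_s = detection_distance_m / speed_ms
print(f"speed: {speed_ms:.1f} m/s, time to impact: {time_to_impact_s:.1f} s")
# speed: 19.2 m/s, time to impact: 5.9 s
```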
To say the least, even if AVs can detect other AVs without human drivers, or other vehicles in mixed traffic, is it adequate to claim that sophisticated AVs would be safer than human drivers? Unless AVs are certified within specific zones, where some crashes may be avoidable, reaching that objective is neither likely nor near-term under current conditions. At present, a large number of companies and enterprises are still testing on private routes, and it will take years before AVs can drive freely on roadways, as human drivers do, by responding to other vehicles, pedestrians, bicyclists, motorcyclists, and the environment.
2.1.2 AVs will minimize the damage at all times
During travel, AVs must make decisions every second, but when a crash is imminent, to which side will the AV swerve, or will it stop, to minimize the damage [2]? This raises the ethical "trolley problem," which will be discussed later in detail. Nevertheless, the damage caused depends on the programming of the whole AV system. If the designed system maximizes overall safety while concentrating the damage on a certain party, it would be considered unfair; that is to say, if vehicle occupants' fatalities are reduced but cyclists' fatalities are increased, even an overall safety improvement might be unacceptable to society.
In our society, maximizing life-saving should be the preferred option [5]. However, different scenarios involve different situations, which may produce different value trends. Some people care about the passengers inside the vehicles, some are concerned with bicyclists or pedestrians, while others care about the elderly and children. If the ultimate goal is to save lives and improve safety, AVs should behave in a way acceptable to most of society, which is significant for earning the public's trust in AV technology.
2.1.3 AVs will be programmed to abide by the law
Ever since AV technology was introduced into the transportation area, thousands of new companies have joined R&D teams over the last decades, while existing laws have not completely covered all the details of AV situations due to the emerging market [19]. Different countries and regions have issued corresponding regulations and on-site testing rules to standardize the AV market, because the rising industry will bring more potential opportunities.
Apparently, this is not enough. Traveling on roadways may involve various situations, and each situation requires a corresponding solution. If laws were written to cover the vast majority of traffic situations and translated into computer language understood by AV systems, all problems would be dealt with, but current laws are not close to these standards. Laws can be amended to cover these traffic situations, but it will take years to complete the process and to derive computer-understandable terms from regulations and rules with different standards. Consequently, it is difficult to claim that AVs are all programmed to abide by the law.
2.2 Safety liability
When AVs run into crashes, who is going to take responsibility? Some consider that the owners of AVs should take more responsibility for the fault; in this way, the burden on manufacturers would be alleviated, which may help attract more capital and accelerate technological progress [14]. As suggested by Duffy and Hopkins [4], once owners buy AVs, the proprietary rights and liability would be transferred to the owners because AVs, like dogs, can think and
act independently of their human owners, which may cause physical injury or property damage. This can be regulated by law: if your dog hurts other people, you, the owner, pay for the fault and bear strict liability. Similarly, whether or not the owner makes a mistake subjectively, he should take responsibility as long as his AV causes the fault. In this way, although strict liability is transferred from the manufacturer to the owner, insurance fees will increase, which may aggravate the owner's burden; but this method may help remove the liability-evaluation issues of AVs and benefit lawsuit efficiency in court. Moreover, it may encourage manufacturers to apply new technologies actively and make AVs serve society better.
On the contrary, some scholars think that the manufacturers should take more responsibility [9]. Hubbard [10] considered that manufacturers and retailers should take responsibility for providing training opportunities for AV drivers, while owners have the liability to grasp essential knowledge and skills and to avoid hurting others. However, as AV automation improves, the cause of crashes will depend more on vehicle performance and less on driver behavior. Therefore, liability assignment may transfer from the owners to the manufacturers and retailers because of product deficiency. Similarly, Marchant and Lindor [15] highlighted that if AVs run into crashes, the vehicle manufacturers, component manufacturers, and even software engineers may bear responsibility, while in most cases the vehicle manufacturers should take more responsibility for the final product. In China, AVs may involve two types, with or without drivers. If driverless AVs run into crashes, the cause belongs to product deficiency, for which in principle the manufacturers and retailers should take liability; whereas if crashes happen to AVs with drivers, the owners should take the main responsibility, while the manufacturers and retailers take the secondary. In this way, liability may be clearer.
In sum, safety liability is still controversial, but it is noteworthy that when responsibility for crashes is discussed, the manufacturers, retailers, and owners are the main subjects taking responsibility, while AVs themselves cannot presently be considered real liability subjects. On the other hand, responsibility has so far been regarded as a technical issue [17], but the ethical liability involved in AV crashes extends from planning to design and operation; thus, the discussion of AV ethics can be more generalized, which leads to the main focus of the following sections.
3 Ethical dilemma
3.1 Formation
When AVs operate in driverless conditions, every decision must be made by software and hardware programs, but when they confront certain difficult situations, an ethical dilemma occurs. Typically, the dilemma is classified into two types: the trolley problem and the tunnel problem [8]. The former focuses on which object to hit when AVs run into crashes, while the latter concentrates on whether AVs choose to protect the passengers inside or the human beings outside. Either needs to be solved before AVs are fully commercialized, with the prerequisite that AVs have to make ethical choices. Theoretically, AVs can avoid making ethical decisions in two ways: one is to realize the goal of zero crashes, and the other is to transfer driving rights to human drivers before crashes. However, neither is easy to implement in practice.
On one hand, collisions are unavoidable. As stated previously, some technical optimists consider that AV technology can deal with the ethical problem by realizing zero crashes, so that ethical selection before crashes is avoided. As a matter of fact, it is difficult to avoid all crashes: one reason is that current detection systems still have limitations; the other is that even when AVs are equipped with the most advanced and reliable detecting, monitoring, and computing systems, as the Uber driverless vehicle was, sudden pedestrians or other vehicles still cannot be avoided in a dynamic and complex driving environment.
On the other hand, it is difficult to switch between human driver and machine. If, when an AV detects a high probability of crashing, driving rights are switched to the human driver, the ethical selection can be sidestepped, but several issues arise. First, if human drivers are required to intervene, they must be prepared to switch at any moment, which defeats the original intention of AVs. Second, in emergent situations, it is doubtful that human drivers can respond within the limited time to take over the vehicle, which may lead to tragic consequences, as in the Air France crash of 2009 [13]. Finally, even if human drivers take over in emergent situations, it is hard for them to make the best decision, because in reality human drivers often perform the wrong operation when confronting an emergency. Obviously, handing ethical selection back to human drivers does not conform to the developing trend of AVs and is not beneficial to optimized results.
Another factor influencing the dilemma is decision-making in real scenes. As for human drivers, when the
collision occurs, they must make quick decisions within limited time and with limited driving information; thus, we cannot expect them to make the optimal selection in a short time and under great stress, so even if the consequence is worse, it can be accepted ethically. However, decisions made by AVs escape this stressful scene, since AVs are pre-programmed with a large number of driving scenarios and are capable of making reasonable decisions spontaneously; that is the reason they are so highly expected by human beings. This requires AVs to make ethical selections, and even the programmers may take responsibility for a collision. Therefore, this feature, that AVs can evaluate real scenes with all kinds of situations before anything happens, makes AVs confront the ethical dilemma directly and take more responsibility. Work by Wiseman and Grinberg [21] attempted a technique for real-time evaluation of potential damages so that AVs make the best decision, which provides potential insights into the ethical dilemma.
3.2 Countermeasures
To deal with the two issues above, different measures are required to resolve the dilemma. At present, the countermeasures mainly include two attempts: building ethical rules top-down with deontological ethics and utilitarianism, and establishing ethical rules bottom-up with machine learning. These measures can help construct a series of effective ethical rules from different aspects to guide the ethical selection of AVs.
3.2.1 Deontology
In deontology, the legitimacy of behavior comes from whether the behavior complies with rules of ethics, so AVs should abide by certain codes. Three codes for AVs are presented as follows:
Code 1: AVs must not collide with pedestrians or bicyclists.
Code 2: AVs should not collide with other vehicles unless doing so conflicts with Code 1.
Code 3: AVs must not collide with other objects unless doing so conflicts with Codes 1 and 2.
The three codes enlighten the programming of AVs, but their deficiency is obvious; e.g., the "tunnel problem" is not considered within the three codes, which may cause the passengers to be hurt in order to avoid colliding with pedestrians. Moreover, "other vehicles" in Code 2 and "other objects" in Code 3 are not specified and are too general, leaving the codes without consensus.
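The priority ordering implied by the three codes can be sketched in a few lines; the object categories and function names below are purely illustrative assumptions, not part of any real AV system:

```python
# Illustrative sketch of Codes 1-3 as a protection ranking (hypothetical).
# Lower rank = more strongly protected by the codes.
PROTECTION_RANK = {
    "pedestrian": 0,   # Code 1: most strongly protected
    "bicyclist": 0,    # Code 1
    "vehicle": 1,      # Code 2: avoid unless Code 1 would be violated
    "object": 2,       # Code 3: avoid unless Codes 1 and 2 would be violated
}

def choose_collision_target(unavoidable):
    """Among options that cannot all be avoided, return the one the
    codes permit hitting (the least-protected category)."""
    return max(unavoidable, key=lambda o: PROTECTION_RANK[o])

print(choose_collision_target(["vehicle", "object"]))      # object
print(choose_collision_target(["pedestrian", "vehicle"]))  # vehicle
```

Note that the ranking has no entry at all for the AV's own passengers, which is precisely the "tunnel problem" gap the codes leave open.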
3.2.2 Utilitarianism
The main idea of utilitarianism is to determine from the consequences of a behavior whether it conforms to ethics; the behavior can be accepted as long as it gives most people the maximum happiness and minimum damage [18]. According to utilitarianism, the ethical-selection dilemma can be regarded as a computable math problem: if the sorrow a crash brings to each person is the same, then avoiding the loss of the most lives is evidently the optimal result. Compared to deontology, utilitarianism is more specific, and the optimal scheme can be obtained as long as different outcomes are scored and compared. Although utilitarianism provides a clear solution, two issues remain. One is that it ignores individual justice; for example, when an AV has no choice but to hit one of two motorcyclists, one with a helmet and one without, the motorcyclist with the helmet might become the target. The other is that the damage caused by a collision may not be computable: there is no standard criterion for evaluation, the damage to different people differs, and each individual is unique, so the calculation may not reflect the real situation.
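The "computable math problem" reading of utilitarianism can be sketched as expected-harm minimization. The actions, probabilities, and harm counts below are invented for illustration; choosing such numbers in reality is exactly the contested step discussed above:

```python
# Toy utilitarian choice: minimize expected harm across possible actions.
# All action names and numbers are hypothetical.
actions = {
    "swerve_left":  [(0.9, 2)],            # (probability of harm, people harmed)
    "swerve_right": [(0.5, 1), (0.5, 3)],
    "brake_only":   [(1.0, 1)],
}

def expected_harm(outcomes):
    """Sum of probability-weighted harm over an action's possible outcomes."""
    return sum(p * n for p, n in outcomes)

best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # brake_only (expected harm 1.0, vs 1.8 and 2.0)
```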
3.2.3 Machine learning
Machine learning, one of the theories of artificial intelligence, is a bottom-up solution, as AV systems conform to the machine-learning structure. By observing humans' driving behavior and habits, ethical codes can be inferred, and a set of decision-making models can be created that mimics human drivers' behavior. One of the most famous examples is AlphaGo, which can generate moves that human players have never made. In transportation engineering, however, if AVs make unpredicted moves, serious threats to traffic safety will result. Moreover, the bottom-up learning procedure may not be consistent with ethical codes, because human driving relies on real, dynamic, and complicated traffic conditions, while the ethical codes learned by a machine are descriptive rules, not the real conditions; e.g., random lane-changing may influence the effect of machine learning (Fig. 1). Consequently, machine learning needs continuous feedback and adjustment until the so-called right situation is reached, but when that right situation arrives remains to be answered.
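A minimal bottom-up sketch of this idea is behavior cloning: record human (state, action) pairs and imitate the nearest observed decision. The two-feature state and the demonstrations below are invented for illustration; real systems learn from far richer sensor data:

```python
# Toy behaviour cloning: imitate the nearest observed human decision.
# States are (gap to leading vehicle in m, own speed in m/s); all hypothetical.
demonstrations = [
    ((5.0, 15.0), "brake"),
    ((8.0, 20.0), "brake"),
    ((40.0, 15.0), "keep_lane"),
    ((35.0, 10.0), "keep_lane"),
    ((30.0, 25.0), "change_lane"),
]

def imitate(state):
    """Return the action of the demonstration whose state is closest
    (squared Euclidean distance) to the query state."""
    def sq_dist(s):
        return sum((a - b) ** 2 for a, b in zip(s, state))
    _, action = min(demonstrations, key=lambda d: sq_dist(d[0]))
    return action

print(imitate((6.0, 18.0)))  # brake: closest to the short-gap demonstrations
```

Such a clone reproduces whatever the demonstrations contain, including random lane changes, which illustrates why learned descriptive rules need not coincide with ethical codes.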
4 Further discussion
Generally speaking, most current studies on the safety and ethics of AVs are based on idealized conditions, e.g., pre-assumed knowability of environmental information and acceptance of ethical codes, but in reality the situations are far more complicated; thus, more practical
questions about safety and ethics should be considered before the commercialization of AVs.
4.1 Liability assignment
How safety liability is assigned between AV manufacturers and owners is the real question for the manufacturers, the consumers, and the government. As for the component manufacturers, it is commonly accepted that AVs should abide by the programming set by the manufacturers, while the vehicle manufacturers, being responsible for crashes, can foresee possible problems and take possible measures. However, an excessive burden of responsibility may reduce manufacturers' enthusiasm for investing in and producing AVs. On the other hand, drivers or passengers involved in a crash may become the responsible party even though they took no irregular actions, which is apparently unfair to them. From this point of view, driverless driving should not be considered individual behavior but collective action under certain rules. Therefore, whether drivers or passengers should be responsible for a crash depends on "ethical luck" when the crash occurs, and this luck may not be controllable by individuals; but as long as drivers do not follow the "collective action" rules, they need to take responsibility.
The government, as the main body, should establish a scientific and proper planning and monitoring mechanism for the whole AV industry. First, the precautionary principle should be upheld, and all uncertainties and possible risks should be predicted in advance. Second, the government should make wholesome plans for AV development and strategic layout, establish safety criteria and regulations, and steer AV R&D and application macroscopically through different approaches, such as financial funding, administrative policies, and market competition mechanisms [3]. Third, the government should encourage different types of enterprises to participate in the AV industry and take different roles; in this way, the whole AV market may become competitive and prosperous, which benefits every party.
4.2 Contractarianism alternative
Each ethical selection embodies a certain trend of ethical values, which is hard to shape into a satisfactory solution. To resolve the conflict among different subjects' ethical values, contractarianism can be considered as an alternative [12]; it holds that the results of all rules should satisfy individuals' interests as much as possible and make them sign the contracts voluntarily, revealing equilibrium, equivalence, and mutuality. Specifically, in setting the norms of AV ethics codes, contractarianism offers the main subjects, the government, manufacturers, drivers/passengers, pedestrians, and non-motorized road users, a corresponding value basis. All of them reach consensus voluntarily and abide by the contracts based on their own benefits, which can be regarded as a reasonable ethics norm. On the other side, unacceptable ethics rules should also be established by common consent, not individually, and if an action ruining others' or collective benefits is taken, obligatory and mutual ethical punishment will be imposed.
4.3 Personalized mode of humanitarian ethics
The humanitarian ethics developed by Fromm [7] holds that the only standard of ethical value lies in human happiness, and that value judgment is rooted in individual uniqueness, which is meaningful when relevant to human existence. Therefore, the solution can adopt a personalized mode. On one hand, various selections based on different values should be provided when programmers code the decision programs, so public participation and democratic decision-making should be introduced, e.g., by importing the Delphi approach, a collective anonymous exchange of ideas in the form of letters or inquiries. On the other hand, the
Fig. 1 Main framework of the work: injury by AVs raises safety criticism (the safety quarrel, avoid crash? minimize damage? abide by law?, and safety liability) and the ethical dilemma (its formation and the countermeasures of deontology, utilitarianism, and machine learning), leading to discussion and suggestions
AI and Ethics
1 3
buyers or owners may choose different programs according to their own personalities, exercising corresponding subjective initiative and individual difference; accordingly, buyers or owners should take corresponding professional training or classes to understand the design process and ethical issues of AVs.
With these practical questions solved, the safety and ethics of autonomous vehicles, combined with the three countermeasures above, can be properly tackled before the commercialization of AVs. When AVs are put into practice, each party should follow the contract codes and recognize its responsibility and liability so as to avoid safety criticism and ethical dilemma.
Finally, in this work, the safety criticism and ethical dilemma of AVs are discussed through the safety quarrel and safety liability, and the formation of the ethical dilemma and countermeasures are presented, respectively. Liability assignment, a contractarianism alternative, and a personalized mode of humanitarian ethics are proposed to solve the practical issues. As a next step, more simulation and testing of AVs can be investigated to enrich the whole system theoretically and practically.
Author contributions All authors contributed to the study conception
and design. Material preparation, data collection and analysis were
performed by [YH], [SW], and [XX]. The first draft of the manuscript
was written by [WL] and all authors commented on previous versions
of the manuscript. All authors read and approved the final manuscript.
Declarations
Conflict of interest The authors have declared that no competing interests exist.
References
1. Bagloee, S.A., Tavana, M., Asadi, M., Oliver, T.: Autonomous
vehicles: challenges, opportunities, and future implications for
transportation policies. J. Mod. Transp. 24(4), 284–303 (2016)
2. Bonnefon, J.F., Shariff, A., Rahwan, I.: The social dilemma of
autonomous vehicles. Science 352(6293), 1573–1576 (2016)
3. Du, Y.: On the moral responsibility in robot ethics. Stud. Sci. Sci.
35(11), 1608–1613 (2017)
4. Duffy, S., Hopkins, J.: Sit, stay, drive: the future of autonomous
car liability. SMU Sci. Technol. Law Rev. 16, 453–480 (2013)
5. Fagnant, D.J., Kockelman, K.: Preparing a nation for autonomous
vehicles: opportunities, barriers and policy recommendations.
Transp. Res. Part A Policy Pract. 77, 167–181 (2015)
6. Fleetwood, J.: Public health, ethics, and autonomous vehicles.
Am. J. Public Health 107(4), 532–537 (2017)
7. Fromm, E.: Man for himself. Ethics 59–60 (1947)
8. He, H.: An analysis of the ethical dilemma, causes and counter-
measures of driverless vehicles. Stud. Dialectics Nat. 33, 58–62
(2017)
9. Hevelke, A., Nida-Rümelin, J.: Responsibility for crashes of
autonomous vehicles: an ethical analysis. Sci. Eng. Ethics 21,
619–630 (2015)
10. Hubbard, P.: “Sophisticated robots”: balancing liability, regula-
tion, and innovation. Fla. Law Rev. 66, 1803–1872 (2014)
11. Koopman, P., Wagner, M.: Autonomous vehicle safety: an inter-
disciplinary challenge. IEEE Intell. Transp. Syst. Mag. 9(1),
90–96 (2017)
12. Leben, D.: A Rawlsian algorithm for autonomous vehicles. Ethics
Inf. Technol. 19(2), 107–115 (2017)
13. Lipson, H., Kuman, M.: Driverless: Intelligent Cars and the Road
Ahead, pp. 57–58. MIT Press, Cambridge (2016)
14. Liu, H.Y.: Irresponsibilities, inequalities, and injustice for autono-
mous vehicles. Ethics Inf. Technol. 19, 193–207 (2017)
15. Marchant, G., Lindor, R.: The coming collision between autono-
mous vehicles and the liability system. Santa Clara Law Rev. 52,
1321–1340 (2012)
16. Martinho, A., Herber, N., Kroesen, M., Chorus, C.: Ethical issues
in focus by the autonomous vehicles industry. Transp. Rev. (2021).
https://doi.org/10.1080/01441647.2020.1862355
17. Nunes, A., Reimer, B., Coughlin, J.F.: People must retain control
of autonomous vehicles. Nature 556, 169–171 (2018)
18. Pereira, R.H.M., Schwanen, T., Banister, D.: Distributive justice
and equity in transportation. Transp. Rev. 37(2), 170–191 (2017)
19. Santoni de Sio, F.: Killing by autonomous vehicles and the legal
doctrine of necessity. Ethical Theory Moral Pract. 20, 411–429
(2017)
20. Sparrow, R., Howard, M.: When human beings are like drunk
robots: driverless vehicles, ethics, and the future of transport.
Transp. Res. Part C Emerg. Technol. 80, 206–215 (2017)
21. Wiseman, Y., Grinberg, I.: Circumspectly crash of autonomous
vehicles. In: Proceedings of IEEE International Conference on
Electro Information Technology (EIT 2016), pp. 387–392. Grand
Forks, North Dakota, USA (2016)
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.