AI and Ethics
https://doi.org/10.1007/s43681-021-00128-2
COMMENTARY
Safety criticism andethical dilemma ofautonomous vehicles
WeiLi1· YiHuang1· ShichaoWang1· XuecaiXu2
Received: 29 October 2021 / Accepted: 12 December 2021
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021
Abstract
With the rapid commercialization of autonomous vehicles (AVs), opportunities and challenges co-exist. In the spring of 2018, the first pedestrian fatality caused by an Uber driverless vehicle drew global attention, and two critical issues emerged from it: safety criticism and ethical dilemma. In this work, the safety quarrel and safety liability are investigated, and the ethical dilemma that AVs confront in collisions is discussed. Two reasons for the formation of the ethical dilemma are presented, and the fact that AVs make ethical selections removed from real scenes is then discussed. To address these issues, countermeasures are put forth from deontology, utilitarianism, and machine-learning approaches. Further discussion provides three possible solutions to safety liability and the ethical dilemma, respectively.
Keywords Autonomous vehicles · Safety criticism · Safety liability · Ethical dilemma · Ethical rules
1 Introduction
In recent years, autonomous vehicle (AV) technology has been a focal point of the artificial intelligence field and the whole automotive industry, driven by billions in business opportunities and markets [1]. Currently, a great number of countries, regions, international automakers, and new Internet companies are exploring and investing in the AV industry and its technologies [6]. Indeed, the utilization of AVs will bring a series of advantages, e.g., alleviating traffic congestion, reducing traffic crashes and fatalities [16, 20], and mitigating the environmental burden. However, in the spring of 2018, the first pedestrian fatality caused by an Uber driverless vehicle in Tempe, Arizona, attracted global attention, and the doubts over whether driverless cars are safe and who is responsible for such faults arose again.
Two critical issues can be highlighted from this event: safety criticism and ethical dilemma. When traffic crashes or fatalities occur, it is necessary to investigate how to determine the injury severity levels and who is responsible for the crash: the manufacturer, the owner, or the governmental administration department. Nowadays, AV R&D and on-site testing have been growing vigorously all over the world, and commercialization is expected in the near future. Faced with this pressing situation, it is significant to tackle the safety criticisms and ethical dilemma of AVs and to propose new thoughts and solutions. Therefore, the purpose of this work is to investigate the safety criticism and ethical dilemma of AVs, so that corresponding countermeasures can be taken before the commercialization of AVs.
The contribution of this work is twofold: based on the safety criticism, the safety quarrel and safety liability are investigated; and by analyzing the formation of the ethical dilemma, concrete countermeasures are proposed. Liability assignment, a contractarianism alternative, and a personalized mode of humanitarian ethics are presented to solve the practical issues, providing potential insights into the safety and ethics of AVs.
* Xuecai Xu
xuecai_xu@hust.edu.cn
Wei Li
wei21wei@hust.edu.cn
Yi Huang
m202175147@hust.edu.cn
Shichao Wang
m202175143@hust.edu.cn
1 School ofEducation, Huazhong University ofScience
andTechnology, Wuhan430074, China
2 School ofCivil andHydraulic Engineering, Huazhong
University ofScience andTechnology, Wuhan430074,
China
2 Safety criticisms forAVs
Since the driverless concept was presented by General Motors, it has been a hot spot. Vehicles equipped with automated driving systems are described as "autonomous," "driverless," or "self-driving," but according to the US National Highway Traffic Safety Administration (NHTSA) and SAE International (formerly the Society of Automotive Engineers), vehicle automation spans six levels: level 0, no automation; level 1, driver assistance; level 2, partial automation; level 3, conditional automation; level 4, high automation; and level 5, full automation [6, 11]. The AVs discussed here refer to levels 4 and 5, whose main feature is that the system can control everything without a human response. Currently, the technologies used by most public or private enterprises have not achieved truly autonomous driving, with few exceptions; therefore, it is understandable that there exists some disagreement and criticism about the safety of AVs.
2.1 Safety quarrel
2.1.1 AVs will be safe enough to avoid crashes

As a matter of fact, in many consumers' minds, AVs with all their high technologies are safe enough to avoid all crashes. However, as revealed by the pedestrian fatality involving Uber's self-driving vehicle, the system did not detect the pedestrian until a distance of 114 m (traveling at 43 mph, about 19 m/s, i.e., only about 6 s before impact), although all types of radar, cameras, and sensors had been installed to provide a 360° virtual view of the environment surrounding the vehicle. This tragedy may have been caused by software malfunction, hardware failure, or system errors. It is apparent that current AV technologies are not yet perfect enough to avoid crashes; hence, it is hard to claim that AVs are safe enough to avoid them.
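As a rough arithmetic check on the figures reported above (a 43 mph speed and a 114 m detection distance), assuming a simplified constant-speed approach:

```python
# Rough sanity check of the reported Uber detection figures:
# at 43 mph, a 114 m detection distance leaves only about 6 s before impact.
MPH_TO_MS = 1609.344 / 3600.0      # metres per second per mile per hour

speed_ms = 43 * MPH_TO_MS          # ~19.2 m/s
detection_distance_m = 114.0
time_to_impact_s = detection_distance_m / speed_ms

print(f"speed: {speed_ms:.1f} m/s, time to impact: {time_to_impact_s:.1f} s")
```

Six seconds sounds generous for a machine, but it must cover classification, trajectory prediction, and braking distance, which is why the detection alone did not prevent the collision.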
To say the least, even if AVs can detect other AVs without human drivers, or other vehicles in mixed traffic, is it adequate to say that sophisticated AVs would be safer than human drivers? Unless AVs are certified within specific zones, where some crashes may be avoidable, reaching that objective is neither likely nor near-term under current conditions. At present, a large number of companies and enterprises are still testing on private routes, and it will take years before AVs can drive as freely on roadways as human drivers do, responding to other vehicles, pedestrians, bicyclists, motorcyclists, and the environment.
2.1.2 AVs will minimize the damage at all times
During travel, AVs need to make decisions every second, but when a crash happens, which side will the AV swerve toward, or will it stop, to minimize the damage [2]? This raises the ethical "trolley problem," which will be discussed later in detail. The damage caused depends on the programming of the whole AV system. If the designed system decides to maximize overall safety while minimizing the damage to a certain side, it may be considered unfair; that is, if vehicle occupants' fatalities are reduced but cyclists' fatalities are increased, even an overall safety improvement might be unacceptable to society.
In our society, maximizing life-saving should be the preferred option [5]. However, different scenarios involve different situations, which may produce different value preferences. Some people care about the passengers inside the vehicles, some are concerned with bicyclists or pedestrians, while others care about the elderly and children. If the ultimate goal is to save lives and improve safety, AVs should behave in a way acceptable to most of society, which is essential to earning the public's trust in AV technology.
2.1.3 AVs will be programmed to abide by the law
Ever since AV technology was introduced into the transportation area, thousands of new companies have joined the R&D effort over the last decades, while the existing laws have not completely covered all the details of AV situations because the market is still emerging [19]. Different countries and regions have issued corresponding regulations and on-site testing rules to standardize the AV market, because the rising industry will bring up more potential opportunities.

Apparently, this is not enough. Traveling on roadways may involve various situations, and each situation requires a corresponding solution. If the laws were extended to cover the vast majority of traffic situations and translated into computer language understood by the AV system, all problems could be dealt with, but the current laws are not close to that standard. Laws can be amended to cover these traffic situations, but it will take years to complete the process and derive computer-understandable terms from different regulations and rules with different standards. Consequently, it is difficult to claim that AVs are all programmed to abide by the law.
2.2 Safety liability
When AVs run into crashes, who is going to take the responsibility? Some consider that the owners of AVs should take more responsibility for the fault; in this way, the burden on manufacturers would be alleviated, which may help them invest more capital and hasten technological progress [14]. As suggested by Duffy and Hopkins [4], once owners buy AVs, the proprietary rights and liability would be transferred to the owners, because AVs, like dogs, can think and act independently of their human owners, which may cause physical injury or property damage. This can be regulated by law: if your dog hurts other people, you pay for the fault and bear strict liability. Similarly, whether or not the owner subjectively makes a mistake, he should take responsibility as long as his AV causes the fault. Although strict liability is thereby transferred from the manufacturer to the owner, and the increased insurance fees may aggravate the owner's burden, this method may help remove the liability evaluation issues of AVs and improve lawsuit efficiency in court. Moreover, it may encourage manufacturers to apply new technologies actively and make AVs serve society better.
In contrast, some scholars think that the manufacturers should take more responsibility [9]. Hubbard [10] considered that manufacturers and retailers should take responsibility for providing training opportunities for AV drivers, while owners have the liability to grasp essential knowledge and skills, and to avoid hurting others. However, due to the increased automation of AVs, the cause of crashes would rely more on vehicle performance and less on driver behavior. Therefore, liability may shift from the owners to the manufacturers and retailers because of product deficiency. Similarly, Marchant and Lindor [15] highlighted that if AVs run into crashes, the vehicle manufacturers, component manufacturers, and even software engineers may take responsibility, while in most cases the vehicle manufacturers should take more responsibility for the final product. In China, AVs may involve two types, with or without drivers. If driverless AVs run into crashes, it belongs to product deficiency, for which in principle the manufacturers and retailers should take liability; whereas when crashes happen to AVs with drivers, the owners should take the main responsibility while the manufacturers and retailers take the secondary one. In this way, the liability may be clearer.
In sum, safety liability is still controversial, but it is noteworthy that when responsibility for crashes is discussed, the manufacturers, retailers, and owners are the main subjects taking responsibility, while AVs themselves cannot presently be considered real liability subjects. On the other hand, responsibility has so far been regarded as a technical issue [17], but the ethical liability involved in AV crashes can be extended from planning to design and operation; thus, the discussion of AV ethics may be more general, which leads to the main focus of the following sections.
3 Ethical dilemma
3.1 Formation
When AVs are in driverless conditions, each decision must be made by software and hardware programs, but when they are confronted with certain difficult situations, an ethical dilemma will occur. Typically, the dilemma is classified into two types, the trolley problem and the tunnel problem [8]. The former focuses on which object to hit when AVs run into crashes, while the latter concentrates on whether AVs choose to protect the passengers inside or the human beings outside. Either needs to be solved before AVs are fully commercialized, and the prerequisite is that AVs have to make ethical choices. Theoretically, AVs can avoid making ethical decisions in two ways: one is to realize the goal of zero crashes, and the other is to transfer the driving rights to human drivers before crashes. However, neither way is easy to implement in practice.
On the one hand, collisions are unavoidable. As stated previously, some technical optimists consider that AV technology development can deal with the ethical problem and realize zero crashes, so that ethical selection is avoided before crashes occur. As a matter of fact, it is difficult to avoid all crashes: one reason is that current detection systems still have limitations, and the other is that even when AVs are equipped with the most advanced and most reliable detecting, monitoring, and computing systems, as the Uber driverless vehicle was, suddenly appearing pedestrians or other vehicles still cannot be avoided in dynamic and complex driving environments.
On the other hand, it is difficult to switch between human driver and machine. When an AV detects a high probability of running into a crash, the driving rights could be switched to the human driver, through which the ethical selection would be resolved, but some issues remain. First, if human drivers are required to intervene, they have to be prepared for the switch at any moment, thus deviating from the original intention of AVs; second, in emergent situations, it is doubtful whether human drivers can respond within the limited time to take over the vehicle, which may lead to tragic consequences, as in the Air France airplane tragedy of 2009 [13]. Finally, even if human drivers take over under emergent situations, it is hard for them to make the best decision, because in reality human drivers often perform wrong operations when confronted with emergencies. Obviously, relieving the AV of ethical selection by handing control back to human drivers does not conform to the development trend of AVs and is not beneficial for optimized results.
Another factor influencing the dilemma is decision-making in real scenes. When a collision occurs, human drivers must make quick decisions within a limited time and with limited driving information; thus, we cannot expect them to make the optimal selection within a short time and under great stress, so even a worse consequence can be accepted ethically. However, the decision made by AVs has escaped this stressful scene, since they are pre-programmed with a large number of driving scenarios and are capable of making reasonable decisions spontaneously; that is the reason they are highly expected by human beings. This requires AVs to make ethical selections, and even the programmers may take responsibility for a collision. Therefore, this feature, that AVs can determine the real scenes with all kinds of situations before anything happens, makes AVs confront the ethical dilemma directly and take more responsibility. Wiseman and Grinberg [21] attempted a technique for real-time evaluation of potential damages so that AVs can make the best decision, which provides potential insights into the ethical dilemma.
3.2 Countermeasures
To deal with the two issues above, different measures are required to resolve the dilemma. At present, the countermeasures mainly include two attempts: building top-down ethical rules based on deontological ethics and utilitarianism, and establishing bottom-up ethical rules with machine learning. These measures can help construct a series of effective ethical rules from different aspects to guide the ethical selection of AVs.
3.2.1 Deontology
In deontology, the legitimacy of a behavior comes from whether the behavior complies with the rules of ethics, so AVs should abide by certain codes. Three codes for AVs are presented as follows:

Code 1: AVs must not collide with pedestrians and bicyclists.

Code 2: AVs should not collide with other vehicles unless Code 1 would be violated.

Code 3: AVs cannot collide with other objects unless Codes 1 and 2 would be violated.

The three codes enlighten the programming of AVs, but their deficiency is obvious: e.g., the "tunnel problem" is not considered within the three codes, which may cause the passengers to be hurt in order to avoid colliding with pedestrians; moreover, "other vehicles" in Code 2 and "other objects" in Code 3 are not specified but too general, leaving the codes lacking consensus.
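A minimal sketch of how the three codes above could be encoded as a strict priority ordering; the object categories and their ranks here are illustrative assumptions, not part of the source:

```python
# Hypothetical sketch: the three deontological codes as a strict priority
# ordering over collision targets. Lower rank = more strongly forbidden.
CODE_RANK = {
    "pedestrian": 0,   # Code 1: never collide with pedestrians/bicyclists
    "bicyclist": 0,
    "vehicle": 1,      # Code 2: avoid unless Code 1 would be violated
    "object": 2,       # Code 3: avoid unless Codes 1 and 2 would be violated
}

def choose_collision_target(unavoidable_targets):
    """When a collision is unavoidable, pick the target whose code is
    least strongly forbidden (highest rank)."""
    return max(unavoidable_targets, key=lambda t: CODE_RANK[t])

# If every escape path is blocked, the codes direct the AV toward the
# lowest-priority obstacle rather than the pedestrian.
print(choose_collision_target(["pedestrian", "vehicle", "object"]))  # object
```

Note that the sketch mirrors the deficiency discussed above: it has no entry for the passengers themselves, so the tunnel problem simply falls outside its vocabulary.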
3.2.2 Utilitarianism
The main idea of utilitarianism is to determine from the consequences of a behavior whether it conforms to ethics; a behavior can be accepted as long as it brings the majority of people the maximum happiness and minimum damage [18]. According to utilitarianism, the ethical selection dilemma can be regarded as a computable math problem: if the sorrow a crash brings to each person is the same, then avoiding the loss of the most lives is clearly the optimal result. Compared to deontology, utilitarianism is more specific, and the optimal scheme can be obtained as long as the different outcomes are scored and compared. Although utilitarianism provides a clear solution, two issues remain. One is that it ignores individual justice: for example, when an AV has no choice but to hit one of two motorcyclists, one with a helmet and one without, the motorcyclist with the helmet might become the target. The other is that the damage caused by a collision may not be computable: there is no standard criterion of evaluation, the damage differs from person to person, and since each individual is unique, the calculation may not reflect the real situation.
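Read this way, the utilitarian selection reduces to minimizing a summed harm score over the available manoeuvres. A toy sketch, in which the manoeuvre names and harm numbers are invented purely for illustration:

```python
# Toy utilitarian selection: each manoeuvre maps to the people harmed,
# each with an (invented) harm score; pick the manoeuvre with the lowest
# total harm. The example also exposes the fairness problem noted above:
# a helmeted motorcyclist scores lower harm, so the calculus steers
# toward them.
options = {
    "swerve_left":  [("motorcyclist_with_helmet", 0.4)],
    "swerve_right": [("motorcyclist_no_helmet", 0.9)],
    "brake_only":   [("passenger", 0.5), ("passenger", 0.5)],
}

def total_harm(victims):
    return sum(score for _, score in victims)

best = min(options, key=lambda m: total_harm(options[m]))
print(best)  # swerve_left: the helmeted rider becomes the target
```

The arithmetic is trivial once the scores exist; the whole ethical burden sits in the scores themselves, which is exactly the non-computability objection raised above.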
3.2.3 Machine learning
Machine learning, one of the artificial intelligence theories, offers a bottom-up solution, as the AV system conforms to the machine-learning structure. By observing human driving behavior and habits, ethical codes can be figured out, and then a set of decision-making models can be created mimicking human drivers' behavior. One of the most famous examples is AlphaGo, which can generate moves that human players have never made. In transportation engineering, however, if AVs make unpredicted moves, a serious threat to traffic safety will result. Moreover, the bottom-up learning procedure may not be consistent with ethical codes, because human driving relies on real, dynamic, and complicated traffic conditions, while the ethical codes learned by the machine are descriptive rules, not the real conditions; e.g., random lane changing may influence the effect of machine learning (Fig. 1). Consequently, machine learning needs continuous feedback and adjustment until the so-called right situation is reached, but when that right situation is reached remains to be answered.
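A minimal illustration of the bottom-up idea, with toy features and labels assumed for this sketch only (this is not a real AV pipeline): a nearest-neighbour vote over logged human decisions, which by construction reproduces whatever the observed drivers did, ethical or not.

```python
# Toy bottom-up "learning": imitate logged human lane-change decisions
# with a k-nearest-neighbour vote. Features: (gap to lead vehicle in m,
# own speed in m/s); label: 1 = change lane, 0 = stay. Entirely
# illustrative: the point is that the learned policy mirrors the logs,
# including any bad habits they contain.
from collections import Counter
import math

logged = [((40.0, 25.0), 1), ((35.0, 22.0), 1), ((8.0, 20.0), 0),
          ((5.0, 15.0), 0), ((50.0, 30.0), 1), ((10.0, 24.0), 0)]

def imitate(state, k=3):
    nearest = sorted(logged, key=lambda ex: math.dist(state, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(imitate((45.0, 26.0)))  # 1: large gap, the logged drivers changed lane
print(imitate((7.0, 18.0)))   # 0: small gap, the logged drivers stayed
```

Nothing in the procedure distinguishes an ethical habit from an unethical one, which is precisely the inconsistency between learned descriptive rules and ethical codes noted above.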
4 Further discussion
Generally speaking, most current studies on the safety and ethics issues of AVs are based on idealized conditions, e.g., pre-assumed knowability of environmental information and acceptance of ethical codes, but in reality the situations are far more complicated; thus, more practical questions should be considered for safety and ethics before the commercialization of AVs.
4.1 Liability assignment
How safety liability is assigned between AV manufacturers and owners is the real question for the manufacturers, the consumers, and the government. As for the component manufacturers, it is commonly accepted that AVs should abide by the programming set by the manufacturers, while the vehicle manufacturers, responsible for crashes, can foresee possible problems and take possible measures. However, an excessive burden of responsibility may reduce manufacturers' enthusiasm for investing in and producing AVs. On the other hand, drivers or passengers become the responsible party in a crash even though they took no irregular actions, which is apparently unfair to them. From this point of view, driverless driving should not be considered individual behavior but collective action under certain rules. Therefore, whether the drivers or passengers should be responsible for a crash depends on the "ethical luck" at the moment the crash occurs, and this luck may not be controllable by individuals; however, if drivers do not follow the "collective action" rules, they need to take responsibility.
The government, as the main body, should adopt scientific and proper planning and monitoring mechanisms for the whole AV industry. First, the precautionary principle should be insisted upon, and all uncertainties and possible risks should be predicted in advance. Second, the government should make holistic plans for AV development and strategic layout, establish safety criteria and regulations, and steer AV R&D and application macroscopically with different approaches, such as financial funding, administrative policies, market competition mechanisms, etc. [3]. Third, the government should encourage different types of enterprises to participate in the AV industry and take different roles; in this way, the whole AV market may become competitive and prosperous, which benefits every party.
4.2 Contractarianism alternative
Each ethical selection carries a certain ethical value tendency, which makes it hard to form a satisfactory solution. To resolve the conflicts among the ethical values of different subjects, contractarianism can be considered as an alternative [12]. It holds that the outcomes of all rules should satisfy individuals' interests as much as possible and lead them to sign contracts voluntarily, revealing equilibrium, equivalence, and mutuality. Specifically, in setting the norms of AV ethics codes, contractarianism offers the main subjects, namely the government, manufacturers, drivers/passengers, pedestrians, and non-motorized road users, a corresponding value basis. All of them reach consensus voluntarily and abide by the contracts based on their own benefits, which can be regarded as a reasonable ethical norm. Conversely, unacceptable ethical rules should also be established by common consent, not individually, and if an action ruining others' or collective benefits is taken, obligatory and mutual ethical punishment will be imposed.
4.3 Personalized mode ofhumanitarian ethics
The humanitarian ethics developed by Fromm [7] holds that the only standard of ethical value lies in human happiness, and that value judgment is rooted in individual uniqueness, which is meaningful when relevant to human existence. Therefore, the solution can consider a personalized mode: on one hand, various selections based on different values should be provided when programmers write the decision programs, so public participation and democratic decision-making should be introduced, e.g., importing the Delphi approach, a collective anonymous exchange of ideas in the form of letters or inquiries; on the other hand, the
[Fig. 1 Main framework of the work: injury by AVs leads to safety criticism (safety quarrel: avoid crashes? minimize damage? abide by the law?; safety liability) and ethical dilemma (formation; countermeasures: deontology, utilitarianism, machine learning), followed by discussion and suggestions]
buyers or owners may choose different programs according to their own personalities, as well as their corresponding subjective initiative and differences; buyers or owners should therefore take corresponding professional training or classes to understand the design process and ethical issues of AVs.
With these practical questions solved, the safety and ethics of autonomous vehicles, combined with the three countermeasures above, can be properly tackled before the commercialization of AVs. When AVs are put into practice, each party should follow the contracted codes and recognize its responsibility and liability so as to avoid safety criticism and ethical dilemma.

Finally, in this work, the safety criticism and ethical dilemma of AVs are discussed to reflect the safety quarrel and safety liability, and the ethical formation and countermeasures are proposed, respectively. Liability assignment, a contractarianism alternative, and a personalized mode of humanitarian ethics are presented to solve the practical issues. As a next step, more simulation and testing of AVs can be investigated to enrich the whole system theoretically and practically.
Author contributions All authors contributed to the study conception
and design. Material preparation, data collection and analysis were
performed by [YH], [SW], and [XX]. The first draft of the manuscript
was written by [WL] and all authors commented on previous versions
of the manuscript. All authors read and approved the final manuscript.
Declarations
Conflict of interests The authors have declared that no competing interests exist.
References
1. Bagloee, S.A., Tavana, M., Asadi, M., Oliver, T.: Autonomous
vehicles: challenges, opportunities, and future implications for
transportation policies. J. Mod. Transp. 24(4), 284–303 (2016)
2. Bonnefon, J.F., Shariff, A., Rahwan, I.: The social dilemma of
autonomous vehicles. Science 352(6293), 1573–1576 (2016)
3. Du, Y.: On the moral responsibility in robot ethics. Stud. Sci. Sci.
35(11), 1608–1613 (2017)
4. Duffy, S., Hopkins, J.: Sit, stay, drive: the future of autonomous
car liability. SMU Sci. Technol. Law Rev. 16, 453–480 (2013)
5. Fagnant, D.J., Kockelman, K.: Preparing a nation for autonomous
vehicles: opportunities, barriers and policy recommendations.
Transp. Res. Part A Policy Pract. 77, 167–181 (2015)
6. Fleetwood, J.: Public health, ethics, and autonomous vehicles.
Am. J. Public Health 107(4), 532–537 (2017)
7. Fromm, E.: Man for himself. Ethics 59–60 (1947)
8. He, H.: An analysis of the ethical dilemma, causes and counter-
measures of driverless vehicles. Stud. Dialectics Nat. 33, 58–62
(2017)
9. Hevelke, A., Nida-Rümelin, J.: Responsibility for crashes of
autonomous vehicles: an ethical analysis. Sci. Eng. Ethics 21,
619–630 (2015)
10. Hubbard, P.: “Sophisticated robots”: balancing liability, regula-
tion, and innovation. Fla. Law Rev. 66, 1803–1872 (2014)
11. Koopman, P., Wagner, M.: Autonomous vehicle safety: an inter-
disciplinary challenge. IEEE Intell. Transp. Syst. Mag. 9(1),
90–96 (2017)
12. Leben, D.: A Rawlsian algorithm for autonomous vehicles. Ethics
Inf. Technol. 19(2), 107–115 (2017)
13. Lipson, H., Kuman, M.: Driverless: Intelligent Cars and the Road
Ahead, pp. 57–58. MIT Press, Cambridge (2016)
14. Liu, H.Y.: Irresponsibilities, inequalities, and injustice for autono-
mous vehicles. Ethics Inf. Technol. 19, 193–207 (2017)
15. Marchant, G., Lindor, R.: The coming collision between autono-
mous vehicles and the liability system. Santa Clara Law Rev. 52,
1321–1340 (2012)
16. Martinho, A., Herber, N., Kroesen, M., Chorus, C.: Ethical issues in focus by the autonomous vehicles industry. Transp. Rev. (2021). https://doi.org/10.1080/01441647.2020.1862355
17. Nunes, A., Reimer, B., Coughlin, J.F.: People must retain control
of autonomous vehicles. Nature 556, 169–171 (2018)
18. Pereira, R.H.M., Schwanen, T., Banister, D.: Distributive justice
and equity in transportation. Transp. Rev. 37(2), 170–191 (2017)
19. Santoni de Sio, F.: Killing by autonomous vehicles and the legal
doctrine of necessity. Ethical Theory Moral Pract. 20, 411–429
(2017)
20. Sparrow, R., Howard, M.: When human beings are like drunk
robots: driverless vehicles, ethics, and the future of transport.
Transp. Res. Part C Emerg. Technol. 80, 206–215 (2017)
21. Wiseman, Y., Grinberg, I.: Circumspectly crash of autonomous vehicles. In: Proceedings of the IEEE International Conference on Electro Information Technology (EIT 2016), pp. 387–392. Grand Forks, North Dakota, USA (2016)
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
... On the other hand, there may exist intrinsic relations between crashes and impact factors (e.g. crash rate vs travel speed) [67] , and vice versa, which may generate endogeneity issue. Similarly, without taking into account of the endogenous variables, the model specification may be biased or the resulting impact may be postulated. ...
Article
Full-text available
Road safety has long been considered as one of the most important issues. Numerous studies have been conducted to investigate crashes with significant progress, whereas most of the work concentrates on the lifespan period of roadways and safety influencing factors. This paper undertakes a systematic literature review from the crash procedure to identify the state-of-the-art knowledge, advantages and disadvantages of crash risk, crash prediction, crash prevention and safety of connected and autonomous vehicles (CAVs). As a result of this literature review, substantive issues in general, data source and modeling selection are discussed, and the outcome of this study aims to provide the summary of crash knowledge with potential insight into both traditional and emerging aspects, and guide the future research direction in safety.
... Shown from this event, two critical issues can be extracted, safety and liability. When traffic crashes or fatalities occur, it is necessary to investigate how to determine the injury severity levels, and who is responsible for the crash, the AV itself, the owner or the administration department [More details can be referred to Li et al. (2022)]. Nowadays, the AVs' R&D and on-site testing have been growing vigorously all over the world, and the commercialization will be realized in the near future. ...
Article
Full-text available
Purpose – This study aims to investigate the safety and liability of autonomous vehicles (AVs), and identify the contributing factors quantitatively so as to provide potential insights on safety and liability of AVs. Design/methodology/approach – The actual crash data were obtained from California DMV and Sohu websites involved in collisions of AVs from 2015 to 2021 with 210 observations. The Bayesian random parameter ordered probit model was proposed to reflect the safety and liability of AVs, respectively, as well as accommodating the heterogeneity issue simultaneously. Findings – The findings show that day, location and crash type were significant factors of injury severity while location and crash reason were significant influencing the liability. Originality/value – The results provide meaningful countermeasures to support the policymakers or practitioners making strategies or regulations about AV safety and liability.
Article
Advances in deep learning have revolutionized cyber-physical applications, including the development of autonomous vehicles. However, real-world collisions involving autonomous control of vehicles have raised significant safety concerns regarding the use of deep neural networks (DNNs) in safety-critical tasks, particularly perception. The inherent unverifiability of DNNs poses a key challenge in ensuring their safe and reliable operation. In this work, we propose perception simplex, a fault-tolerant application architecture designed for obstacle detection and collision avoidance. We analyse an existing LiDAR-based classical obstacle detection algorithm to establish strict bounds on its capabilities and limitations. Such analysis and verification have not yet been possible for deep learning-based perception systems. By employing verifiable obstacle detection algorithms, perception simplex identifies obstacle existence detection faults in the output of unverifiable DNN-based object detectors. When faults with potential collision risks are detected, appropriate corrective actions are initiated. Through extensive analysis and software-in-the-loop simulations, we demonstrate that perception simplex provides deterministic fault tolerance against obstacle existence detection faults, establishing a robust safety guarantee.
Preprint
As Autonomous Vehicle (AV) development has progressed, concerns regarding the safety of passengers and agents in their environment have risen. Each real-world traffic collision involving autonomously controlled vehicles has compounded this concern. Open-source autonomous driving implementations show a software architecture with complex interdependent tasks, heavily reliant on machine learning and Deep Neural Networks (DNNs), which are vulnerable to non-deterministic faults and corner cases. These complex subsystems work together to fulfill the mission of the AV while also maintaining safety. Although significant improvements are being made towards increasing the empirical reliability of and confidence in these systems, the inherent limitations of DNN verification create an, as yet, insurmountable challenge in providing deterministic safety guarantees in AVs. We propose Synergistic Redundancy (SR), a safety architecture for complex cyber-physical systems such as AVs. SR provides verifiable safety guarantees against specific faults by decoupling the mission and safety tasks of the system. While independently fulfilling their primary roles, the partially functionally redundant mission and safety tasks are able to aid each other, synergistically improving the combined system. The synergistic safety layer uses only verifiable and logically analyzable software to fulfill its tasks. Close coordination with the mission layer allows easier and earlier detection of safety-critical faults in the system. SR simplifies the mission layer's optimization goals and improves its design. SR enables safe deployment of high-performance, though inherently unverifiable, machine learning software. In this work, we first present the design and features of the SR architecture and then evaluate the efficacy of the solution, focusing on the crucial problem of obstacle existence detection faults in AVs.
Preprint
Perception of obstacles remains a critical safety concern for autonomous vehicles. Real-world collisions have shown that the autonomy faults leading to fatal collisions originate from obstacle existence detection. Open source autonomous driving implementations show a perception pipeline with complex interdependent Deep Neural Networks. These networks are not fully verifiable, making them unsuitable for safety-critical tasks. In this work, we present a safety verification of an existing LiDAR based classical obstacle detection algorithm. We establish strict bounds on the capabilities of this obstacle detection algorithm. Given safety standards, such bounds allow for determining LiDAR sensor properties that would reliably satisfy the standards. Such analysis has as yet been unattainable for neural network based perception systems. We provide a rigorous analysis of the obstacle detection system with empirical results based on real-world sensor data.
Article
Full-text available
The onset of autonomous driving has provided fertile ground for discussions about ethics in recent years. These discussions are heavily documented in the scientific literature and have mainly revolved around extreme traffic situations depicted as moral dilemmas, i.e. situations in which the autonomous vehicle (AV) is required to make a difficult moral choice. Quite surprisingly, little is known about the ethical issues that the AV industry itself focuses on. General claims have been made about the struggles of companies regarding the ethical issues of AVs, but these lack proper substantiation. As private companies are highly influential on the development and acceptance of AV technologies, a meaningful debate about the ethics of AVs should take into account the ethical issues prioritised by industry. In order to assess the awareness and engagement of industry on the ethics of AVs, we inspected the narratives in the official business and technical reports of companies with an AV testing permit in California.
The findings of our literature and industry review suggest that: (i) given the plethora of ethical issues addressed in the reports, autonomous driving companies seem to be aware of and engaged in the ethics of autonomous driving technology; (ii) scientific literature and industry reports prioritise safety and cybersecurity; (iii) scientific and industry communities agree that AVs will not eliminate the risk of accidents; (iv) scientific literature on AV technology ethics is dominated by discussions about the trolley problem; (v) moral dilemmas resembling trolley cases are not addressed in industry reports but there are nuanced allusions that unravel underlying concerns about these extreme traffic situations; (vi) autonomous driving companies have different approaches with respect to the authority of remote operators; and (vii) companies seem invested in a lowest liability risk design strategy relying on rules and regulations, expedite investigations, and crash/collision avoidance algorithms.
Article
Full-text available
With their prospect for causing both novel and known forms of damage, harm and injury, the issue of responsibility has been a recurring theme in the debate concerning autonomous vehicles. Yet, the discussion of responsibility has obscured the finer details both between the underlying concepts of responsibility, and their application to the interaction between human beings and artificial decision-making entities. By developing meaningful distinctions and examining their ramifications, this article contributes to this debate by refining the underlying concepts that together inform the idea of responsibility. Two different approaches are offered to the question of responsibility and autonomous vehicles: targeting and risk distribution. The article then introduces a thought experiment which situates autonomous vehicles within the context of crash optimisation impulses and coordinated or networked decision-making. It argues that guiding ethical frameworks overlook compound or aggregated effects which may arise, and which can lead to subtle forms of structural discrimination. Insofar as such effects remain unrecognised by the legal systems relied upon to remedy them, the potential for societal inequalities is increased and entrenched, situations of injustice and impunity may be unwittingly maintained. This second set of concerns may represent a hitherto overlooked type of responsibility gap arising from inadequate accountability processes capable of challenging systemic risk displacement.
Article
Full-text available
How should autonomous vehicles (aka self-driving cars) be programmed to behave in the event of an unavoidable accident in which the only choice open is one between causing different damages or losses to different objects or persons? This paper addresses this ethical question starting from the normative principles elaborated in the law to regulate difficult choices in other emergency scenarios. In particular, the paper offers a rational reconstruction of some major principles and norms embedded in the Anglo-American jurisprudence and case law on the “doctrine of necessity”, and assesses which, if any, of these principles and norms can be utilized to find reasonable guidelines for solving the ethical issue of the regulation of the programming of autonomous vehicles in emergency situations. The paper covers the following topics: the distinction between “justification” and “excuse”, the legal prohibition of intentional killing outside self-defence, the incommensurability of goods, and the legal constraints on the use of lethal force set by normative positions: obligations, responsibility, rights, and authority. For each of these principles and constraints the possible application to the programming of autonomous vehicles is discussed. Based on the analysis, some practical suggestions are offered.
Article
Full-text available
Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the Maximin procedure is what self-interested agents would use from an original position, and then show how the Maximin procedure can be operationalized to produce unique outputs over probabilities of survival.
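The selection rule described above can be sketched in a few lines: for each candidate action, the vehicle's estimated survival probability for every affected person is collected, and the action whose worst-off person fares best is chosen. This is a minimal illustration of the Maximin procedure only; the action names and probability estimates are hypothetical, not outputs of any real AV system.

```python
def maximin_choice(actions: dict[str, list[float]]) -> str:
    """Return the action maximizing the minimum survival probability,
    i.e. the choice a self-interested agent in an original bargaining
    position of fairness would accept under Rawls' Maximin procedure."""
    return max(actions, key=lambda a: min(actions[a]))

# Hypothetical survival-probability estimates per affected person.
estimates = {
    "brake_straight": [0.50, 0.95],  # worst-off person: 0.50
    "swerve_left":    [0.70, 0.80],  # worst-off person: 0.70
}
assert maximin_choice(estimates) == "swerve_left"
```

Note the contrast with a Utilitarian rule, which would instead maximize the sum of the probabilities and here would pick "brake_straight" (1.45 vs. 1.50); the Maximin rule protects the worst-off person rather than the aggregate.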
Article
Full-text available
Ensuring the safety of fully autonomous vehicles requires a multi-disciplinary approach across all the levels of functional hierarchy, from hardware fault tolerance, to resilient machine learning, to cooperating with humans driving conventional vehicles, to validating systems for operation in highly unstructured environments, to appropriate regulatory approaches. Significant open technical challenges include validating inductive learning in the face of novel environmental inputs and achieving the very high levels of dependability required for full-scale fleet deployment. However, the biggest challenge may be in creating an end-to-end design and deployment process that integrates the safety concerns of a myriad of technical specialties into a unified approach.
Article
Full-text available
Over the past decades, transport researchers and policy-makers have devoted increasing attention to questions about justice and equity. Nonetheless, there is still little engagement with theories in political philosophy to frame what justice means in the context of transport policies. This paper reviews key theories of justice (utilitarianism, libertarianism, intuitionism, Rawls’ egalitarianism, and Capability Approaches (CAs)) and critically evaluates the insights they generate when applied to transport. Based on a combination of Rawlsian and CAs, we propose that distributive justice concerns over transport disadvantage and social exclusion should focus primarily on accessibility as a human capability. This means that, in policy evaluation, a detailed analysis of the distributional effects of transport policies should take account of the setting of minimum standards of accessibility to key destinations and the extent to which these policies respect individuals’ rights and prioritise disadvantaged groups, reduce inequalities of opportunities, and mitigate transport externalities. A full account of justice in transportation requires a more complete understanding of accessibility than traditional approaches have been able to deliver to date.
Article
Legislation on the testing of self-driving cars does not address liability and safety concerns, warn Ashley Nunes, Bryan Reimer and Joseph F. Coughlin.
Article
It is often argued that driverless vehicles will save lives. In this paper, we treat the ethical case for driverless vehicles seriously and show that it has radical implications for the future of transport. After briefly discussing the current state of driverless vehicle technology, we suggest that systems that rely upon human supervision are likely to be dangerous when used by ordinary people in real-world driving conditions and are unlikely to satisfy the desires of consumers. We then argue that the invention of fully autonomous vehicles that pose a lower risk to third parties than human drivers will establish a compelling case against the moral permissibility of manual driving. As long as driverless vehicles aren't safer than human drivers, it will be unethical to sell them. Once they are safer than human drivers when it comes to risks to third parties, then it should be illegal to drive them: at that point human drivers will be the moral equivalent of drunk robots. We also describe two plausible mechanisms whereby this ethical argument may generate political pressure to have it reflected in legislation. Freeing people from the necessity of driving, though, will transform the relationship people have with their cars, which will in turn open up new possibilities for the transport uses of the automobile. The ethical challenge posed by driverless vehicles for transport policy is therefore to ensure that the most socially and environmentally beneficial of these possibilities is realised. We highlight several key policy choices that will determine how likely it is that this challenge will be met.
Article
With the potential to save nearly 30 000 lives per year in the United States, autonomous vehicles portend the most significant advance in auto safety history by shifting the focus from minimization of postcrash injury to collision prevention. I have delineated the important public health implications of autonomous vehicles and provided a brief analysis of a critically important ethical issue inherent in autonomous vehicle design. The broad expertise, ethical principles, and values of public health should be brought to bear on a wide range of issues pertaining to autonomous vehicles.