
People perception of autonomous vehicles: Legal and ethical issues


Abstract and Figures

In the past five years, self-driving or autonomous cars have reached a major milestone: they are now commercially available, as Waymo, the autonomous-car company that emerged from Google, launched its service in the suburbs of Phoenix in December 2018. Autonomous cars can sense their surrounding environment, and their control systems can interpret that information to identify navigation paths, road barriers, and traffic signals. The journey of autonomous cars has raised many questions about their legal and ethical implications. These cars combine information from a variety of sensors, including radar, lidar, sonar, GPS, and odometry, for real-time decision making. In this article, we review the literature to understand the primary questions related to liability, ethics, and legal issues. From the last six years, 523 papers were selected and then shortlisted on the basis of relevance; 84 papers were ultimately retained to frame the discussion on tort liability, product liability, and strict liability. Utilitarian, deontological, and virtue ethics theories are discussed with respect to writing ethical code for designing autonomous cars. These cars are expected to reduce accidents through a centralized traffic system based on inter-vehicle communication. Furthermore, an online survey of 2021 participants was conducted in five different countries to understand perception of, and trust in, the different autonomy levels of self-driving cars. Participants expressed their level of trust from a safety perspective, their concern about the high one-time cost, the need for legislation by local governments, and a fear of rising unemployment due to autonomous cars. The results indicate a mixed perception: people want this technology but are concerned about its legal and ethical implications. The paper is helpful for researchers, manufacturers, and law enforcement agencies in the implementation of autonomous cars.
Article
Full-text available
Cloud computing is an effective, fast-growing, and low-cost way for today's business organizations to provide professional and IT services over the Internet. Cloud computing offers a variety of service and deployment models, providing both solid support for organizations to secure their business interests and the flexibility to deliver new services. Security threats are associated with each service and deployment model; they vary and depend on a wide range of factors, including the sensitivity of information, resources, and architectures. Over time, business and cloud organizations tend to tighten their security posture. For effective threat management, a cloud service provider must perform threat assessments on a regular basis. A threat agent is an individual or group that exploits vulnerabilities, manifests a threat, and conducts damaging activities. An alarming situation arises when a threat agent breaches security and leaks an organization's confidential and sensitive information. To address this situation, we propose an Agent-Based Information Security Threat Management Framework that ensures threat mitigation in cloud environments. The core objective of this article is to present the framework for better understanding of the full process, from threat identification to the application of countermeasures. We also introduce software and intelligent agent concepts that gather relevant information security data for use in the proposed framework, and we develop a system that facilitates organizations in defining, updating, proposing, validating, and applying measures against each threat agent. The proposed framework was validated using a fuzzy logic inference system and was simulated and tested in MATLAB®. The framework covers all cloud service and deployment models, and cloud organizations can apply it to mitigate threats.
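The fuzzy-inference validation mentioned in the abstract can be illustrated with a minimal Mamdani-style sketch in pure Python. The membership functions, the 0–10 scales, and the four-rule base below are illustrative assumptions for exposition; they are not the paper's actual model, which was built and tested in MATLAB®.

```python
# A minimal Mamdani-style fuzzy inference sketch for scoring a threat.
# The membership shapes, scales, and rule base are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def threat_severity(likelihood, impact):
    """Map crisp likelihood and impact (0..10) to a crisp severity (0..10)."""
    # Fuzzify the two inputs into "low" and "high" memberships.
    low_l, high_l = tri(likelihood, 0, 0, 6), tri(likelihood, 4, 10, 10)
    low_i, high_i = tri(impact, 0, 0, 6), tri(impact, 4, 10, 10)
    # Rule base: AND = min; each rule fires a severity singleton.
    rules = [
        (min(low_l, low_i), 2.0),    # low likelihood, low impact  -> minor
        (min(low_l, high_i), 5.0),   # low likelihood, high impact -> moderate
        (min(high_l, low_i), 5.0),   # high likelihood, low impact -> moderate
        (min(high_l, high_i), 9.0),  # high likelihood, high impact -> critical
    ]
    # Defuzzify with a weighted average (centroid of singletons).
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A real deployment would use many more linguistic terms and rules per threat agent, but the fuzzify/apply-rules/defuzzify pipeline shown here is the core of such an inference system.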
Article
Full-text available
Self-adaptive systems (SAS) can modify their behavior during execution in response to changes in their internal or external environment. The need for self-adaptive software systems has increased tremendously in the last decade due to ever-changing user requirements, improvements in technology, and the need to build software that reacts to user preferences. To build this type of software, we need well-established models that have the flexibility to adjust to new requirements while ensuring that the adaptation is efficient and reliable. Feedback loops have proven very effective in modeling and developing SAS; these loops help the system sense, analyze, plan, test, and execute the adaptive behavior at runtime. Formal methods are well-defined, rigorous, and reliable mathematical techniques that can be used to reason about and specify the behavior of SAS at design time and runtime. Agents can play an important role in modeling SAS because they can work independently, with other agents, and with the environment. Using agents to perform the individual steps in the feedback loop, and formalizing these agents using Petri nets, not only increases reliability but also allows adaptation decisions to be made efficiently at runtime with increased confidence. In this paper, we propose a multi-agent framework for modeling self-adaptive systems using agent-based modeling. This framework will help researchers implement SAS that are more dependable, reliable, autonomic, and flexible because of the use of a multi-agent-based formal approach.
Article
Full-text available
Fast advances in autonomous driving technology trigger the question of suitable operational models for future autonomous vehicles. A key determinant of such operational models’ viability is the competitiveness of their cost structures. Using a comprehensive analysis of the respective cost structures, this research shows that public transportation (in its current form) will only remain economically competitive where demand can be bundled to larger units. In particular, this applies to dense urban areas, where public transportation can be offered at lower prices than autonomous taxis (even if pooled) and private cars. Wherever substantial bundling is not possible, shared and pooled vehicles serve travel demand more efficiently. Yet, in contrast to current wisdom, shared fleets may not be the most efficient alternative. Higher costs and more effort for vehicle cleaning could change the equation. Moreover, the results suggest that a substantial share of vehicles may remain in private possession and use due to their low variable costs. Even more than today, high fixed costs of private vehicles will continue to be accepted, given the various benefits of a private mobility robot.
Conference Paper
Full-text available
The automotive domain has undergone a tremendous transformation in the speed and depth of technological development in recent years, with most innovations based on electronics and ICT. As with most ICT-based systems, there are increasing concerns about security and privacy in the automotive domain. In this paper, we present a technical and social analysis of this issue using a methodological scenario-building approach. We believe that current and future solutions must take both technical and social aspects into consideration. Our analysis provides stakeholders with such a view.
Book
As a game-changing technology, robotics naturally will create ripple effects through society. Some of them may become tsunamis. So it's no surprise that "robot ethics", the study of these effects on ethics, law, and policy, has caught the attention of governments, industry, and the broader society, especially in the past several years. Since our first book on the subject in 2012, a groundswell of concern has emerged, from the Campaign to Stop Killer Robots to the Campaign Against Sex Robots. Among other bizarre events, a robot car has killed its driver, and a kamikaze police robot bomb has killed a sniper. Given these new and evolving worries, we now enter the second generation of the debates: robot ethics 2.0. This edited volume is a one-stop authoritative resource for the latest research in the field, which is often scattered across academic journals, books, media articles, reports, and other channels. Without presuming much familiarity with either robotics or ethics, this book helps make the discussion more accessible to policymakers and the broader public, as well as academic audiences. Besides featuring new use cases for robots and their challenges (not just robot cars, but also space robots, AI, and the internet of things as massively distributed robots), we also feature one of the most diverse groups of researchers on the subject, for truly global perspectives.
Conference Paper
Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors such as cameras and LiDAR, can drive without any human intervention. Most major manufacturers, including Tesla, GM, Ford, BMW, and Waymo/Google, are building and testing different types of autonomous vehicles. Lawmakers in several US states, including California, Texas, and New York, have passed new legislation to fast-track the testing and deployment of autonomous vehicles on their roads. However, despite this spectacular progress, DNNs, just like traditional software, often exhibit incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened, including one that resulted in a fatality. Most existing testing techniques for DNN-driven vehicles depend heavily on the manual collection of test data under different driving conditions, which becomes prohibitively expensive as the number of test conditions increases. In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. The tool automatically generates test cases leveraging real-world changes in driving conditions such as rain, fog, and lighting. DeepTest systematically explores different parts of the DNN logic by generating test inputs that maximize the number of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog), many of which lead to potentially fatal crashes, in the three top-performing DNNs in the Udacity self-driving car challenge.
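The coverage-guided test generation that this abstract describes can be sketched in a few lines: apply driving-condition-like transformations to a seed input and keep a transformed input only if it activates neurons that earlier inputs did not. The toy two-layer network, the two transformations, and the activation threshold below are illustrative assumptions, not DeepTest's actual implementation.

```python
# Neuron-coverage-guided test generation, sketched on a toy network.
# The network, transformations, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN: two dense layers with ReLU activations.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(32, 8))

def activations(x):
    """Return the concatenated hidden activations for input vector x."""
    h1 = np.maximum(0.0, x @ W1)
    h2 = np.maximum(0.0, h1 @ W2)
    return np.concatenate([h1, h2])

def neuron_coverage(acts_list, threshold=0.0):
    """Fraction of neurons activated above threshold by any input so far."""
    covered = np.zeros_like(acts_list[0], dtype=bool)
    for a in acts_list:
        covered |= a > threshold
    return covered.mean()

# Transformations standing in for driving-condition changes (rain, fog, ...).
def brighten(x):
    return x + 0.5

def blur(x):
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

seed = rng.normal(size=64)
kept = [activations(seed)]
cov = neuron_coverage(kept)
for transform in (brighten, blur):
    acts = activations(transform(seed))
    new_cov = neuron_coverage(kept + [acts])
    if new_cov > cov:  # keep the transformed input only if coverage grows
        kept.append(acts)
        cov = new_cov
```

In the paper's setting, the inputs are camera images, the transformations are realistic effects such as rain and fog, and an oracle (e.g., metamorphic relations on steering angles) flags erroneous behaviors among the coverage-increasing inputs.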
Article
Self-driving cars are gradually being introduced in the United States and in several Member States of the European Union. Policymakers will thus have to make important choices regarding the application of the law. One important aspect relates to the question of who should be held liable for the damage caused by such vehicles. Arguably, product liability schemes will gain importance, considering that the driver's fault as a cause of damage will become less likely as autonomous systems increase. The application of existing product liability legislation, however, is not always straightforward. Without a proper and effective liability framework, other legal or policy initiatives concerning technical and safety matters related to self-driving cars might be in vain. The article illustrates this conclusion by analysing the limitation periods for filing a claim included in the European Union Product Liability Directive, which are inherently incompatible with the concept of autonomous vehicles. On a micro level, we argue that every aspect of the Directive should be carefully considered in the light of the autonomisation of our society. On the macro level, we believe that ongoing technological evolutions might be the perfect moment to bring the European Union closer to its citizens.
Article
At a test track east of Gothenburg, Sweden, people are ushered into autonomous vehicles for a test drive. But there's a twist: the vehicles aren't actually autonomous (there's a hidden driver in the back), and the people are participating in an experiment to discover how they'll behave when the car is chauffeuring them around. At Zenuity, a joint venture between Volvo and the Swedish auto-safety company Autoliv, this test is just one of many ways we make sure not just that autonomous vehicles work but that they can drive more safely than humans ever could. If self-driving cars are ever going to hit the road, they'll need to know the rules and how to follow them safely, regardless of how much they might depend on the human behind the wheel.