Regulating Autonomy: An Assessment of Policy Language for
Highly Automated Vehicles
Beth-Anne Schuelke-Leech, basl@uwindsor.ca
Sara Jordan-Mattingly,
Betsy Barry
Preprint for article in Review of Policy Research, Vol. 36, No. 4, pp. 547-579
Abstract
Winner (2014) posited that technological development often outpaces the conversation about
the implications of technology. In his discussion of “technological somnambulism” he describes
the phenomenon but does not measure the distance, conceptually speaking, between
technological development and technology policy. In this paper, we strive to answer the
question: what is the difference between the technological conversation and the political
conversation surrounding responsibility for Highly Automated Vehicles?
The development of autonomous system (AS) technologies has outpaced the ethical and
political conversations surrounding these technologies. How far has the technological
conversation outpaced the ethical conversation? Here we describe how the chief policy-making
institutions in the United States (Congress, the Courts, and the Public Administration) are or
are not speaking about autonomous systems technologies. We focus on the specific application
of AS to consumer transportation goods, looking at discussions of responsibility and
accountability for the operation of highly automated vehicles (HAVs), commonly known as
driverless cars. We focus on the language each institution uses to discuss responsibility in the
context of HAVs. We report findings from a big data review of corpus-level documents from four
policy institutions to find that enthusiasm for the technology has outpaced policy and ethical
discussion of the technology.
Introduction
Scholars of science and technology studies posit that the pace of technological development
exceeds the pace of discussion of the political, economic, and ethical implications of that
technology (Winner 2014). Can the difference in the pace of discussion be measured? What is
the difference between technological and political discussions of technology? The purpose of
this article is to offer text data analytics as a method for the measurement of the difference
between conversations among technologically focused commentators and politically salient
regulators and to show how this method might illustrate this difference. We use this method to
examine discussions of the publicly salient case of autonomous systems technologies, focusing
specifically on the development of highly automated vehicles, or driverless cars.
Public encounters with the technological capabilities of autonomous systems, such as
unmanned aerial systems (aka drones) and autonomous vehicles (aka driverless cars and
“highly automated vehicles” [HAVs]), have increased substantially in recent years. Google’s
nascent fleet of autonomous vehicles has driven over 1.4 million miles, often on public roads
(Crane, Logue, & Pilz, 2016). This and the expansion of HAV technology to taxi service in
multiple cities has created the impression that widespread availability of driverless vehicles is
close to hand (Lee 2017). The increased contact with this technology has propelled public
concern about these technologies which, in turn, has motivated policymakers, whether at the
federal or state level, to attend to the governance of these technologies. While there are now
11 states that have enacted legislation and regulations governing the testing and operation of
highly automated vehicles (National Conference of State Legislatures, 2017), states and
corporations are looking to the federal government to lead the way in regulating the safe
operation of these new technologies (NHTSA 2017).
There is a clear desire among policy makers and technical experts to ensure that these
technologies, particularly those where no human is “in the loop”, such as SAE Level 4 and 5
vehicles, are developed in a safe and appropriate manner. Essential to the safe deployment of
such vehicles is a rigorous testing regime that provides sufficient data to determine safety
performance and help policymakers at all levels make informed decisions about deployment
(NHTSA, 2016a). For example, the National Highway Traffic Safety Administration (NHTSA)
released a statement in January 2016 outlining plans to improve the development and
deployment of highly automated vehicles (HAVs) (NHTSA, 2016b). Since the 2016 statement,
NHTSA has promulgated additional rules and guidance documents (NHTSA-2016-0090; NHTSA
2016c; NHTSA 2017a; NHTSA 2017b).
While the pace of NHTSA policy development in this area is quicker than in other areas, policy
for autonomous systems technology is still behind the technology itself. As the recent problem
with the implementation of the regulations regarding recreational and commercial use of
unmanned aerial systems (the Federal Aviation Administration 333 exemption) illustrated, the
tools of policy making almost always lag behind the application of new technologies (Hayhurst,
Maddalon, Neogi, & Vertstynen, 2016). The FAA 333 exemption for the use of unmanned aerial
systems without the standard requirements of “a certificated and registered aircraft, a licensed
pilot, and operational approval” caused regulatory havoc for businesses around the US, from
small-scale agricultural operators to large research-intensive universities: over 7,300
organizations petitioned the FAA for an exemption to the rules (Federal Aviation
Administration, 2012; Bellows, 2013). The FAA 333 exemption problem, while interesting in its own right, seems to be
yet another example of technology disrupting policy (Federal Aviation Administration, 2016).
From the perspective of science and technology policy studies, it would seem that policymakers
are sleepwalking into a minefield of competing policy perspectives. If policymakers are asleep,
what is the risk to the public? The purpose of this article is to ascertain whether the degree of
public risk can be judged (partially, because indirectly) based upon the narrative coherence
between regulators and technologists on key issues pertinent to public safety. We posit that a
high degree of conceptual difference will show a larger distance and thus will correspond to
greater risk of public harm while regulators “wake up” and “catch up”. We surmise that kernels
of narrative coherence between federal policy making discussions and technologists’
discussions can offer wisdom and guidance for establishing coherent policy concerning safety
and responsibility for HAVs. To find narrative coherence or incoherence, we use a big data
analytics technique to examine four corpora of documents-- from Congress, bureaucratic
officials from the federal government, U.S. federal Courts, and technological entrepreneurs--
over an extended period of years, to determine what is similar or different between what all
parties say are the responsibilities of designers and users of HAVs.
The Case of Autonomous Vehicles
Defining Autonomous Vehicles
The phrase autonomous vehicle is a provocative one that conjures images of empty driver’s
seats in moving vehicles heading to destinations unknown. The mélange of terms used to
describe these entities (driverless cars, autonomous vehicles, ground-based drones) creates
its own confusion. In order to refine the language in this technological area, the Society of
Automotive Engineers and the NHTSA have designated six levels of automation, prompting the
relevant agencies to (attempt to) change the policy language governing these technologies
from “autonomous vehicles” to “highly automated vehicles”. (It is not clear, however, that this
language change has permeated the public discussion of the technology, as the phrases
“autonomous vehicle” and “self-driving car” seem to persist.)
According to the National Highway Traffic Safety Administration, there are six
operational levels of autonomy for vehicles, Level 0 through Level 5 (NHTSA 2013, 2016;
Anderson et al., 2014; Bar-Yam, 2004). These levels are adopted from the SAE International
definitions for levels of automation, and it is apparent that a regulatory change will need to be
in place for the operation of Level 3 through Level 5 vehicles. In their most recent policy document, the NHTSA
and DOT make it clear that policy change will emerge as the level of autonomy increases from
human operated to “highly automated” (Goodall, 2014; Merat, Jameson, Lai, Daly, & Carsten,
2014). The levels of driving automation can be succinctly explained by pointing out the obvious
change between Levels 0, 3 and 5 vehicles:
At SAE Level 0, the human driver does everything;
At SAE Level 3, an automated system can both actually conduct some parts of the driving
task and monitor the driving environment in some instances, but the human driver must
be ready to take back control when the automated system requests [it];
At SAE Level 5, the automated system can perform all driving tasks, under all conditions
that a human driver could perform them (NHTSA 2016b, p. 9).
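The level scheme above can be sketched as a simple lookup structure. This is an illustrative sketch only: the Level 0, 3, and 5 descriptions paraphrase the quotation above, the Level 1, 2, and 4 descriptions paraphrase standard SAE definitions not quoted here, and the function name is our own. It also encodes the U.S. DOT assumption, noted below, that no state licensing of a human driver is required above Level 3.

```python
# Paraphrased SAE J3016 / NHTSA levels of driving automation (after NHTSA 2016b).
SAE_LEVELS = {
    0: "No automation: the human driver does everything.",
    1: "Driver assistance: the system assists with steering or speed.",
    2: "Partial automation: the system controls steering and speed; the driver monitors.",
    3: "Conditional automation: the system drives and monitors in some instances, "
       "but the driver must be ready to take back control on request.",
    4: "High automation: the system operates without human intervention or monitoring "
       "in defined conditions.",
    5: "Full automation: the system performs all driving tasks under all conditions "
       "that a human driver could.",
}

def requires_licensed_human_driver(level: int) -> bool:
    """U.S. DOT assumption: a state-licensed human driver is needed only through Level 3."""
    return level <= 3

for level, description in SAE_LEVELS.items():
    print(level, requires_licensed_human_driver(level), description)
```

Under these assumptions, Levels 0 through 3 report `True` and Levels 4 and 5 report `False`, which is the regulatory cliff the paper discusses.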
Whatever they are called, HAVs are proposed to solve a consistent and deadly problem.
According to analysis of recent driver interaction studies, such as the SHRP2 study (Strategic
Highway Research Program), it is the drivers themselves that are the primary cause of incidents
on roadways (Dingus, Klauer, et al., 2006; Dingus, Neale, Klauer, Petersen, & Carroll, 2006;
Klauer et al., 2014). Driver inattentiveness, fatigue, intoxication, or attention to other
problems, such as addressing unruly children in the back seat of a vehicle, are the primary
causes of most vehicle crashes. Indeed, as the New York City Department of Transportation
has advertised in its “Vision Zero” campaign, there are no “accidents” on the
roads, only missteps (York, 2016). All driver incidents, according to this new
campaign, are related to driver inattentiveness. Promises of a system that takes the fallible
driver out of the equation and replaces it with an “all-seeing”, never-fatiguing, algorithmically-
driven system, suggest a future with a high degree of roadway safety.
The use of the term driver for a Level 4 or 5 highly automated vehicle seems itself
problematic. Instead, terms such as user, navigator, director, or pilot seem more
appropriate. This is because, in a level 4 or 5 vehicle, the users have no direct control over the
operation of the vehicle; they present a mission to the vehicle and are taken there as
passengers. The term “pilot” invokes perhaps the closest available heuristic for us to
understand the use of this tool: a Level 3 vehicle will work in ways similar to how systems such
as autopilot in commercial aircraft or in cargo aircraft work in order to ensure fewer air traffic
problems and incidents. The term “user” conjures the idea of an occupant in the vehicle
treating the vehicle as a “mere instrument” to satisfy their need to arrive at point B from point
A. However, this “consumer” model does not fully account for the potential problem of
assigning responsibility for the conduct of the vehicle on the way from points A to B.
The introduction of highly automated or autonomous vehicles raises complicated
questions for what we are willing to permit pilots (or users) to do on roadways. At present,
attentive drivers will recognize that they are restricted from doing a number of things that they
might wish to do: texting, answering emails, speaking on the phone using a hand-held operating
device, not wearing seatbelts while underway, or driving under the influence of drugs and
alcohol (Harrison, 2011; Wilson & Stimson, 2008). In an autonomous system, however, there is
no opportunity for the pilot to be directly in control of the vehicle, which presents questions of
whether sobriety and attentiveness are required by the pilots. In fact, it is just as likely that
users of HAVs might want to totally disengage from the driving process and instead focus on
some other activity (e.g., answering email, working, sleeping, reading, watching a film, or
engaging in some other forms of entertainment) (Gold & Bengler, 2014). These things may
seem out of reach at present but, if HAVs operate as promised, there is no transportation
safety-centric reason to believe that individuals should not be engaged in any of these
activities.
The most potentially disruptive feature of an autonomous vehicle is replacement of the
decision-making of the ideal or “good” driver. The ideal driver-- a sober, alert, awake, and
attentive driver-- accomplishes all of the functions that a fully autonomous vehicle system
would automate. This includes understanding the rules of the road and following these rules
under all conditions (e.g., remaining in one's lane, not crossing into the path of oncoming
traffic, braking in a smooth and controlled manner, managing blind spots, and engaging in
competent parking) (Dingus, Klauer, et al., 2006). Autonomous vehicles will use tools, such as
radar, LIDAR (light detection and ranging), ultrasound, and other sensing equipment, to
ensure that humans reach their destinations safely and securely on every
journey. There are also some expectations that HAVs will reduce traffic congestion, improve
fuel usage, and increase individual vehicle usage through the ability to manage and optimize
the whole transportation system.
If projected coverage figures are met, fully autonomous passenger vehicles (Highly Automated
Vehicles or HAVs) will significantly disrupt ordinary transportation (Katyal, 2013; Thierer &
Hagemann, 2015). The disruption of HAVs will be more than merely technological: there is
concern about how HAVs will operate safely and reliably on the open roadways, how HAVs will
be regulated, and how HAVs will make fraught, ethically troublesome, human decisions (Davies,
2016). Public and corporate demand for regulations to outline requirements for safety are
prompting development of new measures designed to govern the technical capacities of the
product. Other governance efforts, short of regulation, will address the normative concerns
that the public has about the use of the technology (Meseko, 2014).
“Who is responsible?” for the safe conduct of HAVs is the primary technical and ethical
question for regulators to answer. The answer to this question is not straightforward. Though
the licensing of drivers and inspection of vehicles is a state responsibility, many of the issues of
vehicle development and safety are the responsibility of the federal government in the United
States. The federal government has designated the National Highway Traffic Safety
Administration (NHTSA) and the U.S. Department of Transportation (U.S. DOT) as the agencies
responsible for establishing the standards for safety in the transportation system. It is these
federal organizations which govern and regulate the development of conventional vehicles,
primarily through the Federal Motor Vehicle Safety Standards (FMVSS), which set a minimum
standard for vehicle safety (Crane et al., 2016).
Highly automated vehicles will be governed under a more complex regulatory framework
because HAVs design and operations bring together the traditional vehicle safety standards
with many other regulatory structures, including those pertinent to communication,
cybersecurity, and privacy regulations. The current regulatory framework is fragmented, so
that communications regulations, which are integral to autonomous guidance systems like
vehicle-to-vehicle communications (V2V), are not considered to be integral to the vehicle as a
regulated entity (NHTSA 2016d). This puts the communications system of an HAV outside of
the scope of the NHTSA, which only regulates components or devices attached to the car
(Jerome, 2016). Telecommunications and the internet standards are overseen by the Federal
Communications Commission and the Federal Trade Commission (New York Times Editorial
Board 2017).
The regulatory structure for HAVs will require more players than do conventional vehicles. We
represent the likely federal level regulatory and oversight agencies for autonomous vehicle
governance in Figure 1 “Federal Oversight Areas for Highly automated vehicles”.
Figure 1: Federal Oversight Areas for Highly automated vehicles
While the technological requirements for an autonomous vehicle will necessarily motivate
regulatory collaboration, it is unlikely that current federal guidelines, such as the Federal Motor
Vehicle Safety Standards (FMVSS), will be a clear exemplar for final regulations on HAVs.
Instead, there will likely be a network of regulatory instruments that come together for the
foreseeable future. For example, the NHTSA has issued a policy statement, initially released in
2013 and updated in September 2016, which recommends that states should establish
guidelines for testing HAVs (NHTSA, 2016a). Importantly, however, states will no longer have
the responsibility for certifying or licensing competence of human drivers: HAVs operate
independently of human operators. Later instruments, promulgated in 2017, rolled back this
decision and handed authority back to the states and, importantly, to state and industry
cooperation. Via the model policy for states and under the limited regulatory mandate of the
new NHTSA administration, opportunities for experimentation with policy variation were
opened (NHTSA 2017a). Other important implications were retained, however, such as the
formalization of the SAE levels of automation.
The recent guidance from the NHTSA outlines clearly that NHTSA and the Federal Government
have a set of responsibilities for highly automated vehicles, while the states and localities have
other responsibilities (NHTSA, 2016a). As designated by the Society of Automotive Engineers
(SAE, 2014), Level 4 (High) and Level 5 (Full) automation assume that the vehicle is able to
operate independently of human intervention and monitoring (SAE, 2015). The U.S. Department
of Transportation assumes that no state-licensing of a human driver is required after Level 3 [1].
Therefore, the safety concerns associated with a human driver are expected to decrease
significantly if not completely with the deployment of HAVs, negating the need for state
intervention into licensed safe conduct of the socio-technical system of HAVs. However, many
of the FMVSS standards continue to assume the presence of a human driver which complicates
the definition of responsible and safe conduct (Kim, Perlman, Bogard, & Harrington, 2016). As
Kim et al. (2016) detail:
DOT and the Federal Government are responsible for regulating motor vehicles and
motor vehicle equipment, and States are responsible for regulating the human driver
and most other aspects of motor vehicle operation. As motor vehicle equipment
increasingly performs “driving” tasks, DOT's exercise of its authority and responsibility to
regulate the safety of such equipment will increasingly encompass tasks similar to
“licensing” of the non-human “driver” (e.g., hardware and software performing part or
all of the driving task) (p. 38).
If the conventional domains of responsibility in automotive and transportation engineering are
unmoored by this technology, how much disruption to policy and public safety can be forecast?
AVs as Disruptive Technologies
Not all emerging technologies cause significant disruptions to the fabric of communities.
Disruptive technologies change the way in which we view ourselves, how we relate to one
another, how we relate to machines, and how we relate to our government. But not all
disruptive technologies come onto the scene quickly. Emerging technologies may come
subtly into daily life, brought in through occasional crossovers from specific
industries or groups (Markides, 2006). Wearable health monitors, such as FitBits, are an
example. Borrowed from monitoring devices engineered to treat or ameliorate disease, these
technologies are now burgeoning sources of material for the “datafied self” and the internet of
things.
Some technologies, however, are more disruptive than others (Jang, 2013). As Christensen
(1997) made clear in his work, disruptive technologies create new markets and new values that
were previously unseen in the conventional marketplace. From the perspective of legislation,
disruptive technologies change the dynamics of knowledge, expertise, and legitimacy (Hart &
Christensen, 2002). Within the context of HAVs, questions about knowledge, authority, and
legitimacy are already present. From the perspective of technological regulation of HAVs,
safety regulations for traffic and for operations are grounded on the paradigm of a driver
piloting the vehicle which, in turn, rely historically on the idea of a driver running a team of
horses. Even the language of safety, such as drivetrains measured in horsepower and
passengers restrained in seats with steering wheel airbags, does not accommodate the context
of HAVs readily. In HAVs, the human occupants are not the “driver”; the driver is a computer
algorithm and airbags may not be necessary to deploy in an incident involving a vehicle without
passengers operating independently (Thierer & Hagemann, 2015).
From the perspective of ethically relevant regulation, the public is waiting for government
officials to establish law and regulations that govern the use of vehicles that are themselves
agent-actors, not mere instruments (Narla, 2013). Regulating the conduct of artificial
intelligence (AI) and machine learning systems presents new challenges for determination of
“driver’s” responsibility and liability, challenges the paradigm of vehicles as a consumer partner
or substitute rather than a consumer product, and creates substantial concerns about user
privacy if HAVs were used as “ground-based surveillance drones” as some strong skeptics have
argued. We now turn to the challenges to ethical perceptions raised by HAVs.
Disruptions to Ethical Perceptions
Highly automated vehicles (HAVs) seem to hold out the substantial promise for making us safer,
faster, better rested, and more productive. Those at Levels 4 and 5 of the NHTSA automation
levels are also thought to present challenges to foundational notions of human ethical
responsibility [2].
Multiple authors have argued that HAVs challenge our sacred definitions of what it means to be
a human in responsible control of a machine or process, particularly in a situation of harm [3-6].
These arguments and those addressed to specific technologies embedded in HAVs, such as
artificial intelligence, machine learning, and computer vision, have also brought forth some
uncomfortable realizations that beliefs about good actions vary widely enough that policies
designating a choice will be “wrong” for some and paternalistic for others (IEEE Ethically
Aligned Design 2017).
Presently, the most troubling ethical problem for policy makers seems to be determining how
to incorporate HAVs into a predefined scheme of driver liability and responsibility. (The legal
aspects of liability have been discussed at length elsewhere (Colonna 2012; Thierer and
Hagemann 2015) but not resolved; here, we address the higher-level issue, still motivating for
legal discussions, of responsibility.) Under most definitions of what it means to be a
responsible person, the responsible actor controls his or her thoughts, choices, actions, and
tools to meet some end s/he has chosen or approved of.
Under this idea, a responsible driver is a human agent that plans and executes the use of a
vehicle to some attributable end. Within the context of HAVs, the quality of responsibility that
attaches to the human driver becomes irrelevant. The technology of the vehicle takes the place
of a thinking, choosing, and acting agent; the vehicle becomes the ethical actor and the
technology the ethical reasoning system.
HAVs were certainly not designed to create a surfeit of irresponsible or unethical drivers.
Instead, HAVs are supposed to remove the chances for even the most responsible driver to
make an inadvertently poor choice. Even if HAVs strip the proximate driver of responsibility,
there are still background agents that can assume responsibility. An example of this ethically
relevant redundancy is Level 3 automation, such as forward braking capacity.
Unfortunately, our ethical and policy language does not disambiguate responsibilities according
to the in vivo (human) and in silico (computer) states of the ethical agents. A distinct notion of HAV
relevant responsibility, wherein responsibility is assigned to collectively acting, but temporally
and professionally disaggregated actors, is not part of our common ethical or political parlance.
Yet, the matter of responsibility weighs heavily in the moral imagination of the public and policy
makers.
Is Responsibility a Pertinent Term for HAVs?
In the paragraphs above, we outlined that there will be no responsible driver (an agent who
makes decisions about how to pilot a vehicle safely) in an HAV, as the driving task is fully
delegated to Level 4 or Level 5 socio-technical systems. Yet, as revealed in the discussions of the
ethics of HAVs, there remain concerns about who might be held responsible if HAVs fail in their
promises of fully safe operation [2]. Given the disruption that HAVs bring to discussions of
responsibility, how ought the term be reimagined for this context?
The problem of autonomous system responsibility is a two-fold conceptual problem: first, the
matter of inscribing responsible agency to HAVs and second, the matter of ascribing moral
responsibility to HAVs. Inscribing responsible agency entails determining what actors, such as
HAVs, do to make responsible decisions. Ascribing responsible agency to HAVs requires
offering boundary conditions against which HAV responsibility and human responsibility will be
measured. With respect to inscribing responsibility, the promise of HAVs includes the
possibility that pilots of HAVs become consumers of a prior group’s designations of acceptable
thoughts and actions. This presents HAV pilots with a condition of heteronomy (multiple selves
legislating) rather than autonomy. Consequently, questions arise regarding the degree of
responsibility that the pilot could have for final decisions made by the vehicle. In the context of
Level 3 vehicles, drivers may choose to ignore the nudges of their automated systems as they
remain in control of the vehicle. True HAVs, wherein there is no human in the loop, present a
genuine challenge of machine heteronomy: only the responsibility, the safety functions,
inscribed into the vehicle is what could be characterized as effective agency. In this case, the
inscription of responsible thought leads to a certain path for ascribed responsibility: HAVs
inscribed with thinking powers that cause an action are responsible, humans in the vehicle are
not responsible because they did no thinking that was a proximate cause of action.
The problem of ascribing responsibility is perhaps the more relevant to policy discussions. If an
HAV is responsible for decision-making (choosing) tasks, then this raises the thorny issue of
HAVs as explicitly ethical agents that could, if requested to do so, give an account of
their thinking. To give full account of one's thoughts is to reveal autonomous decisions. Yet, in the
case of the HAV, they make proximate decisions for actions based upon a series of instructions
determined for it by others. This connection between proximate-cause choice for action and
latent instruction-giving that structures cognitive architecture for choice is related to the
fundamental problem of free will and determinism in ascribing agents' ethical responsibility
(Dworkin 1970; Kane 1999).
Being able to answer for actions is the key thrust of contemporary arguments concerning
ethical responsibility. Addressing the question of an HAV's ability to answer, and the obligation
of vehicle manufacturers to equip their units with mechanisms to answer, seems to be a key
component of the policy and ethical debate around HAVs (IEEE Global Initiative 2015, 2017).
What the ability to answer requires is a connection between actions and the agent’s
autonomous judgments. This connection entails that actions or attitudes must be attributable
to them. Shoemaker, who calls this the “rational relations view”, suggests that:
One is responsible for one’s actions (they are attributable to one; one is answerable for
them) in virtue of their rational connection to one’s evaluative judgments. The actions I
perform ‘reflect my assessment of reasons, and therefore I can, in principle, be called
upon to defend them and am open to rational (and in some cases moral) criticism if an
adequate defense cannot be provided’. With the exception of very rare cases, my
actions will reflect my value judgments, and when doing so, they belong to me in the
sense appropriate for rational and moral appraisal; that is, I am answerable for them in
light of their connection to such judgments (Shoemaker 2011, p. 606).
Under the rational relations view, the ability to be responsible requires the ability to reason and
the capacity to have one's own thoughts and actions, independently of the influence of others.
HAVs will not have the ability to have a “self” to think, but will be a collection of the thoughts of
others, aggregated into decision-systems whose integration may not have been foreseen by any
actor to whom the final choice could be attributable.
One frequent component of discussions around driverless vehicles illustrates the
difficulty of ascribing responsibility where inscribing responsibility was done via a network of
teams: the trolley problem. The trolley problem presents decision-makers with the problem of
choosing between harming (e.g., killing) a smaller or larger number of people by pulling a lever
to guide the pathway of a “runaway” trolley. (Variations on the trolley problem include
attribution of socially desirable (e.g., children) or undesirable (e.g., rapist) characteristics to
individuals who might be harmed by the action of the lever-pullers.) Scholars and public
intellectuals who have incorporated the trolley problem into discussions around inscribing
responsibility into AVs have detailed the myriad technical problems of achieving such refined
vehicle sensing (Lin, 2013).
Others have used the trolley problem example as a heuristic to show how the competing
alternatives that are morally acceptable for HAVs may not be morally acceptable for a universal
audience (Goodall, 2014). What trolley problems have shown is that the debate concerning
HAV responsibility will require substantial reconsideration of the foundations of ethical
responsibility and a willingness on the part of political actors to make difficult, ethically fraught,
choices in the midst of philosophical uncertainty. The temptation to wait until the philosophers
decide is one that Winner cautions against; policy decisions regarding who is responsible for
what will still need to be made in the midst of ethical debate. The contours of the discussion of
responsibility from the side of the philosophers has been detailed above. In the rest of this
article, we show how these points of discussion, whether of inscribing or ascribing
responsibility, or of solving the trolley problem for HAVs, have not been meaningfully
incorporated into policy debate, even when they have been incorporated into technologists’
discussions.
Data and methods
As described in the introduction to this article, the purpose of this analysis is to assess the
difference between technological and policy discussions on the matter of ethical responsibility
for highly automated vehicles. Following a hypothesis from science and technology scholars,
such as Winner, we posit that there is a measurable difference between policy and technology
discussions and the difference can be measured as a matter of narrative consistency between
the relevant actors on key indicators, such as technology failure.
With respect to relevant actors, while stakeholders to the policy process for highly automated
vehicles include representatives from all sides of the political process and the public sphere, we
limit our analysis here to consideration of institutions and actors with a direct stake in the U.S.
federal policy making process.
3
Following theorists of science and technology public policy
making, we include lawmakers and the executive organs of the implementation of law, we also
include the courts as a check on both legislators and bureaucrats’ implementation strategies.
We also include regulated industry, here engineers, entrepreneurs, and industry executives, as
lobbyists to regulated industry influence legislators, comments to proposed rules by industry
has considerable influence, and court cases brought by regulated industry shape the agenda for
other policy institutions (Jasanoff, 2009).
desirable characteristics to the driver (e.g., Samaritan speeding an injured person to accident & emergency
departments) or undesirable characteristics (e.g., bank robber), or insertion of third persons into the situation to
prevent harm to either the down-stream individuals or the lever puller (e.g., the “fat person thrown from a bridge to
stop the trolley” example) (Bleske-Rechek et al 2010; Gold, Colman and Pulford 2014).
3
While state and local actors are undeniably important, the final authority for safety specific regulations will, given
policy history, lie at the federal level in the US.
With respect to indicators of narrative consistency, we use a corpus level analysis of discussions
between these actors on the subject of responsibility for HAVs. As posited by Wolin, we
incorporate both common sense and technical or philosophical terms into our search for
ethically significant policy discussions (Wolin 2016, 16). We use a wide range of possible terms
for HAVs and incorporate multiple terms associated with discussions of responsibility, as
reviewed in the previous section and further discussed in methodology below.
Data
We use data from 14 corpora in our analysis (see Table 1 and Table 2 for details). A corpus is a
collection of documents that have been processed and put into an analyzable format, in this
case a searchable database structure. Processing the documents consists of organizing them,
getting them into a consistent format, normalizing the character strings, and indexing all of the
tokens (words) in order to facilitate search and retrieval (Darwin, 2008).
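The processing steps described above (consistent formatting, string normalization, and token
indexing) can be sketched as follows. This is a minimal illustration in Python; the function
names and the inverted-index layout are our own assumptions, not the authors' actual pipeline.

```python
import re
from collections import defaultdict

def normalize(text):
    """Normalize a character string: lowercase, strip punctuation,
    and collapse whitespace so strings are consistent across files."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def build_index(documents):
    """Build an inverted index mapping token -> [(doc_id, position), ...].

    Recording positions (not just counts) is what later makes proximity
    searches between lexicon terms possible."""
    index = defaultdict(list)
    for doc_id, text in documents.items():
        for pos, token in enumerate(normalize(text).split()):
            index[token].append((doc_id, pos))
    return index

docs = {
    "hearing_1": "The committee discussed vehicle safety standards.",
    "hearing_2": "Driver error causes most vehicle crashes.",
}
index = build_index(docs)
print(index["vehicle"])  # [('hearing_1', 3), ('hearing_2', 4)]
```

Keeping every token (including so-called stop words) in the index, as discussed below, preserves
the context needed for proximity searching.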
The Congressional corpora are organized into Congressional sessions. These include all the
public documents and transcripts for the 107th through 114th Congress.
Table 1: Congressional Data

| Corpus         | Websites and Documents Used for Corpus                                                           | Files  | Words       |
| 107th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 107th Congress, 2001-2002 | 10,458 | 778,267,473 |
| 108th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 108th Congress, 2003-2004 | 11,701 | 474,711,027 |
| 109th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 109th Congress, 2005-2006 | 12,754 | 328,706,743 |
| 110th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 110th Congress, 2007-2008 | 14,979 | 327,804,803 |
| 111th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 111th Congress, 2009-2010 | 15,630 | 341,006,347 |
| 112th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 112th Congress, 2011-2012 | 13,510 | 305,037,514 |
| 113th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 113th Congress, 2013-2014 | 11,591 | 258,312,326 |
| 114th Congress | Congressional Record, Hearing Transcripts, Documents, and Reports from 114th Congress, 2015-2016 | 5,483  | 167,500,507 |
| Average        |                                                                                                  | 12,019 | 312,412,215 |
Table 1 shows a description of the content, the number of files, and the number of words for
each of the Congressional session corpora. Variations in the number of files and words can be
due to several factors. The first is that publication of documents and reports from Congress can
be delayed. The second is variation in the workload, number of hearings, and publications of
each Congressional session.
In addition to the Congressional corpora, corpora for the U.S. federal public administration
were used. The Public Administration corpora are organized by presidential term (see Table 2).
The Federal Public Administration corpora consist of blogs and speech transcripts from each
federal bureau, agency, and department, including all independent government agencies (e.g.,
CIA, NASA).
Table 2: Public Administration, Engineers, and Silicon Valley Data

| Corpus                                | Websites and Documents Used for Corpus                                                                                                                                  | Words       |
| Public Administration, Bush 43 Term 1 | Blogs and speeches from all departments, agencies, and bureaus of the U.S. federal government, 2001-2004                                                                | 761,923,475 |
| Public Administration, Bush 43 Term 2 | Blogs and speeches from all departments, agencies, and bureaus of the U.S. federal government, 2005-2008                                                                | 755,254,671 |
| Public Administration, Obama Term 1   | Blogs and speeches from all departments, agencies, and bureaus of the U.S. federal government, 2009-2012                                                                | 979,189,724 |
| Public Administration, Obama Term 2   | Blogs and speeches from all departments, agencies, and bureaus of the U.S. federal government, 2013-2015                                                                | 705,398,144 |
| Engineers                             | American Society of Mechanical Engineers (ASME), Institute of Electrical and Electronics Engineers (IEEE) Spectrum, Engineering.com, Society of Automotive Engineers (SAE) | 14,097,589  |
| Silicon Valley                        | NY Times Bits Blog, NY Times Open Blog, Sanjose Blogs, SFgate, Siliconbeat, Silicon Valley Insider, SVDS Blog, Tech Crunch, Valleywag, Wired, ZDNet                     | 56,746,343  |
To understand the conversations and concerns of technical developers, two separate corpora
were gathered. The first is an Engineers corpus, which consists of all blog and article postings
from four engineers' websites. The ASME, IEEE, and SAE websites were specifically chosen
because these are the professional associations that most dominate automotive engineering. The
SAE develops many of the professional standards used in the automotive industry.
Engineering.com is a website that is targeted towards engineers and designers and is dedicated
to discussing technical issues (Engineering.com, 2016). A second, Silicon Valley corpus was also
developed that incorporated the discussions of technical and engineering issues pertinent to
technological development. This corpus used blogs and articles from websites specifically
targeting high-tech developers and engineers. For these corpora, the blogs and articles from
each site were gathered going back to 2010, or as far back as the website had published its
content.
Methodology
Natural language-based corpora are extensive, context-rich sources of information. Text data
analytics is one method of analyzing the content of a collection of natural language corpora.
Text data analytics brings together both qualitative and quantitative methods, in which the
complexity of text data is accounted for through careful use of search terms and concept
(multi-term) searching.

Text is classified as unstructured data, although language is not so much unstructured as very
complexly structured (Barry, 2008). Text data is complex for several reasons
(Schuelke-Leech & Barry, 2016). The first reason is that text exists in many different types of
files (text, pdf, word, etc.). The second reason is that language, particularly concepts in the
technological realm, is constantly changing and evolving. For example, in the space of the
composition of this article, the NHTSA introduced a new way to discuss the technology of a
Highly Automated Vehicle or HAV (which was the term of art in the 2016 document), the
“Automated Driving System” or ADS (NHTSA 2017). The third reason is that language is
infinitely variable and innovative, meaning that the concept under investigation can be discussed
in a wide variety of ways and changes with time and context. For example, the word “crash”
has a very different meaning in a financial corpus versus a transportation corpus. Even in a
financial corpus, crash had a different meaning before and after the dramatic stock market
decline in October 1929. Thus, any research in a language-based corpus must investigate and
determine how the concept of interest is used in context, not merely how frequently common
key words and search terms are used.
To investigate concepts in a language-based corpus using linguistic-based text data analytics, as
is done in this paper, researchers must create lexicons. A lexicon is a group of search words
logically organized to investigate a concept. Creating a productive lexicon is a time-consuming
process. Our lexicon was built iteratively and inductively, starting with the professional
knowledge of the authors in automotive engineering, applied ethics, and public policy. This was
then augmented through careful review and catalog of the concepts in the literature used to
construct this paper. Next, we iterated the terms, individually and jointly as concepts, through
corpora to ensure robustness and accuracy, since the same word can have a very different
meaning in a different context or concept search. For instance, the term “green truck” means
something different in a sustainability context than in a military context. The search
terms were validated during the iterative investigation to ensure that the returns were related
to the concept under investigation.
Explaining even the most straightforward concepts concretely requires a vast collection of
lexicons and a range of syntactic structures. For instance, conversations about innovation do
not generally include the terms “invention” and “scientific discovery,” yet these are undeniably
concepts associated with innovation. Therefore, analyzing text data must either account for the
complexity of language or else simplify the data in order to ease search, retrieval, and
measurement (Schuelke-Leech & Barry, 2016). While many text analysis programs remove noise
words (also called “stop words”) to simplify the data, making basic searching, measuring, and
overall computer processing easier, linguistics-based text data analytics leaves all words in the
corpora, which allows investigation of key words within a richer context.
Table 3 lists the search terms for each of the concepts. As can be seen in Table 3, we used a
capacious set of terms derived from our examination of the literature associated with the
discussion of vehicle safety and responsibility (see works cited). While individual components
of the lexicon (words or concepts) can be used, proximity searches between concepts are also
used. Proximity searches can be broader or narrower depending on how far apart the search
terms are allowed to be (Schuelke-Leech & Barry, forthcoming). A “/” with a number after it
indicates the number of words allowed between the two search terms. A “*” indicates a
wildcard, in which any ending is allowed, so “driv*” includes the words drive, drives, driver,
drivers, driving, driven, and so on.
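The wildcard and proximity notation can be made concrete with a small sketch. We assume Python
here, and the helper names are our own illustration rather than the search engine actually used
in this study.

```python
import re

def term_to_regex(term):
    """Translate one lexicon term: a trailing '*' is a wildcard,
    so 'driv*' matches drive, driver, driving, and so on."""
    stem = re.escape(term.rstrip("*"))
    return re.compile(stem + (r"\w*" if term.endswith("*") else "") + r"$")

def proximity_match(text, term_a, term_b, window):
    """Implement 'term_a /window term_b': True when matches of the two
    terms occur within `window` words of each other."""
    tokens = re.findall(r"\w+", text.lower())
    pos_a = [i for i, t in enumerate(tokens) if term_to_regex(term_a).match(t)]
    pos_b = [i for i, t in enumerate(tokens) if term_to_regex(term_b).match(t)]
    return any(abs(i - j) <= window for i in pos_a for j in pos_b)

# "driver* /5 responsib*" from the Driver Responsibility lexicon:
print(proximity_match("The driver bears full responsibility here.",
                      "driver*", "responsib*", 5))  # True
```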
A raw count of the occurrences of the search terms is not necessarily useful when comparisons
are being made. For instance, 1,000 occurrences in a corpus of 100,000 words means that one in
every hundred words is relevant to the search; the same 1,000 returns in a corpus of 10,000,000
words represent a far smaller share. Reporting raw occurrences can therefore be misleading in
comparisons. Instead, a normalized count of occurrences per million tokens is used. This
normalized count provides a means of comparing the density of the conversation across corpora.
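The normalization is a simple rescaling; as a sketch (the function name is our own):

```python
def per_million(occurrences, corpus_tokens):
    """Density of a conversation: occurrences per million tokens."""
    return occurrences / corpus_tokens * 1_000_000

# The same raw count of 1,000 hits is a dense conversation in a small
# corpus and a sparse one in a large corpus:
print(per_million(1_000, 100_000))     # 10000.0 per million tokens
print(per_million(1_000, 10_000_000))  # 100.0 per million tokens
```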
Our returns are further constrained by examining how our terms pertinent to HAVs and
responsibility figured in the context of the overall discussion of cars, transportation, and
vehicles. To show this, we created a baseline estimation of how often cars, vehicles, and
transportation are discussed in the corpora. This baseline was estimated by using a lexicon for
“Car”. The purpose of this search was to see whether cars and transportation are a relatively
big or small conversation within each corpus. If cars and transportation were a small
percentage of the corpus, then it stands to reason that if a large amount of that small
conversation were about HAVs, then this would be a significant source for assessing the HAV
narratives.
We next looked at the topic of car safety. The goal of the search was to determine how much
safety was a topic of conversation relative to cars, vehicles, and transportation. That is, was
this a concept that the different domain communities actively addressed? We examined safety
by doing a proximity search of the words used in the “car” search against the words identified
with safety. The size of the proximity window can be relatively small (3-5 words) or large
(50-100 words). In a close proximity search, the words are near one another, and therefore you
have few false positives (returns that do not cover the concept being investigated, i.e., Type I
errors). However, you can also miss many relevant occurrences (false negatives, i.e., Type II
errors). Thus, defining the proximity range is always a trade-off between Type I and Type II
errors, with the goal of minimizing the overall error. The larger the defined distance between
the words, the greater the number of returns; however, these will also include returns that do
not directly relate to the concept being studied. Based on our preliminary review of this topic,
we began with a proximity of 10 words.
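The window-size trade-off can be demonstrated with a toy example (our own code and sample
sentences, not the authors' tooling): widening the window from 3 to 10 words picks up a
topically unrelated co-occurrence as a hit.

```python
import re

def count_cooccurrences(sentences, stem_a, stem_b, window):
    """Count sentences in which words starting with the two stems
    occur within `window` words of each other."""
    hits = 0
    for sentence in sentences:
        tokens = re.findall(r"\w+", sentence.lower())
        pos_a = [i for i, t in enumerate(tokens) if t.startswith(stem_a)]
        pos_b = [i for i, t in enumerate(tokens) if t.startswith(stem_b)]
        if any(abs(i - j) <= window for i in pos_a for j in pos_b):
            hits += 1
    return hits

sentences = [
    "New vehicle standards improve safety.",  # genuinely about vehicle safety
    "The vehicle convoy passed as officials debated workplace safety rules.",  # unrelated
]
print(count_cooccurrences(sentences, "vehicle", "safety", 3))   # 1: high precision
print(count_cooccurrences(sentences, "vehicle", "safety", 10))  # 2: adds a false positive
```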
Next, we looked at the concept of driver responsibility. This search was not meant to imply that
the federal government currently has responsibility for licensing human drivers; this is clearly a
state responsibility. As explained, responsibility is shifting from the human driver to a
technological system, comprised of algorithms, sensors, digital and wireless communications,
and coordinated automated interactions and responses. We were curious to ascertain whether
there is a difference in the foundational conversation about the consequences of a shift from a
human-centric system, where control and operation are ascribed to a human driver, to a
techno-centric one, in which responsibility is inscribed into the machine. We used a wide range
of concepts for responsibility, as detailed in Table 3.
Next, we examined the range of technologies that might enable or disable drivers’ responsible
piloting of a vehicle, whether conventional or HAV. In the lexicon for “Car or Vehicle
Technologies”, we included the features described in the SAE and related NHTSA documents for
all levels of automation in vehicles. Further, we reviewed closely a sample of pieces from the
engineers and technologists’ corpora to determine relevant terms for technological
enhancement of conventional vehicles and for projections of HAV capabilities. This lexicon is
larger than the others because of the specificity of required terms but also because of the
variations in the ways in which these technologies are described by various manufacturers and
writers.
We then looked at whether car technology failure was a topic of consideration in the corpora.
Highly automated vehicles are often promoted as a means for overcoming human faults and
errors. As explained in a statement by Mitch Bainwol, President and CEO of Alliance of
Automobile Manufacturers, on October 21, 2015, to the Hearing before the Subcommittee on
Commerce, Manufacturing, and Trade of the Committee on Energy and Commerce, “Examining
Ways to Improve Vehicle and Roadway Safety,” HAVs are touted as being able to solve the
problem of crashes caused by human error:
Crash-avoidance and connected-vehicle technologies offer us the opportunity to address
the 94 percent, if not more, of all accidents that NHTSA attributes to driver error. That is
right, addressing driver error is absolutely crucial. You know the statistics. More than
32,000 people died in car crashes last year, far too many. That number is 25 percent
below what it was a decade ago, but it is still far too many. NHTSA has said that
connected vehicles have the potential to mitigate as much as 80 percent of non-
impaired crashes. And just last week, the Boston Consulting Group released a study that
Ann Wilson will talk about showing that advanced driver-assist systems could prevent
almost 10,000 fatalities and 30 percent of all crashes occurring annually in the U.S.
[Statement of Mitch Bainwol (Examining Ways to Improve Vehicle and Roadway
Safety, October 21, 2015, p. 19)]
We assumed that a narrative of technological superiority versus a narrative of technological
failure would show some of the differences between technologists and policy experts.
Lastly, we examined how highly automated vehicles are discussed in the various corpora.
Table 3: Concept Search Terms for Highly Automated Vehicles Corpora Analysis

Car: automobile*; automotive*; bus /10 grey hound; bus /10 greyhound; bus /10 transit; buses
/10 transit; bus /5 transportation; buses /5 transportation; car; cars; grey hound bus;
greyhound bus; motor coach; motorcycle*; motor /5 vehicle; motor /5 vehicles; passenger* /5
vehicle*; sport utility; suv; suvs; tractor trailer; transit; transport*; truck; trucks; vehicle
/3 motor; vehicles /3 motor; vehicle /3 passenger*; vehicles /3 passenger*

Safety: crashworth*; decreas* /5 accident*; decreas* /5 crash*; decreas* /5 collision; decreas*
/5 death*; decreas* /5 fatalit*; decreas* /5 injur*; decreas* /5 risk*; diminish* /5 accident*;
diminish* /5 crash*; diminish* /5 collision; diminish* /5 death*; diminish* /5 fatalit*;
diminish* /5 injur*; diminish* /5 risk*; fmvss; lessen /5 accident*; lessen /5 crash*; lessen /5
collision; lessen /5 death*; lessen /5 fatalit*; lessen /5 injur*; lessen /5 risk*; reduc* /5
accident*; reduc* /5 crash*; reduc* /5 collision; reduc* /5 death*; reduc* /5 fatalit*; reduc*
/5 injur*; reduc* /5 risk*; safe*

Driver Responsibility: car /3 responsib*; car /5 negligence; cars /3 responsib*; cars /5
negligence; distracted driver*; distracted driving; drive /5 negligence; drive /5 responsibility;
drive /5 responsible; drive /5 text*; driv* /5 human error*; driv* /5 human fail*; driving /5
negligence; driving /5 responsibility; driving /5 responsible; driving /5 text*; driver* /5
negligence; driver* /5 responsibility; driver* /5 responsible; driver* /5 text*; driving /3
drunk; driving /3 impaired; driving under the influence; drunk drive*; drive* /5 at fault;
driving /5 at fault; human error /5 driv*; human failure /5 driv*; negligence /5 drive*; owner
operator /3 responsibility; reasonable driver; responsib* /3 automo*; responsib* /3 car;
responsib* /3 driv*; responsib* /3 truck*; responsible driver; responsibility /3 driver;
reasonable driving; responsible driving; responsibility /3 driving; reckless driv*; text* /5
driv*; truck* /5 responsib*; truck* /5 negligence; vehicle* /3 responsibility; vehicle* /5
negligence

Car or Vehicle Technologies: active head restraint*; adaptive cruise control*; adaptive
headlights; airbag; air bag; anti lock braking systems; antilock; auto dimming rearview mirrors;
automatic braking; alert vehicle technolog*; assist system*; automo* sensor*; automo* night
vision; automo* /3 tcas; automatic crash /2 brak*; automatic brak* /5 vehicl*; back* /3 camera;
backup sensor*; back up sensor*; blind spot detect*; blind spot monitor*; blind spot sensor*;
brake assist*; brake assist* techno*; brake assist* system*; car /3 assist system*; car /3
backup camera*; car /3 back up camera*; car /3 night vision; car sensor*; car /3 tcas; car /3
voice recognition; cars /3 assist system*; cars /3 backup camera*; cars /3 back up camera*; cars
/3 night vision; collision avoidance system*; collision avoidance techn*; collision
notification*; collision mitigation; collision system*; collision warning; crash avoid* /2
techno*; crash avoid* /2 system*; driver alert system*; driver alert techno*; driver assist* /2
system*; driver assist* /2 techno*; driver technolog* interface*; driver vehicle interface*;
drowsy driver system*; edr techno* /3 vehicl*; electronic stability control; emergency response
system*; emergency response technolog*; energy management; fatigue /2 technolog*; forward
collision warning; forward collision autobrake; headlight* /3 automatic; intel* vehicle;
intellidrive*; lane departure warning*; lane departure prevent*; mitigation advanced techno*;
mitigation /2 technolog*; night vision /2 car; night vision /2 automo*; night vision /2 truck;
obstacle avoid*; obstacle detection; obstacle detect*; obstacle sensor*; pedestrian detect*;
pretensioner; proximity warning system*; reverse sensor; reverse camera; rollover mitigation;
run flat tire*; smart vehicle highway /2 system*; stability control*; tcas; telematic*; traction
control*; truck /3 assist system*; truck /3 backup camera*; truck /3 back up camera*; truck /3
night vision; truck sensor*; truck /3 tcas; truck /3 voice recognition; trucks /3 assist
system*; trucks /3 backup camera*; trucks /3 back up camera*; trucks /3 night vision; throttle
position sensor; vehicle automation; vehicle information system*; vehicle stability system;
vehicle /2 infrastructure comm*; vehicle to vehicle /2 com*; vehicle to vehicle /2 system*;
voice activate*; voice recognition; v2v /3 com*; v2v /3 system*; v2v /3 techno*

Technology Failure: design flaw; design /3 defect; malfunction; manufactur* /3 negligence;
manufactur* /3 negligent; negligence /3 manufactur*; negligent /3 manufactur*; product defect;
product liability; technical /3 failure; technology /3 failure; technology /3 defect

Driverless cars: automated car; automated cars; automated vehicle*; autonomous car*; autonomous
vehicle*; car /3 automated; car /5 autonomous; cars /5 autonomous; car /5 driverless; cars /5
driverless; car /5 self-driving; cars /5 self-driving; driverless car; driverless cars;
driverless vehicl*; google car; google cars; HAV; intelligent vehicl*; self-driving car;
self-driving cars; self-driving vehicle*; smart car; smart cars; smart vehicle*; unmanned car;
unmanned cars; vehicle* /3 automate*; vehicle* /5 autonomous; vehicle* /5 driverless; vehicle*
/5 self-driving
Findings
Are policymakers sleepwalking into a situation where technologists' push to develop a highly
disruptive technology (HAVs) will alter our transportation infrastructure, policy, and ethical
concepts? Our analysis focused on measuring the difference in the discussions about
responsibility for highly automated vehicle use within the U.S. Congress, the federal
bureaucracy, and among engineers and technological entrepreneurs. We proposed that a greater
level of attention to HAVs, and to responsibility for HAVs, in discussions by engineers and
technologists than in discussions by legislators and bureaucrats would be a signal of
“technological somnambulism”.
As explained in the methods section, we present our results as “occurrences per million words,”
which is a measure of the density of a conversation within a given corpus. It provides a
normalized measure that allows for comparisons among corpora. The relationship between
conversations was also explored by looking into specific examples.
Congress
From the 107th through the 114th Congress, an average of 12,019 documents and an average of
312,412,215 words per session were reviewed. The results of the analyses are shown in Table 4.
In each of the Congressional sessions between the 107th (2001-2002) and the 114th (2015-2016),
there is a robust discussion of cars, vehicles, and transportation, with the words in the car
search occurring an average of 1,080.4 times per million tokens. There was substantial variation
across sessions, with a low of 566.4 occurrences per million tokens in the 113th Congressional
session and a high of 1,567.6 occurrences per million tokens in the 109th Congressional session.
These results show that the baseline discussion (see above) is robust. Much of this conversation
is about the transportation industry and issues within the car industry. For instance, in the
110th Congressional session, which occurred in 2007-2008 at the onset of the recession, there
was a substantial amount of discussion in Congress about bailing out automakers and protecting
jobs within the auto industry. This conversation has little to do with HAVs; the specific areas
of interest are therefore examined through separate searches.
Table 4: Results from Congressional Corpora (occurrences per million tokens)

| Corpus         | Cars    | Car Safety | Driver Responsibility | Car Technologies | Car Technology Failure | Driverless Vehicles |
| 107th Congress | 879.2   | 57.5       | 1.6                   | 5.1              | 0.0                    | 0.3                 |
| 108th Congress | 1,222.8 | 56.7       | 1.7                   | 4.0              | 0.0                    | 0.2                 |
| 109th Congress | 1,567.6 | 57.2       | 1.5                   | 3.8              | 0.0                    | 0.3                 |
| 110th Congress | 1,354.9 | 39.5       | 0.9                   | 2.2              | 0.0                    | 0.2                 |
| 111th Congress | 1,199.8 | 44.5       | 4.0                   | 2.1              | 0.0                    | 0.2                 |
| 112th Congress | 1,072.5 | 47.9       | 3.3                   | 1.1              | 0.0                    | 0.2                 |
| 113th Congress | 566.4   | 31.9       | 1.8                   | 2.7              | 0.0                    | 1.4                 |
| 114th Congress | 780.1   | 63.7       | 3.2                   | 8.1              | 0.0                    | 6.3                 |
| Average        | 1,080.4 | 49.9       | 2.3                   | 3.6              | 0.0                    | 1.1                 |
Issues related to vehicle safety are discussed an average of 49.9 times per million words, or
about 0.005% of the overall Congressional conversation. Yet vehicle safety is a topic of
discussion for approximately 5% of the occurrences in which cars are discussed. With particular
reference to driver responsibility, this concept is discussed an average of 2.3 times per
million words in the Congressional sessions, while car technologies are discussed an average of
3.6 times per million words. Looking at the results for driver responsibility and car
technologies, these are much smaller discussions, both overall and with respect to the
discussion of cars themselves. Figure 2 shows variations in the conversations around car safety,
driver responsibility, car technologies, and the failure of car technologies.
It is noteworthy that search terms for car technology failures were mentioned only seven times
for all the Congressional sessions. That is, technological failure in vehicles is simply not in the
purview of the Congressional conversation.
Figure 2: Car Discussions in Congress
As one might expect, there is an increase in the discussions of HAVs over time, particularly
since the 112th (2011-2012) Congressional session. Google launched its self-driving vehicle
project in 2012 (Davies, 2016). However, the lack of discussion of car technology failures,
whether with respect to conventional or highly automated vehicles, indicates a low level of
attention to this in the legislators' narratives. Concern about failure is simply overwhelmed by
the great optimism about HAVs.
Take, for example, the comments by Ranking Member Ms. Eddie Bernice Johnson,
Subcommittee on Research and Technology, Committee on Science, Space, and Technology,
House of Representatives, June 12, 2015, in the hearing on “U.S. Surface Transportation:
Technology Driving the Future,”:
We are living in a time that is truly transformational for all modes of transportation.
When I think about the potential benefits of connected vehicle technology, I don’t think
it’s too lofty to compare its potential impact to the impact of the Eisenhower Interstate
Highway System 60 years ago on connecting goods and people across the nation. As our
population grows, so too is access to public transportation and ridesharing options.
From highways, to public transportation, to railroads, research and development of
innovative technologies and policies can improve the safe and efficient movement of
people and freight. It is equally important to implement policies that support long-term,
advanced research that will lead to revolutionary improvements to our transportation
systems.
[(U.S. Surface Transportation: Technology Driving the Future, Hearing Serial
Number: 114-23, 2015, p. 19)]
Congressional discussions of HAVs seem to be quite limited. This is of concern as it points to
some evidence for a sleep-walking approach to HAVs, whether driver responsibility policy or
other policy areas.
Public Administration
While Congress has authority over the laws that federal agencies implement, we reviewed the
Public Administration corpora from the perspective of their role in the executive branch. We
did so by organizing the corpora by presidential administration. The results from the Public
Administration corpora are shown in Table 5, with a visual representation in Figure 3.
Table 5: Results from Public Administration (PA) (occurrences per million tokens)

| Corpus       | Cars  | Car Safety | Driver Responsibility | Car Technologies | Car Technology Failure | Driverless Vehicles |
| PA 2001-2004 | 552.3 | 85.7       | 1.2                   | 12.3             | 0.2                    | 0.2                 |
| PA 2005-2008 | 422.1 | 87.5       | 1.7                   | 14.9             | 0.5                    | 0.2                 |
| PA 2009-2012 | 280.5 | 72.6       | 3.7                   | 11.1             | 0.4                    | 0.3                 |
| PA 2013-2015 | 403.6 | 74.2       | 2.6                   | 9.5              | 0.3                    | 0.5                 |
| Average      | 414.6 | 80.0       | 2.3                   | 12.3             | 0.4                    | 0.3                 |
The baseline discussions in the Public Administration corpora are not high relative to the
overall corpus, or with respect to the Congressional corpora. However, the specific
conversations around safety, responsibility, and technology are greater, with two notable
exceptions. The first is that the discussions in which drivers and responsibility occur close to
one another are proportionally the same as in Congress, at 2.3 occurrences per million words.
The second is that discussions of driverless vehicles are lower overall in the public
administration than in Congress. However, this changed in 2016 with the promulgation of the
NHTSA documents. As will be discussed more fully below, without this specific document and its
related discussions there was little overall development of an administrative narrative on HAVs.
Figure 3: Car Discussions in Public Administration
The discussions in the agencies of the U.S. public administration regarding car safety were,
however, higher than those in Congress. Safety was discussed far more often than car
technologies: car safety discussions were roughly six and a half times as dense as those of car
technologies. In turn, car technology discussions were about five times as dense as those of
driver responsibility, which in turn were roughly six to eight times the density of discussions
of car technology failures or HAVs.
There are generally two types of failures that result in vehicle crashes: human failure and
technological failure. Human failures are common causes of crashes. In this case, driver
responsibility includes discussions of distracted driving, driver negligence, human error,
reckless driving, and driving under the influence. Car technology failures include discussions
of design flaws, duty of care, company liability, technical failures, and warranty issues. In
these corpora, driver responsibility and technological failure show interesting and significant
variations. The first, and perhaps most important, result is the relative infrequency of
discussions about technology failure. This implies that there is little consideration of the
consequences, ethical implications, or potential flaws of these technologies. Instead, failure
is treated as a feature of uniquely human activity.
The NHTSA discussions of HAVs in 2015 and 2016 were part of a flurry of activity in those years related to autonomous systems technology. During the period of the 113th Congress (3 January 2013 until 3 January 2015), the FAA addressed the problem of the Section 333 exemptions,
mentioned in the opening paragraphs of this paper, which were granted to the operators of
Unmanned Aerial Vehicles to conduct flights within the National Airspace System for various
commercial or research purposes. NHTSA and the FAA were not alone in their discussion of
autonomous systems and autonomous vehicles: somewhat unexpectedly, the Bureau of Ocean
Energy Management was a key administrative player. The BOEM reviewed 54 requests for the
use of autonomous vehicles and autonomous underwater vehicles for infrastructure and drilling
site explorations (e.g., BOEM-2013-0031-0001). Military autonomous systems also motivated
action by the Bureau of Industry and Security to restrict export of autonomous aerial guidance
for helicopters (BIS-2014-0033-0001).
In September 2016 the NHTSA, alongside the Transportation Research Board, defined the
research agenda on driverless cars and issued an update to the “Preliminary Statement of
Policy Concerning Automated Vehicles” issued in 2013 (NHTSA, 2016a). The unusual step of
setting out a research agenda (8 “research areas”) by the NHTSA brought to the fore some of
the concerns surrounding driver responsibility. Specifically, in research areas 1, 2, 3, and 4,
issues of driver attentiveness, driver abuse of autonomous systems, and driver training are
posed as significant topics for further research and regulation. These topics were taken up in
the NHTSA public meeting on automated vehicle operational guidance (8 April 2016), wherein
“determination of operational boundaries of the system” (92) was a responsibility directly
assigned to the driver. Government responsibilities were described as addressing security and
resilience of transportation infrastructure (138) and providing guidance on regulatory and
policy issues (194), manufacturers were charged with responsibility for reporting and
notification concerning safety of HAVs (188), and engineers were responsible for ethical and
safety concerns (33, 40). Companies manufacturing the vehicle are expected to be held
accountable for any problems with the vehicle’s design, manufacturing, or use (52).
Also in September 2016, the U.S. Department of Transportation (US DOT) released Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety (Department of Transportation, 2016). The DOT uses the SAE Levels of Automation, developed by industry experts, as the regulatory standard. At this point, the DOT is essentially putting the responsibility back on the Original Equipment Manufacturers (OEMs) to ensure the safety of their HAVs, just as they do with conventional vehicles. The DOT outlines responsibility for compliance this way:
Under current law, manufacturers bear the responsibility to self-certify that all of the
vehicles they manufacture for use on public roadways comply with all applicable Federal
Motor Vehicle Safety Standards (FMVSS). Therefore, if a vehicle is compliant within the
existing FMVSS regulatory framework and maintains a conventional vehicle design, there
is currently no specific federal legal barrier to an HAV being offered for sale. However,
manufacturers and other entities designing new automated vehicle systems are subject
to NHTSA’s defects, recall and enforcement authority. DOT anticipates that
manufacturers and other entities planning to test and deploy HAVs will use this
Guidance, industry standards and best practices to ensure that their systems will be
reasonably safe under real-world conditions.
[(Department of Transportation 2016, p. 11)]
For all HAV systems, the manufacturer or other entity should address the cross-cutting
items as a vehicle or equipment is designed and developed to ensure that the vehicle has
data recording and sharing capabilities; that it has applied appropriate functional safety
and cybersecurity best practices; that HMI design best practices have been followed;
that appropriate crashworthiness/occupant protection has been designed into the
vehicle; and that consumer education and training have been addressed
[(Department of Transportation 2016, p. 13)]
Manufacturers and other entities should follow a robust design and validation process
based on a systems-engineering approach with the goal of designing HAV systems free
of unreasonable safety risks. This process should encompass designing the intended
functions such that the vehicle will be placed in a safe state even when there are
electrical, electronic, or mechanical malfunctions or software errors.
[(Department of Transportation 2016, p. 20)]
These policy documents are important components of the federal government’s response to
the development and deployment of HAVs. What is evident from the US DOT document is that
the locus of responsibility for safety is with the OEMs, and by extension, with the engineers and
developers.
Clearly, it would be shortsighted to conclude from the relatively small number of discussions that public administrators are not engaged in any discussions of HAVs. The conversations that did occur were quite dense in their consideration of both technology and ethics. However, the conversations between public administrators and engineers were not technically reflective and were generally positive about the incorporation of technology, without substantial discussion of the consequences of potential technology failure. Any oversight of the larger systemic implications or societal disruptions of HAVs seems poised to be reactive in nature. That is, it will come through the courts and litigation, rather than through a proactive assessment of potential risks and benefits (see McCubbins & Schwartz, 1984). A final example
from NHTSA shows this well:
NHTSA's Response: After careful consideration, NHTSA believes that it is essential that test participants be instructed that the driver's primary responsibility is to drive safely at all times and therefore is keeping the test participant instructions as they were proposed in the Initial Notice.
[Federal Register April 26, 2013. (NHTSA, 2013)]
Engineers and Silicon Valley
Engineers’ discussions focused on the development and commercialization of autonomous
systems technologies. Two corpora were built for this analysis. The first corpus was built using
published conversations between engineers involved in the development of mechanical,
electrical, and automotive systems. The second is of innovators and technological developers
in Silicon Valley. The websites used for each of the corpora are listed in Table 2.
Engineers discuss HAVs 519.2 times per million words, whereas they are discussed 112.2 times
per million words in the Silicon Valley corpus (see Table 6 and Figure 4). HAVs are discussed
significantly more often than any of the topics of car safety, driver responsibility, car
technologies, or car technology failures. Notably, car technology failure is not a topic of
discussion.
Table 6: Results from Engineers and Silicon Valley (occurrences per million tokens)

Corpus         | Cars   | Car Safety | Driver Responsibility | Car Technologies | Car Technology Failure | Driverless Vehicles
Engineers      | 2448.7 | 95.4       | 164.2                 | 9.0              | 0.0                    | 519.2
Silicon Valley | 582.0  | 21.6       | 23.9                  | 7.0              | 0.0                    | 112.2
Figure 4: Car Discussions in Engineers and Silicon Valley
The algorithms that engineers and entrepreneurs develop are essentially the “decisions” and
“choices” that the technology will make. For HAVs, developers will need to program the actions
and reactions that vehicles will take. An important question becomes whether these actions
are based on minimizing damage to all the vehicles and humans involved, or only those of one
specific vehicle. In systems, one of the difficulties is that individuals can focus on optimizing
their local system, rather than optimizing the overall system. In the past, consumers have had
adverse reactions to bureaucracies making decisions that have life-and-death implications. It is
unclear what their reactions will be when technologies, machines, and intelligent systems are
making these decisions.
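The tension between optimizing for one vehicle and optimizing for the overall system can be made concrete with a toy harm-minimizing chooser. The candidate actions and expected-harm values below are invented purely for illustration; nothing here reflects how any real HAV is, or should be, programmed.

```python
# Toy illustration of local versus system-wide optimization in a crash
# scenario. All actions and expected-harm values are invented.
ACTIONS = {
    "brake_straight": {"ego_harm": 5.0, "others_harm": 1.0},
    "swerve_left":    {"ego_harm": 1.0, "others_harm": 8.0},
    "swerve_right":   {"ego_harm": 2.0, "others_harm": 3.0},
}

def choose_action(actions, objective="total"):
    """Pick the action minimizing either the ego vehicle's expected harm
    alone ('local') or the harm summed over all parties ('total')."""
    if objective == "local":
        score = lambda name: actions[name]["ego_harm"]
    else:
        score = lambda name: (actions[name]["ego_harm"]
                              + actions[name]["others_harm"])
    return min(actions, key=score)

local_choice = choose_action(ACTIONS, objective="local")
total_choice = choose_action(ACTIONS, objective="total")
```

With these numbers the two objectives disagree: the locally optimal action shifts harm onto others, while the system-optimal action accepts some additional harm to the ego vehicle's occupants. It is precisely this kind of divergence that makes the programming of HAV "choices" a matter of public concern rather than a purely technical detail.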
Within most of the public conversations concerning HAVs, these technologies are offered as a
solution to existing transportation problems. These conversations appear to reflect the
proposals of engineers: unlike the Congressional and Public Administration corpora, there is
significant overlap among the conversations about car safety, driver responsibility, and HAVs. For
instance:
Driverless tractors and autonomous systems free farmers from the most repetitive and
boring work and allows more acreage to be worked for longer time periods, notes
Brown. Rather than having people work a field over a 12-hour time period or longer, the
task can now be automated. It also eliminates human error in being able to drive a
straight line for planting or harvesting, or in putting down only enough fertilizer.
[ASME Blog (Kosowatz, 2012)]
Jay Joseph, Senior Manager, Product Regulatory Office, Honda, explained during the
session that he views V2X as the next, natural progression from today’s cars equipped
with sensors connected to brakes, to autonomous driving. V2X connectivity has the
potential to prevent even the possibility of crash, he said. Joseph also pointed out that
the next generation of drivers may have a different view of distracted driving. “For the
next generation of drivers, driving is the distraction,” he said.
[SAE article (SAE, 2012)]
We’ve already solved the problem of unreliable human beings with self-driving cars.
Self-driving cars are real. All of the biggest tech companies are working on them, and it is
only a matter of time until we see Google-like cars everywhere. While self-driving cars
are neat, they are not nearly as fascinating as their potential successor: self-flying cars.
[Techcrunch article (Aube, 2016)]
This rosy view of highly automated vehicles and technologies may prove problematic.
Technologies fail, particularly as a system becomes more complex (Woods et al., 2010). Proper
oversight is essential in ensuring that the development of technologies will serve the public
good, not just the commercial market.
Narratives
Our analysis focused on deductively identifying patterns in a large body of text, not on
identifying a consistent narrative through inductive, close reading. What we identified are two persistent narratives that are significant for an assessment of “technological somnambulism” on the part of policymakers. The first is that there is less concern about the possibility of technological failure among policy makers (legislators and administrators) than among technologists. While the overall level of concern about vehicles and transportation is
high, we did not find evidence of a concern about the failures of HAVs. Considerations of fault
and failure are essential to discussions of responsibility: responsibility, accountability and
related concepts do not figure in policy makers’ discussions in the absence of a failure or
scandal. Consequently, a failure to discuss failure indicates inattention to responsibility and
ethics for this technology. Second, the lack of consideration of technological failure is matched
with a rosy view of HAVs as neutral tools to be used for positive ends. Responsibility, whether
viewed from the perspective of making responsible technologies (inscribing) or using
technologies responsibly (ascribing), was not part of the discussion in a meaningful way. From
the perspective of policymaking, a narrative presupposition of neutrality in making and use
suggests a level of inattention that does seem to portend sleepwalking policymakers.
Implications and Conclusions
Are policymakers asleep at the wheel of Highly Automated Vehicles? What can we learn from
this analysis of the body of documents produced by Congress, the U.S. federal public
administration, and technological entrepreneurs? From an overall perspective of the
development and deployment of a disruptive technology, there is little regulatory oversight or
policy guidance ex-ante. While major public agencies like NHTSA and USDOT have issued
guidelines on HAVs, these guidelines are tentative and evolving quickly. For example, in the
2016 guidance, NHTSA suggested that:
The rapid development of emerging automation technologies means that partially and
fully automated vehicles are nearing the point at which widespread deployment is
feasible…Industry plays a key role in this process by both conducting such testing and in
providing data that establish the safety benefits of automation technologies that exceed
the current level of roadway safety. Within six months, NHTSA will propose best-practice
guidance to industry on establishing principles of safe operation for fully autonomous
vehicles (vehicles at Level 4 on the scale established in NHTSA’s 2013 preliminary policy
statement). [(NHTSA, 2016a)]
The 2016 “Federal Automated Vehicle Policy” was replaced in 2017 by the “Automated Driving
Systems 2.0” document. The 116-page 2016 document outlined extensive systems for
regulation of these technologies. These were, in large part, diluted in the 36-page 2017
guidance document, where a “nonregulatory approach to automated vehicle technology
safety” was outlined alongside “Best Practices for Highway Safety Officials” (2017, ii).
Importantly for this discussion, the Best Practices for Legislatures include avoiding placement of
“unnecessary burdens on competition and innovation” and conducting “review of traffic laws
and regulations that may serve as barriers to operation of ADS (Automated Driving Systems)”
(2017, 21). A review of the December 2016 “Federal Automated Vehicles Policy Public
Meeting” proceedings showed that concerns about responsibility often focused on division
between federal and state responsibility for traffic and licensure. However, an interesting
discussion of the HAV developers showed some sensitivity to the problem of inscribing and
ascribing ethics into HAVs:
“NHTSA urges HAV developers to consider ethical programming for HAVs; however not only can
HAV developer anticipate ethical programs in programming, but states can also take action to
mitigate these ethical crisis [sic]. For example, states should consider laws to prevent people
from purposely disrupting HAV systems. States could adopt a graduated system of laws
criminalizing the intentional disruption of HAV operation” (2016, 55). This approach to states
driving ethical discussions was reiterated in the 2.0 revisions:
“Ethical considerations are essential to automated driving technology development.
However, currently, there is no consensus around acceptable ethical decision-making
given the depth of the element is not yet understood nor are there metrics to evaluate
against. NHTSA plans to work with industry, States, and safety advocates to further
research the establishment of an industry developed framework for addressing ethical
considerations and fostering transparency in automated driving technology decision
making. The Agency will also collaborate with industry to develop standard test and
simulation scenarios that culminate in an ethical decision” (“Automated Driving
Systems”, 2017).
This discussion, which places industry and states into the driver’s seat, comes a full five years
after industry began developing and testing fully automated vehicles in earnest. Engineers and
entrepreneurs are going to continue to develop these technologies and regulators will react to
these developments. Engineers and developers are focused optimistically on technological
development and commercialization, often suffering from a blind spot when it comes to
considering the potential problems with these technologies. Congress and the U.S. federal public administration have also suffered from blind spots, both technical and ethical. While
they have been exploring the issue of HAVs in recent years, they are not yet actively overseeing
or regulating it. Instead, policy makers seem to be avoiding the conversations by passing the
responsibility for creating rigorous systems to lower and lower levels.
Interestingly, through our analysis, we also found that agencies that have heretofore had little involvement in the automotive industry and vehicle safety will now need to engage in this area. The FCC is fast becoming an important player in the future of HAVs, overlapping with the long-established roles of the NHTSA and the DOT. The FCC has yet to issue guidance on HAVs.
This is being left to NHTSA and the Department of Transportation. However, the FCC is
responsible for the bandwidth through which HAVs communicate (with each other, with
infrastructure, etc.). In July 2016, the FCC voted to open up new Broadband capabilities (called
5th Generation Wireless Broadband or 5G) with an eye to the future demands of mobile
communication technologies, including self-driving vehicles (Commission, 2016). Also in July
2016, the FCC issued a public notice seeking comments on a petition by public interest groups asking the FCC to prevent the deployment of HAVs by automakers until the FCC
could establish formal cybersecurity standards (Beyoud, 2016). In April 2017, General Motors
announced its intention to create a fleet of 500 self-driving vehicles via a filing with the FCC
(Harris, 2017).
With particular respect to the problem of responsibility with HAVs and the consequences of
technology failure, it is not clear that policy makers are having meaningful conversations about
this topic yet. Responsibility is discussed by stakeholders, but the matter of assigning
responsibility to particular stakeholders for particular actions, or software, is not clearly
defined. Likewise, assignment of responsibility appears to focus on the role of technology as
mediating assignment of responsibility within the existing framework of law. This suggests that
the matter of fully autonomous or highly automated vehicle guidance is not yet a meaningful
part of the policy conversation. While the openness of the horizon does allow engineers and
manufacturers to continue to innovate and for courts to work out the matters of analogy and
precedent unencumbered by biasing regulatory language, it also leaves the public’s concerns
unaddressed as these vehicles emerge to disrupt ordinary flows on roadways and to disrupt our perceptions about the ethical and legal responsibilities attached to driving.
To return to our question of whether there is a difference between technological and political discussions of responsibility in HAV technology development and use: we found that there is a difference in the degree of concern between technologists and those whose rules will shape the pathways for technological development. We found that the limited discussion by Congress and Public Administrators suggests that political patterns of passing the buck from federal to state levels persist in this evocative area of HAV regulation. We also found that technologists' discussions of HAVs revealed high aspirations for the technology, but not deep discussions of the implications of the making or use of the technology. These findings seem to support Winner's contention that technology policy is made only after a period of sleepwalking by policymakers and technologists.
There are at least two areas for significant further exploration on this topic that we believe are
important to address here, one substantive and one methodological. First, further exploration of how philosophers' discussions of the challenges of HAV responsibility are incorporated should be undertaken, to show whether either party is responsive to these concerns. One way to do this is to identify whether, where, and how often trolley problems (Lin,
2013) appear in these discussions. Second, a corpus level analysis of conversations in state
legislative bodies and state departments of transportation might reveal whether the attempts of the federal government to provoke states to make ethically challenging policy are working. Given the
new “model state policy” promulgated by NHTSA in 2017, it is possible to do a comparison to
identify narrative differences. The coming years of state policy development in this area will
show the vitality of the text data analytic method for this area.
References
FAA Modernization and Reform Act, (2012).
National Highway Traffic Safety Administration. (2013). Preliminary Statement of Policy
Concerning Automated Vehicles. Washington, DC Retrieved from
https://www.nhtsa.gov/staticfiles/.../pdf/Automated_Vehicles_Policy.pdf.
National Highway Traffic Safety Administration. (2016). Federal Automated Vehicles Policy:
Accelerating the Next Revolution in Roadway Safety. Washington, DC: US Department of
Transportation Retrieved from http://www.nhtsa.gov/nhtsa/av/av-policy.html.
Anderson, J. M., Kalra, N., Stanley, K. D., Sorensen, P., Samaras, C., & Oluwatola, O. A. (2014).
Autonomous Vehicle Technology: A Guide for Policymakers. Santa Monica, CA: RAND
Corporation. Retrieved from
http://www.rand.org/pubs/research_reports/RR443-2.html
Aube, T. (2016). Our self-flying car future, Tech Crunch, December 23, 2016. Retrieved from
https://techcrunch.com/2016/12/23/our-self-flying-car-future/
Bar-Yam, Y. (2004). Multiscale variety in complex systems. Complexity, 9(4), 37-45.
doi:10.1002/cplx.20014
Barry, B. (2008). Transcription as Speech-to-text data transformation. (PhD), PhD Dissertation,
The University of Georgia, Athens, GA.
Bellows, B. (2013). Floating toward a Sky near You: Unmanned aircraft systems and the
implications of the FAA Modernization and Reform Act of 2012. Journal of Air Law & Commerce,
78(3), 585-616.
Beyoud, L. (2016). FCC Studying Cybersecurity of Connected Vehicle Tech, July 28, 2016.
Retrieved from https://www.bna.com/fcc-studying-cybersecurity-n73014445537/
Bleske-Rechek, A., Nelson, L. A., Baker, J. P., Remiker, M. W., & Brandt, S. J. (2010). Evolution
and the trolley problem: people save five over one unless the one is young, genetically related,
or a romantic partner. Journal of Social, Evolutionary, and Cultural Psychology, 4(3), 115.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles.
Science, 352(6293), 1573-1576.
Christensen, C. M. (1997). Patterns in the evolution of product competition. European
Management Journal, 15(2), 117-127.
City of New York. (2016). Vision Zero. Retrieved from
http://www.nyc.gov/html/visionzero/pages/home/home.shtml
Colonna, K. (2012). Autonomous cars and tort liability. Case Western Reserve Journal of Law,
Technology & the Internet, 4, 81.
Commission, F. C. (2016). Rules to Facilitate Next Generation Wireless Technologies. Retrieved
from https://www.fcc.gov/document/rules-facilitate-next-generation-wireless-technologies
Crane, D. A., Logue, K. D., & Pilz, B. C. (2016). A Survey of Legal Issues Arising from the
Deployment of Autonomous and Connected Vehicles, A Report from the University of Michigan
Mobility Transformation Center. Retrieved from
https://www.law.umich.edu/events/automatedvehicles/Documents/University%20of%20Michi
gan%20Discussion%20Draft.April%202016.pdf
Darwin, C. M. (2008). Construction and Analysis of the University of Georgia Tobacco
Documents Corpus. (PhD), PhD Dissertation, The University of Georgia, Athens, GA.
Davies, A. (2016, 29 February 2016). Google's Self-Driving Car Caused its First Crash. Wired.
Dingus, T. A., Klauer, S. G., Neale, V. L., Petersen, A., Lee, S. E., Sudweeks, J. D., & Perez, M. A.
(2006). The 100-car naturalistic driving study, Phase II: Results of the 100-car field experiment,
No. HS-810 593.
Dingus, T. A., Neale, V. L., Klauer, S. G., Petersen, A. D., & Carroll., R. J. (2006). The development
of a naturalistic data collection system to perform critical incident analysis: an investigation of
safety and fatigue issues in long-haul trucking. Accident Analysis & Prevention, 38(6), 1127-
1136.
Dworkin, G. B. (1970). Determinism, free will, and moral responsibility.
Engineering.com. (2016). About Us. Retrieved from
http://www.engineering.com/Home/AboutUs/tabid/196/Default.aspx
Examining Ways to Improve Vehicle and Roadway Safety: Hearing before the House of
Representatives, 114th Cong. (2015, October 21).
Federal Aviation Administration. (2016). Section 333. Retrieved from
https://www.faa.gov/uas/beyond_the_basics/section_333/
Gold, C., & Bengler, K. (2014). Taking over control from highly automated vehicles. Advances in
Human Aspects of Transportation: Part II, 8, 64.
Gold, N., Colman, A. M., & Pulford, B. D. (2014). Cultural differences in responses to real-life
and hypothetical trolley problems. Judgment and Decision Making, 9(1), 65.
Goodall, N. (2014). Ethical decision making during automated vehicle crashes. Transportation
Research Record: Journal of the Transportation Research Board, 2424(2014), 58-65.
Grimmer, J., & Stewart, B. M. (2013). Text as Data: The Promise and Pitfalls of Automatic
Content Analysis Methods for Political Texts. Political Analysis. doi:10.1093/pan/mps028
Harris, M. (2017). GM to Launch the World's Largest Fleet of Self-Driving Cars, Documents
Reveal. Retrieved from http://spectrum.ieee.org/cars-that-think/transportation/self-
driving/exclusive-gm-to-launch-the-worlds-largest-fleet-of-selfdriving-cars
Harrison, M. A. (2011). College students' prevalence and perceptions of text messaging while
driving. Accident Analysis & Prevention, 43(4), 1516-1520.
Hart, S. L., & Christensen, C. M. (2002). The Great Leap: Driving Innovation from the Base of the
Pyramid. MIT Sloan Management Review, 44(1), 51-56.
Hayhurst, K. J., Maddalon, J. M., Neogi, N. A., & Verstynen, H. A. (2016). Safety and
Certification Considerations for Expanding the Use of UAS in Precision Agriculture. Paper
presented at the 2016 International Conference on Precision Agriculture (ICPA), Saint Louis,
MO; United States. https://ntrs.nasa.gov/search.jsp?R=20160010343
Hevelke, A., & Nida-Rumelin, J. (2015 ). Responsibility for crashes of autonomous vehicles: an
ethical analysis. Science and Engineering Ethics, 21(3), 619-630.
IEEE Global Initiative. (2015). Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing
with Artificial Intelligence and Autonomous Systems. Version 1. Retrieved from:
http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf
IEEE Global Initiative. (2017). Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing
with Artificial Intelligence and Autonomous Systems. Version 2. Retrieved from
http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
Jang, S.-W. (2013). Seven disruptive innovations for future industries. SERI Quarterly, 6(3), 94.
Jasanoff, S. (2009). The Fifth Branch: Science Advisers as Policymakers. Cambridge, MA: Harvard
University Press.
Jerome, J. (2016). Test Driving Privacy and Cybersecurity: Regulation of Smart Cars, November
18, 2016. Retrieved from https://cdt.org/blog/test-driving-privacy-and-cybersecurity-
regulation-of-smart-cars/
Kane, R. (1999). Responsibility, luck, and chance: Reflections on free will and
indeterminism. The Journal of Philosophy, 96(5), 217-240.
Katyal, N. (2013). Disruptive Technologies and the Law. Geo. LJ, 102(2013), 1685.
Kim, A., Perlman, D., Bogard, D., & Harrington, R. (2016). Review of Federal Motor Vehicle
Safety Standards (FMVSS) for Automated Vehicles Identifying potential barriers and challenges
for the certification of automated vehicles using existing FMVSS, Report Prepared for Intelligent
Transportation Systems Joint Program Office (ITS JPO), National Highway Traffic Safety
Administration (NHTSA). Retrieved from
http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-
enacted-legislation.aspx
Klauer, S. G., Guo, F., Simons-Morton, B. G., Ouimet, M. C., Lee, S. E., & Dingus., T. A. (2014).
Distracted driving and risk of road crashes among novice and experienced drivers. New England
Journal of Medicine, 370(1), 54-59.
Kosowatz, J. (2012). Bringing in the Harvest with Driverless Tractors, ASME Blog, May 2012.
Retrieved from https://www.asme.org/engineering-topics/articles/robotics/bringing-in-the-
harvest-with-driverless-tractors
Lee, T. B. (2017). Waymo makes history testing on public roads with no one at the wheel. Ars
Technica. Retrieved from https://arstechnica.com/cars/2017/11/fully-driverless-cars-are-here/
Lin, P. (2013). The ethics of saving lives with autonomous cars are far murkier than you think,
Wired. Retrieved from http://www.wired.com/opinion/2013/07/the-surprising-ethics-of-
robot-cars
Marchant, G. E., & Lindor, R. A. (2012). The Coming Collision between Autonomous Vehicles
and the Liability System. Santa Clara Law Review, 52(2012), 1321-1340.
Markides, C. (2006). Disruptive innovation: In need of better theory. Journal of Product
Innovation Management, 23(1), 19-25.
McCubbins, M. D., & Schwartz, T. (1984). Congressional Oversight Overlooked: Police Patrols
versus Fire Alarms. American Journal of Political Science, 28(1), 165-179.
Merat, N., Jameson, A. H., Lai, F. C., Daly, M., & Carsten, O. M. (2014). Transition to manual:
Driver behaviour when resuming control from a highly automated vehicle. Transportation
Research Part F: Traffic Psychology and Behaviour, 27(2014), 274-282.
Meseko, A. A. (2014). The Influence of Disruptive Innovations in A Cardinally Changing World
Economy. Journal of Economics and Sustainable Development, 5(4), 24-27.
Narla, S. R. (2013). The Evolution of Connected Vehicle Technology: From Smart Drivers to
Smart Cars to... Self-Driving Cars. Institute of Transportation Engineers. ITE Journal, 83(7), 22.
National Conference of State Legislatures. (2017). Autonomous Vehicles: Self-Driving Vehicles
Enacted Legislation, 2/21/2017. Retrieved from
http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-
enacted-legislation.aspx
National Safety Council. (2015). Motor Vehicle Deaths Increase by Largest Percent in 50 Years.
Retrieved from
http://www.nsc.org/Connect/NSCNewsReleases/Lists/Posts/Post.aspx?List=1f2e4535-5dc3-
45d6-b190-9b49c7229931&ID=103&var=hppress&Web=36d1832e-7bc3-4029-98a1-
317c5cd5c625
NHTSA. (2013). Visual-Manual NHTSA Driver Distraction Guidelines for In-Vehicle Electronic
Devices, Federal Register, The Daily Journal of the United States Government, April 26, 2013.
Retrieved from https://www.federalregister.gov/documents/2013/04/26/2013-09883/visual-
manual-nhtsa-driver-distraction-guidelines-for-in-vehicle-electronic-devices
NHTSA. (2016a). DOT/NHTSA Policy Statement Concerning Autonomous Vehicles 2016, Update
to Preliminary Statement of Policy Concerning Automated Vehicles, 2013. Retrieved from
https://www.dmv.ca.gov/portal/wcm/connect/87c75a9f-e1b4-44bf-8df4-
bf871115fc09/AV_Policy_Update_2016.pdf?MOD=AJPERES
NHTSA. (2016b). Secretary Foxx Unveils President Obama’s FY17 Budget Proposal of Nearly $4
Billion for Automated Vehicles and Announces DOT Initiatives to Accelerate Vehicle Safety
Innovations. Retrieved from https://www.nhtsa.gov/press-releases/secretary-foxx-unveils-
president-obama%E2%80%99s-fy17-budget-proposal-nearly-4-billion
NHTSA. (2016c). NHTSA Enforcement Guidance Bulletin 2016-02: Safety-Related Defects and
Automated Safety Technologies. Retrieved from:
https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/12507-av_site_fedreg_final-
defects-authority-enforcement-bulletin_2016.0._2.31.43_pm.pdf
NHTSA. (2016d). Vehicle to Vehicle Communications. Retrieved from:
https://www.nhtsa.gov/technology-innovation/vehicle-vehicle-communications
NHTSA. (2017a). Automated Driving Systems 2.0: A Vision for Safety. Retrieved from:
https://www.nhtsa.gov/manufacturers/automated-driving-systems
NHTSA. (2017b). Voluntary Safety Self-Assessment Template. Retrieved from:
https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/voluntary_safety_self-
assessment_for_web_101117_v1.pdf
New York Times Editorial Board. (2017). The F.C.C. Wants to Let Telecoms Cash in on the
Internet. The New York Times, Opinion, December 3, 2017.
SAE. (2012). Connectivity is key to ITS future, SAE Article, April 30, 2012. Retrieved from
http://articles.sae.org/10999/
SAE. (2014). Society of Automotive Engineers, Automated Driving Standard J3016. Retrieved
from https://www.sae.org/misc/pdfs/automated_driving.pdf
SAE. (2015). Guidelines for Safe On-Road Testing of SAE Level 3, 4, and 5 Prototype Automated
Driving Systems (ADS), Standard J3018-201503. Retrieved from
http://standards.sae.org/j3018_201503/
Schuelke-Leech, B.-A., & Barry, B. (2016). Complexity of textual data in entrepreneurship and
innovation research. In A. Kurckertz & E. Berger (Eds.), Complexity in Entrepreneurship,
Innovation and Technology Research Applications of Emergent and Neglected Methods (pp.
459-480). New York, NY: Springer.
Schuelke-Leech, B.-A., & Barry, B. (forthcoming). Philosophical and Methodological Foundations
of Text Data Analytics. In M. Dehmer & F. Emmert-Streib (Eds.), Frontiers in Data Science (pp.
459-480). Boca Raton, FL: CRC.
Stubbs, M. (1996). Text and Corpus Analysis: Computer-Assisted Study of Language and Culture.
Oxford, UK: Blackwell Publishers.
Thierer, A. D., & Hagemann, R. (2015). Removing roadblocks to intelligent vehicles and
driverless cars. Wake Forest Journal of Law & Policy, 5(2), 339-392.
U.S. Department of Commerce. (2015). The Digit Group: Commerce Trade Missions Provide
Global Expansion Opportunities, June 1, 2015. Retrieved from
https://www.commerce.gov/news/blog/2015/06/digit-group-commerce-trade-missions-
provide-global-expansion-opportunities
U.S. Department of Transportation. (2016). Federal Automated Vehicles Policy: Accelerating the
Next Revolution in Roadway Safety, September 2016. Retrieved from
https://www.transportation.gov/AV
U.S. Surface Transportation: Technology Driving the Future, Hearing Serial No. 114-23, House of
Representatives, 114th Cong. (2015).
Wilson, F. A., & Stimpson, J. P. (2010). Trends in fatalities from distracted driving in the United
States, 1999 to 2008. American Journal of Public Health, 100(11), 2213-2219.
Winner, L. (2014). Technologies as forms of life. In Ethics and Emerging Technologies (pp. 48-
60). London, UK: Palgrave Macmillan.
Wolin, S. S. (2016). Politics and Vision: Continuity and Innovation in Western Political Thought.
Princeton, NJ: Princeton University Press.
Woods, D., Dekker, S., Hollnagel, E., Cook, R., Johannesen, L. J., & Sarter, N. B. (2010). Behind
Human Error, Second Edition. Burlington, VT: Ashgate Publishing Ltd.
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles.
Science, 352(6293), 1573-1576.
Goodall, N. (2014). Ethical decision making during automated vehicle crashes. Transportation
Research Record: Journal of the Transportation Research Board, 2424, 58-65.
Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An
ethical analysis. Science and Engineering Ethics, 21(3), 619-630.
Lin, P. (2013). The ethics of saving lives with autonomous cars are far murkier than you think.
Wired.
Marchant, G. E., & Lindor, R. A. (2012). The coming collision between autonomous vehicles and
the liability system. Santa Clara Law Review, 52, 1321-1340.