Autonowashing: The Greenwashing of Vehicle Automation
Liza Dixon
Kamp-Lintfort, Germany
email: lizadixon@gmail.com
Abstract: The presence of automation is growing in the domains of our homes, workplaces and roadways. Vehicle automation in particular is raising critical human factors issues which directly impact human-machine interaction and road safety. It is especially important that users of partial and semi-autonomous systems in safety-critical contexts understand the limitations of the technology, in order to ensure appropriate reliance. Studies indicate that the language used to describe vehicle automation in marketing and in the media affects user perceptions of the system’s capabilities, and later their
interaction with the system. Much like “greenwashing”, the
capabilities of automation are often overstated. The lack of
public awareness of this issue is one of the most critical problems
impacting trust calibration and the safe use of vehicle
automation. Yet, it has gone unnamed and continues to affect
the public understanding of the technology. Hence, the case for
the use of the term “autonowashing” to describe the gap between the presentation of automation and the actual system capabilities is put forth. This paper presents case studies and discusses key issues in autonowashing, a concept which influences public perceptions of vehicle automation.
Keywords: Autonowashing; Trust; Automation; Human-Machine Interaction; Autonomous Vehicles.
I. INTRODUCTION
The promises of automated technologies from robotic
assistants to self-driving cars to improve our safety and
quality of life are seemingly boundless. Fantasies of the
future, in which road deaths are virtually eradicated and
mundane tasks are removed from human concern, freeing us
to enjoy greater independence and personal autonomy, are not
guaranteed. To make these visions a reality, automated
systems must not only be functional, reliable, and trustworthy;
they must be mindfully introduced to the humans they intend
to support.
However, terms like “automation”, “intelligent systems”, and “artificial intelligence” are used loosely to describe
everything from a smartphone’s autocorrect feature to a
safety-critical application such as an Advanced Driver
Assistance System (ADAS). As marketing buzzwords, their
meaning is becoming increasingly diluted. The
misinterpretation of AI and automation in the media is
confusing the public and is considered to be an economic and
humanitarian issue [1]. Economic, because the corporations
investing billions of dollars into the development of
automated systems [2] are counting on the return on
investment, and humanitarian because the misuse of
automated systems can be deadly [3]. Yet, in the case of
vehicle automation, a great irony exists in the way that the
technology is being promoted.
Unfortunately, like the greenwashing of the sustainability
movement [4], the capabilities of vehicle automation are
commonly inflated. A corporation, eager to profit in the short term, might exaggerate the capabilities of a product’s
automation in marketing verbiage. This is a substantial risk,
as users’ ideas about automated systems are formulated long
before their first contact with a system, and these ideas
influence how they later interact with the system [5]. A user who believes a system is more capable (i.e., more autonomous) than it really is, is more likely to overtrust and misuse the system [6], increasing their risk of an accident. News reports of human fatality and severe injury in association with partially automated driving systems cast a dark cloud over the technology, increasing customer wariness about its reliability. In the long-term, this may hinder acceptance and
“…can reduce or even nullify the economic or other benefits
that automation can provide” [7].
Therefore, using transparent language in the marketing and promotion of vehicle automation, language which clearly describes its abilities and its limitations, is critical to the safe use, acceptance, and scaled adoption of vehicle automation [8].
In this paper, the concept/term of autonowashing is
proposed to describe the gap between the way automation
capabilities are described to users and the system’s actual
technical capabilities. The key concepts of greenwashing, the
current state of vehicle autonomy, and trust in automation are
presented. In order to better understand what autonowashing looks like in practice, media headlines and automakers’ approaches to marketing vehicle automation are examined as case studies. The five signs of autonowashing, the consequences of autonowashing, and what might be done to alleviate its effects are also presented. The following questions are addressed:
Q1: What is autonowashing and why does it occur?
Q2: When, where, and how does autonowashing occur?
Q3: What are the effects of autonowashing and how might
they be mitigated?
A. Greenwashing
Over the past few decades, the need for more sustainable
lifestyle choices and business practices has been brought into
public awareness. With this awareness came the opportunity for businesses to adopt a “socially responsible” image, aligned with the concerns of their customers; hence, corporate social responsibility (CSR) was born. CSR is “the idea that a company should be interested in and willing to help
society and the environment as well as be concerned about the
products and profits it makes” [9]. CSR has proven to be so
effective, that claims of social responsibility have become
increasingly stretched and exaggerated. Corporations, freely using “green” terms and labels as needed, have elegantly misled customers into thinking they were purchasing more environmentally friendly goods than they actually were; essentially, preying upon the uninformed. This practice of exaggerating the “naturalness” or “eco-friendliness” of products and services grew so widespread that it was given a name: greenwashing.
Greenwashing was coined by environmentalist Jay
Westerveld in 1986 [10] and is defined as the practice of
making an unsubstantiated or misleading claim about the
environmental benefits of a product, service, technology or
company practice [4]. Greenwashing illuminates the
disconnect between a marketed image of corporate social
responsibility (CSR) and the reality of a corporation, product
or service’s contribution to the sustainability movement. Over the years, greenwashing has become more sophisticated, spawning the development of frameworks for identifying
greenwashing in practice (see the Six Sins of Greenwashing
[11]).
B. State of Commercial Vehicle Automation & Marketing
The highest level of vehicle automation in production vehicles on-road today is Level 2 automation [12].
The levels of vehicle automation, as defined by the Society of
Automotive Engineers (SAE), extend from Level 0 or “no
automation” to Level 5 or “full automation” [13]. Level 1 automation describes traditional cruise control systems (speed maintenance), while Level 2 describes “partial automation”, e.g. Adaptive Cruise Control (ACC), in which the vehicle is able to maintain speed, accelerate, decelerate and, in some cases, provide light steering for lane maintenance. However, Level 2 systems require full driver supervision at all times. Level 3 systems are described as “semi-autonomous” or “conditional automation”; the driver is not required to supervise the system in specific scenarios (e.g. in a traffic jam) but may be called upon by the system to take back control of the vehicle should conditions change. Level 4 and Level 5 automation (“full automation”, “self-driving”, “autonomous”) are similar, with the exception that Level 4 automation is geofenced, i.e. limited to specific geographic areas, while Level 5 automation is theoretically fully autonomous, requiring no human supervision or presence and able to operate in all conditions.
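This taxonomy lends itself to a compact encoding. The sketch below is a simplified, hypothetical Python paraphrase of the SAE J3016 levels (the names and function are invented for illustration, not part of the standard itself), mapping each level to the supervision it demands of the driver:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Simplified, illustrative encoding of the SAE J3016 levels."""
    NO_AUTOMATION = 0           # driver performs the entire driving task
    DRIVER_ASSISTANCE = 1       # e.g., traditional cruise control
    PARTIAL_AUTOMATION = 2      # e.g., ACC plus light steering; supervised
    CONDITIONAL_AUTOMATION = 3  # supervision relaxed in limited scenarios
    HIGH_AUTOMATION = 4         # no supervision, but geofenced
    FULL_AUTOMATION = 5         # no supervision or presence, all conditions

def supervision_required(level: SAELevel) -> str:
    """Return the human supervision a given level demands of the driver."""
    if level <= SAELevel.PARTIAL_AUTOMATION:
        return "full driver supervision at all times"
    if level == SAELevel.CONDITIONAL_AUTOMATION:
        return "driver must be ready to take back control when requested"
    return "no human supervision required"

# The highest level in production vehicles on-road today is Level 2:
print(supervision_required(SAELevel.PARTIAL_AUTOMATION))
# -> full driver supervision at all times
```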
According to a recent study, “automated driving hype is dangerously confusing customers”, and further, “Some carmakers are designing and marketing vehicles in such a way that drivers believe they can relinquish control” [14]. Because
there is no regulating body overseeing the language used to
describe assistive systems, automakers, also known as original
equipment manufacturers (OEMs), have gone unchecked in
their use of branded terms.
OEMs offering driver assistance systems as options in their vehicles (e.g. Audi, Ford, Tesla) use a wide vocabulary to describe these options and their abilities. The motivations
for this are clear: “Carmakers want to gain competitive edge
by referring to ‘self-driving’ or ‘semi-autonomous’ capability
in their marketing…” [14]. As a result, a recent survey of
1,567 car owners across seven different countries found that
71% of those surveyed believed it was possible to purchase a
“self-driving car” today [14].
Issues surrounding the language used to describe vehicle
automation are noted in scientific literature [8] and in the
media [15]. A study by Beller, Heesen, & Vollrath [16] confirms that a user’s false understanding of system infallibility “can lead to severe consequences in the case of automation failure”.
C. Trust in Automation
When automation (autopilot) was first introduced to the
aviation industry, it helped pilots evade many common
accident scenarios; simultaneously, new kinds of accidents
emerged. It was not until pilot education about the automation and “the concept of the human-automation team” was introduced that the benefits of the autopilot system were fully realized, supporting the aviation industry in achieving the extremely low accident rates of today [17]. In truth, one cannot remove human error from the system simply by removing the human operator [7].
A key component in the acceptance of automation is an
attitude of trust in the system [6]. Trust in automation (in the
context of automated driving systems) is defined by Körber
as, “The attitude of a user to be willing to be vulnerable to the
actions of an automated system based on the expectation that
it will perform a particular action important to the user,
irrespective of the ability to monitor or to intervene” [18].
Trust is not only critical for acceptance, but also for safety.
Both an overtrust in automation as well as a distrust in
automation create problems in human-automation interaction.
The goal then, is calibrated trust [19], or a level of user trust
in the system which matches the automation capabilities of the
system in use [6] (see Figure 1).
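The relationship illustrated in Figure 1 can be read as a comparison between a user’s trust and the system’s actual capability, with a recoverable margin around the calibrated line. The following minimal Python sketch illustrates that idea; the 0-to-1 scales, margin value, and labels are assumptions invented for this example, not quantities drawn from [6] or [19]:

```python
def trust_state(user_trust: float, system_capability: float,
                safe_margin: float = 0.1) -> str:
    """Classify trust calibration on a common, illustrative 0..1 scale.

    Trust is 'calibrated' when it tracks actual capability within a
    recoverable margin; beyond that margin, the user overtrusts
    (risking misuse) or distrusts (risking disuse).
    """
    gap = user_trust - system_capability
    if abs(gap) <= safe_margin:
        return "calibrated trust"
    return "overtrust (misuse risk)" if gap > 0 else "distrust (disuse risk)"

# Autonowashing inflates perceived capability, pushing trust upward:
print(trust_state(user_trust=0.9, system_capability=0.5))   # overtrust (misuse risk)
print(trust_state(user_trust=0.55, system_capability=0.5))  # calibrated trust
```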
Trust in partially automated vehicles begins “long before
a driver’s first experience with the system, and continues long
thereafter” [5]. Multiple studies have confirmed that the
branded terms (e.g. Tesla’s “Autopilot”, Audi’s “Traffic Jam
Pilot”) used to describe vehicle technology influence perceptions of the technology, and that “name alone is not enough to appropriately orient drivers to system limitations” [8].
Stories about automation in the media, in advertising, and
those heard by word of mouth have an effect on trust [5], [8].
Further, unrealistic expectations of assistive and self-driving
technologies could be a barrier to acceptance [20]. Initial
acceptance may be increased if the driver’s expectations of the
system are unrealistically high. However, after practical
experience with the automation which reveals its
shortcomings, “trust and acceptance may be irreparably
harmed” [20]. Therefore, supporting drivers with realistic
expectations regarding the capabilities of automation is
important for acceptance in the long-term [21].
II. AUTONOWASHING
Adapting the definition of greenwashing [4] for automation, autonowashing is defined as the practice of making unverified or misleading claims which
misrepresent the appropriate level of human supervision
required by a partially or semi-autonomous product, service,
or technology. Autonowashing may also be extended to fully
autonomous systems, in cases where system capabilities are
exaggerated beyond what can be performed reliably.
Autonowashing makes something appear to be more
autonomous than it really is.
The objective of autonowashing is to differentiate and/or
offer a competitive advantage to an entity, through the use of
superficial verbiage meant to convey a level of system
competence that is misaligned with the technical
specifications of the system. Autonowashing may also occur
inadvertently, when one unknowingly repeats erroneous
information about the capabilities of an automated system to
another. Autonowashing is a form of disinformation; it is, in a
sense, viral.
A. Effects of Autonowashing
The results of autonowashing include, but are not limited to: misuse of a system due to inappropriate reliance, leading in turn to disuse of the system due to performance concerns.
Autonowashing does not support calibrated trust in
automation and increases the likelihood that a user will
overtrust a system (see Figure 1) [6].
Those who have been autonowashed believe that an
automated system is more capable than it really is, and hence
may be confused about how much supervision the system
requires from a driver. They may refer to an ADAS as “self-
driving” or “autonomous” and be more inclined to engage in
risky misuse, such as removing their hands from the wheel or
looking away from the road ahead, increasing the risk of
accident.
Figure 1. Illustration of the relationship between trust,
automation capability, overtrust, distrust, calibrated trust
and autonowashing [adapted from 6]. Autonowashing
affects trust, resulting in a tendency to overtrust, increasing
the risk of system misuse. A recoverable or “safe” margin of
error (light gray) in trust calibration is to be expected in
use. However, autonowashing may push the user beyond
this margin, into a situation where an accident is more
likely.
Although the term is not directly mentioned, the concept of autonowashing has been receiving media attention [12], [15], [22]-[24], is discussed in numerous studies [8], [17], [20], and is the subject of multiple lawsuits [25]-[27].
B. Case Study: Headlines
Over the past decade, terms such as “autonomous”, “driverless”, and “self-driving” have made increasing appearances in media headlines. These buzzwords are often used by outlets and publications to describe all levels of vehicle automation, baiting interest, sales, and “driving” traffic to their respective sites. It is not uncommon to come across an article discussing Level 2 automation as “autonomous” or a testing vehicle as “driverless”, even though there is a human safety driver monitoring the vehicle and the environment at all times (see Table 1).
TABLE I. AUTONOWASHED HEADLINES & CONTRADICTIONS

Headline: “Joshua Brown, Who Died in Self-Driving Accident, Tested Limits of His Tesla” – The New York Times [28]
Contradiction: Not “self-driving”: this Tesla had Autopilot (Level 2, ADAS), which requires full driver supervision [29].

Headline: “Volvo puts 100 British families in driverless cars” – Financial Times
Contradiction: Not “driverless”: all vehicles have professional safety drivers monitoring the driving environment, ready to intervene [30].

Headline: “Tesla Has Begun Making All Its New Cars Self-Driving” – NPR [31]
Contradiction: Not “self-driving”: Tesla reportedly upgraded the hardware in its vehicles, which (according to Tesla) could one day make the vehicles “fully self-driving” [32].

Headline: “Elon Musk Defends Tesla Following Latest Self-Driving Accident” – Adweek [33]
Contradiction: Not “self-driving”: Tesla has Autopilot (Level 2, ADAS), which requires full driver supervision [29].

Headline: “Fully driverless cars are on public roads in Texas. [Subhead:] Drive.ai is the second company to remove the safety driver from its autonomous vehicles” – The Verge
Contradiction: Not “fully driverless”: a modified vehicle with a professional safety driver in the passenger seat, monitoring the driving environment, ready to intervene [34].

Headline: “Shocking moment a Tesla driver is filmed ASLEEP behind the wheel as his self-driving car travels at high speeds on the California interstate” – MailOnline [35]
Contradiction: Not a “self-driving” vehicle: this Tesla has Autopilot (Level 2, ADAS), which is being “successfully” (without incident) misused, as it requires full driver supervision [35].
Table 1. Examples of autonowashed headlines; verbiage
which gives the reader the impression that the vehicle
being described is more autonomous than it really is. The
technical contradiction to the headline is presented.
C. Case Study: Tesla Autopilot & Full Self-Driving
In 2014, Tesla introduced the first iteration of its Level 2 Advanced Driver Assistance System, called Autopilot [36].
The term autopilot, as defined by the Cambridge English Dictionary [37], is “a device that keeps aircraft, spacecraft, and ships moving in a particular direction without human involvement”. The choice to name an ADAS which requires constant human supervision “Autopilot” has been criticized by experts, several organizations, and government entities. In a letter to Tesla, the German government (Federal Motor Transport Authority) wrote: “In order to prevent misunderstanding and incorrect customers’ expectations, we demand that the misleading term Autopilot is no longer used in advertising the system.” [22].
Since its initial release, Autopilot has received hardware
updates in newer vehicle models and continues to receive
software updates over the air [29]. These updates have
improved the reliability of the system under certain conditions
but have not yet made the system “more autonomous”—
Autopilot remains a Level 2 system as the same level of driver
attention is required to operate the system safely. It is
explicitly stated on the Tesla website and in the vehicle
owner’s manual in multiple instances that the driver must keep
their hands on the wheel and their attention on the road ahead
[29], [36]. Despite these statements, Tesla is the only OEM currently marketing Level 2, ADAS-equipped vehicles as “self-driving” [38].
In October 2016, Tesla announced that “all Tesla vehicles produced in our factory… will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver” [32] (see Figure 2). This
announcement also came with the sale of a new Autopilot
option called “Full Self-Driving Capability” (FSD). Tesla
stated that customers who purchased the FSD upgrade would
not experience any new features initially but that in the future,
this upgrade would enable the vehicle to be “fully self-
driving” [39]. This option was later removed, but then
subsequently reintroduced for sale in February of 2019.
Figure 2. Tesla.com homepage promoting “Full Self-Driving
capability” upgrade after its first release in October 2016
[40].
Along with FSD, Tesla released a series of promotional
videos [41], showing one of its vehicles navigating through
intersections, stop signs, highway on ramps, off ramps, etc.
without any human intervention [32]. The steering wheel is
seen moving independently and the driver’s hands are seen
resting in their lap, left off of the steering wheel. The European
New Car Assessment Program responded to these videos in a
2018 report, stating that Tesla had released videos which are
“confusing consumers about the actual capabilities of the
Autopilot system” [42].
Tesla’s CEO Elon Musk has promoted “Full Self-Driving Capability” on his personal Twitter account, in one case stating “Tesla drives itself (no human input at all) thru urban streets to highway to streets, then finds a parking spot” without clarifying that this feature was not yet enabled [43]. Further, Musk has been seen in multiple TV interviews [23], [44], [45] removing his hands from the wheel with Autopilot active. In one of these examples, he did so and stated, “See? It’s on full Autopilot right now. No hands, no feet, nothing,” as he demonstrated the system to the interviewer, who was sitting in the passenger seat (Figure 3) [45]. This behavior is at odds with appropriate use and is explicitly warned against in the Tesla Owner’s Manual [29].
Figure 3. Elon Musk removing his hands from the wheel
with Autopilot engaged during an interview [45].
a) Legal & Regulatory Troubles
In 2017, while Tesla was marketing Full Self-Driving, a class action lawsuit was filed against the company, alleging that Tesla had “mislead customers” about its “Enhanced Autopilot” option, having stated that it would “improve safety and reduce the possibility of collisions.” Upon its release, after significant delay, the option was found to be “essentially unusable and demonstrably dangerous” [46]. The initial complaint stated: “Contrary to what Tesla represented to them, buyers of affected vehicles have become beta testers of half-baked software that renders Tesla vehicles dangerous if engaged”. Tesla later settled [47].
National Transportation Safety Board (NTSB)
investigations have found that Tesla Autopilot was active
during multiple fatal accidents [12]. Tesla is currently in
litigation with the families of multiple individuals who have
died while using Autopilot; they claim that the system is
“defective” [26], [27] and that Tesla “had specific knowledge
of numerous prior incidents and accidents in which its safety
systems on Tesla vehicles completely failed causing
significant property damage, severe injury, and catastrophic
death to its occupants” [26]. A recent lawsuit stated that one of Autopilot’s late users “believed his Model 3 was safer than a human-operated vehicle because Defendant, Tesla claimed superiority regarding the vehicle’s Autopilot system, including Tesla’s ‘full self-driving capability’…”. The driver believed that the system would “prevent fatal injury resulting from driving into obstacles and/or vehicles in the path of the subject Tesla vehicle” [48].
In a 2018 accident in which a driver using Tesla Autopilot struck the back of a fire truck, the NTSB concluded the accident was “…due to his inattention and overreliance on the car’s advanced driver assistance system, Tesla’s Autopilot design which permitted the driver to disengage from the driving task, and the driver’s use of the system in ways inconsistent with guidance and warnings from Tesla” [49]. In
another crash involving Autopilot which injured a driver, a
legal complaint stated that the driver was told by a Tesla
salesperson that, “she could drive in autopilot mode and just
touch the steering wheel occasionally” and “that touching the
steering wheel to maintain autopilot mode was demonstrated
to [her]” [25], which is misaligned with the Tesla manual’s guidelines for safe use [29].
D. Case Study: Mercedes-Benz E-Class
In 2016, Mercedes-Benz launched a new advertising
campaign called “The Future” in order to promote the new
automated features launching in its E-Class sedan. The
campaign stated:
“Is the world truly ready for a vehicle
that can drive itself? An autonomous-
thinking automobile that protects those
inside and outside. Ready or not, the
future is here. The all new E-Class:
self-braking, self-correcting, self-
parking. A Mercedes-Benz concept
that’s already a reality.” [50]
The headline of one of the ads read, “Introducing a self-driving car from a very self-driven company.” [51]. It was followed by a paragraph describing the vehicle’s new convenience and safety systems (Figure 4).
Figure 4. Copy in a retracted 2016 print advertisement
promoting Mercedes-Benz’s 2017 E-Class [51].
Coincidentally, this campaign launched shortly after a fatal accident in which Tesla’s Autopilot was in use. This prompted further scrutiny from consumer groups, who alerted the Federal Trade Commission (FTC) to the campaign over concern that presenting the vehicle’s assistive features as “self-driving” was deceptive and misleading to customers.
However, Mercedes-Benz defended itself against allegations
that consumers were being misled about the car’s self-driving
capabilities [50]. The campaign was later pulled by
Mercedes [51].
E. Mitigating Autonowashing
How then, might we ease the effects of autonowashing?
What steps might be taken to support autonowashed drivers in
calibrating their trust and ensuring their safe use of vehicle
automation? Specifically, of partial automation (Level 2) and conditional automation (Level 3)?
1) Identification
The first step in the mitigation of autonowashing is being
able to identify what it looks like in the real world. Like The
Six Sins of Greenwashing [11], autonowashing can take
several forms. The signs of autonowashing can be grouped
into five categories: vague language, no proof/fibbing, false
idols, autonoporn, and the hidden trade-off (see Table 2).
TABLE II. FIVE SIGNS OF AUTONOWASHING

Sign: Vague language
Description: A term or claim that is so poorly defined or so broad that its real meaning is likely to be misunderstood by the user.
Example: “Autopilot”

Sign: No proof/Fibbing
Description: Deceitfully making a claim about a system’s capabilities; claiming to have autonomous capabilities which have not been verified by a third party.
Example: “Full Self-Driving capability”

Sign: False idols
Description: When inappropriate reliance/system misuse is modeled to an audience by a figure of influence or authority.
Example: Tesla CEO’s interactions with Autopilot during multiple televised interviews [23], [44], [45] (see Figure 3).

Sign: Autonoporn
Description: Media or demonstrations which feature idealized functionality of automated systems operating successfully with little to no human supervision or interaction. Typically a vertical prototype, not meant to be generalized.
Example: Video footage of steering wheels moving independently; photos of users reading books or watching a movie in the driver’s seat [41].

Sign: Hidden trade-off
Description: Focusing attention on one particular attribute while concealing crucial information, especially from the human supervision point of view.
Example: “Driver assistance systems make driving safer” […provided they are appropriately supervised].

Adapted for autonowashing from [11].

Table 2. The Five Signs of Autonowashing, their descriptions, and an example of each in the real world.
The presence of a sign of autonowashing does not necessarily mean that autonowashing is taking place. For example, a company may run an ad of autonopornographic footage of technical projects in its pipeline while explicitly stating that these systems or features are futuristic and not yet available for purchase.
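As a toy illustration of identification in practice, the sketch below scans marketing copy for the “vague language” sign using a hand-picked term list. The term list, function, and Level threshold are invented assumptions for illustration only, not a validated detector:

```python
# Hypothetical, keyword-based flagger for the "vague language" sign.
# The term list is illustrative, not exhaustive or validated.
VAGUE_TERMS = ["autopilot", "self-driving", "autonomous", "driverless"]

def flag_vague_language(copy_text: str, sae_level: int) -> list[str]:
    """Flag vague autonomy terms used to describe a supervised (Level <= 2) system."""
    text = copy_text.lower()
    if sae_level > 2:
        return []  # such terms may be defensible for higher levels of automation
    return [term for term in VAGUE_TERMS if term in text]

ad = "Experience Full Self-Driving capability with Autopilot."
print(flag_vague_language(ad, sae_level=2))
# -> ['autopilot', 'self-driving']
```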
2) Federal Trade Commission Act, Section 5
Under Section 5 of the Federal Trade Commission Act
[52], consumers are protected against “unfair or deceptive acts
or practices” by corporations. Autonowashing may be in violation of this act as, 1) autonowashing is “unfair” and “causes or is likely to cause substantial injury to consumers”, and 2) autonowashing is “deceptive” because it “misleads or is likely to mislead the consumer”. Further, Section 5 also states that
fine print may not be used to “correct potentially misleading
headlines” [52], a marketing tactic which is often used in
instances of autonowashing in order to “protect” the company
behind the promotion. Consumer advocate groups are calling
for the FTC to investigate instances of this [38]. Enforcement
of Section 5 in relation to severe instances of autonowashing
not only protects consumers (drivers) and roadways but could
also support the industry in the safe deployment of vehicle
autonomy.
3) Terminology
Autonowashing often begins when vague language is used
to describe automation. A review by the American
Automobile Association (AAA) found that the National
Highway Traffic Safety Administration (NHTSA), Insurance
Institute for Highway Safety (IIHS), Society of Automotive
Engineers (SAE) and other regulatory and research
organizations have all used different technology names “to describe systems with the same underlying technology” [24].
The SAE levels [13], designed by and for engineers, are
internationally accepted as the standard for vehicle autonomy.
However, the SAE levels are often misinterpreted by the
media and consumers. There is currently no standardized
vocabulary for vehicle automation that is designed to be
consumer-facing; a critically missing piece for supporting drivers’ appropriate reliance [15].
Therefore, the AAA has put forth a recommendation of
proposed terminology to describe vehicle automation features.
AAA encourages automakers “to include the common naming
for advanced safety systems on the window sticker, owner’s
manual, and other collateral materials so consumers can more
clearly understand what technology is present [in] the
vehicle.” They also acknowledge that automakers may wish
to continue to use their branded terms for assistive systems, in
tandem with their suggested terms [24]. Industry experts have
suggested that multiple levels are not efficient for public use and that the classification of vehicle automation should
instead be divided into two categories:
Geotonomous/Geotonomy (full automation limited only by
geography) and Human-Assisted System (HAS) (all systems
which require human supervision, e.g. ADAS) [15].
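Under that two-category proposal, the consumer-facing question collapses to whether the system still requires human supervision. Below is a minimal sketch of the mapping; the function, names, and the treatment of Level 5 are hypothetical interpretations of the proposal in [15], not part of it:

```python
def consumer_label(sae_level: int) -> str:
    """Map an SAE level to the proposed two-category, consumer-facing vocabulary.

    Levels 0-3 all demand some human supervision -> Human-Assisted System (HAS).
    Level 4 is full automation limited only by geography -> Geotonomous.
    Level 5 (full automation everywhere) does not exist on-road today.
    """
    if sae_level <= 3:
        return "Human-Assisted System (HAS)"
    if sae_level == 4:
        return "Geotonomous"
    return "Fully autonomous (theoretical)"

# Every production system on-road today falls in the first category:
print(consumer_label(2))  # -> Human-Assisted System (HAS)
```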
4) Driver Education
Public awareness of self-driving technologies and driver assistance systems is an essential part of shaping safe
interactions. There are organizations such as the Partners for
Automated Vehicle Education (PAVE), a collective of
OEMs, self-driving companies, academics, and government
organizations whose aim is to “inform and educate the public
and policymakers on the facts regarding automated vehicles”
[53]. Let’s Talk Self-Driving is another group, led by self-
driving company Waymo and partnered with groups such as
AAA, the National Safety Council, and other national and
community organizations supporting safer roadways. They
aim to educate the public about how self-driving technology
works, how it can improve safety and how it might change our
roadways in the future [54]. Mass media campaigns are
suggested as a means to educate society and promote a
“baseline level of understanding” of automation and
intelligent systems, supporting the calibration of trust via the
understanding of system limitations [55].
The point of sale is another vital touchpoint for supporting
calibrated trust in automation. In a survey, 52% of drivers
responded that they would prefer to be educated about their
new car at the dealership [56]. Yet, “…new vehicle customers
are not offered additional training in relation to the use of
automated vehicle subsystems at point of sale” [57]. While
many salespersons may be willing to assist customers with
this training, it is generally the responsibility of the customer
to recognize the need for it and request it. Additionally, the
quality of the training the customer receives may vary greatly
[58]. Furthermore, vehicles can now be purchased online and delivered to a home address without any opportunity for in-person, expert training. It is also possible that a driver might be given a rental vehicle equipped with an ADAS and 1) not be aware that the vehicle has such a system, or 2) have had no training on how the system works, the conditions for safe use, or what the system’s limitations are.
Casner & Hutchins have presented a preliminary set of
standards for driver education prior to the use of partially
automated vehicles [17]. In the future, these standards might
be adapted for showroom and rental salespeople, as well as
vehicle delivery professionals in their training, as these
individuals are a critical point of contact for first-time users.
5) Driver Monitoring Systems
A foundational heuristic of the ergonomics of human-system interaction, as defined by ISO 9241-110, is error tolerance. A dialogue (“interaction between a user and an interactive system”) is error tolerant if it supports the user in achieving their intended results with the system, despite errors in input. This is achieved by means of error control, correction and management [59]. In the context of vehicle automation, a
key part of designing for error tolerance is closing the human-
automation feedback loop by means of a driver monitoring
system (DMS).
A DMS “monitors driver condition by various means to detect drowsiness or lack of attention” [24] and is especially
useful in the context of Level 2/3 automation [60]. There are
three types of DMSs used in vehicles with partial automation
on-road today: 1) head & eye tracking (e.g. Cadillac
SuperCruise [61]), 2) steering wheel capacitive sensors (hand
detection, e.g. BMW Traffic Jam Assist [62]), 3) steering
wheel torque sensors (assistive steering resistance detection,
e.g. Tesla Autopilot [29]). However, the robustness and cost
of these driver monitoring systems varies. Capacitive sensors, head and eye trackers, or some combination of sensors are the most reliable solutions, while torque sensors are the least effective (especially on straight, flat roads, where drivers are more likely to become inattentive) [62]. A reliable DMS
assists the driver in proper system use by curbing misuse and
supporting trust calibration. Autonowashing leads to
overtrust, which then leads to misuse [6]; therefore, a DMS is
the final line of defense in curbing misuse and preventing
accident, injury and/or death due to autonowashing (see
Figure 5).
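To make the role of a DMS as a final line of defense concrete, the sketch below models a hypothetical escalation loop: sensor readings of driver attention feed an inattention timer, and sustained inattention escalates from warnings to a safe stop. The sensor fields, thresholds, and actions are invented for illustration and do not describe any specific OEM’s implementation:

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    hands_on_wheel: bool  # e.g., from capacitive or torque sensors
    eyes_on_road: bool    # e.g., from a camera-based head/eye tracker

def dms_action(state: DriverState, inattentive_seconds: float) -> str:
    """Escalate interventions as inattention persists (illustrative thresholds)."""
    if state.hands_on_wheel and state.eyes_on_road:
        return "no action"
    if inattentive_seconds < 5:
        return "visual warning"
    if inattentive_seconds < 10:
        return "audible warning"
    return "disengage assistance and initiate safe stop"

# A driver looking away with hands off the wheel for 12 seconds:
print(dms_action(DriverState(hands_on_wheel=False, eyes_on_road=False), 12.0))
# -> disengage assistance and initiate safe stop
```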
III. DISCUSSION
Acts of greenwashing have effectively stalled meaningful progress towards the development of more sustainable societies; ironic, of course, as this is the very goal they appear to represent. The coining of the term greenwashing captured the essence of the problem: corporations masquerading their products and services as “eco-friendly”. The term has been functional in building consensus and has helped to raise public awareness of the misleading tactics being used by companies against consumers. Awareness of greenwashing as an issue has inspired individuals to inform themselves and begin to ask the questions necessary to close the gap toward the truth.
As autonomy evolves and its presence in our everyday lives grows, so do the human factors challenges it puts forth. Inconsistencies between users’ expectations of a system’s capabilities and the system’s technical capabilities create challenges in the calibration of user trust. This gap, between the system functionality and the user’s expectation, formed by misleading information in the media and marketing promotions, is the result of autonowashing.
The conversation about automation will continue to be bi-level: experts require a vocabulary that allows them to
address technical challenges, while users simultaneously
require their own vocabulary, able to support even novices in
appropriate, safe interactions with automation. Yet, neither of
these conversations happens in a vacuum; hence, the fuzzy
semantics of automation are of no surprise. Because research
shows that trust calibration begins before users’ first contact
with vehicle automation, and is influenced by the media &
advertisements, a standardized terminology is an essential first
step in alleviating autonowashing.
The current state of vehicle autonomy on-road (Level 2) requires the full attention and monitoring of a human driver at all times. Although Tesla is not the only automaker guilty of autonowashing, Tesla is the only OEM currently marketing Level 2 vehicles as “self-driving” [38]; for this reason, Tesla is predominantly featured as a case study in this paper. Furthermore, the name of Tesla’s ADAS, Autopilot, implies an unspecified level of human inattention, which studies have shown leads to confusion about the amount of human supervision required to safely operate the system (vague language, Table 2) [12].
A system that knows the state of its operator is better able
to support them in meeting their goals. Driver state monitoring
is a vital part of the human-automation team, completing the
feedback loop necessary for safe use. Autopilot is not
supported by a robust DMS [62], as Tesla employs neither capacitive hand detection nor head and eye tracking sensors in its vehicles, either of which would be a key supporting/backup system for ensuring appropriate reliance and road safety. The
failure to implement existing technologies to build an error
tolerant system is a lapse in systems design principles [59] and
a failure of corporate social responsibility.
Further, false idols (see Table 2), which model inappropriate reliance, encourage improper use of the system. As for the on-camera behavior of Tesla’s CEO, Elon Musk, subject matter experts have called this out as abusive and as risking “widening the gap between what the car seems to do and what it actually does” [63]. Musk’s behavior and words
are acts of autonowashing, when he makes Autopilot appear
to be more autonomous than it really is. Autonowashed
portrayals of vehicle automation such as this encourage
unrealistic expectations of the technology, which are
counterproductive to its acceptance and adoption.
OEMs must take responsibility for their role in calibrating
trust in vehicle automation. Trust cannot be an afterthought,
left for human factors professionals to tackle in summative
evaluations. It must be positioned by OEMs as a core principle
in the functional design of systems, their testing and
evaluation. The systems must further be deployed with
thoughtful, trustworthy marketing promotions, taking into
account the limitations of the system. This is a worthy cause, as companies which prioritize trustworthy marketing secure, in the long term, reduced customer acquisition costs, higher profit margins, growth and a competitive advantage [64].
Organizations such as the PAVE Campaign and Let’s Talk
Self-Driving are needed to raise public awareness around the
current state of vehicle automation. These groups will be
important touchpoints between the industry and the consumer,
and can help to prime users via advertisements and
educational materials about where vehicle autonomy is today,
where it is going next, and the goals and benefits of such
technology.
Further, Casner & Hutchins’ [17] standards may be used
as a foundation to build a standard protocol for dealerships or
vehicle rental agencies selling/lending vehicles equipped with
Level 2/3 systems. In the future, regulation may perhaps prohibit the handover (sale/rental) of a vehicle to a driver who has not been briefed on the system’s capabilities and limitations.
The effects of autonowashing on vehicle automation are
varied, and therefore must be mitigated by a multifaceted
approach. Improvements in the areas of a standardized
terminology, systems design, and driver education all support
the easing of the negative impacts of autonowashing. In the
case of automated driving, delays in the advancement, release,
and acceptance of this technology are costly: for companies,
their investors and for anyone who uses public roadways, at
risk of being one of the 1.35 million people who will die this
year in auto accidents, internationally [65].
IV. CONCLUSION
Automation, in vehicles and beyond, possesses a powerful
potential for good; but with great power comes great social
responsibility. To realize the full benefits of automation in the
long-term, systems firstly must be designed in a way that is
human-centered; then, in order to support appropriate reliance
from the beginning, they must be carefully introduced to
users.
Despite the prevalence of the concept, autonowashing has
not been given a formal name until now. The contribution of
this paper is the offering of a unified term to build consensus
around this issue. While this paper primarily addresses
autonowashing in the context of partially/semi-autonomous
vehicles, this term extends into other contexts where
automation is used, particularly those in which the
consequences of misuse are heightened and/or safety critical.
Supporting the proper adoption of automation is an effort
to improve the quality of life for the humans it serves. Giving
this problem a name and identifying autonowashing for what
it is, allows us to tackle the challenges it presents, head-on.
ACKNOWLEDGMENTS
Special thanks to Prof. Dr. Karsten Nebe, Prof. Dr.
William M. Megill, and Mr. Amrith Shanbhag for their
guidance and support.
FUNDING & CONFLICTS OF INTEREST
This research was completed independently and did not
receive any specific grant from funding agencies in the public,
commercial, or not-for-profit sectors. The author is
unaffiliated with any automotive company or institution, by
means of employment, investment or otherwise.
REFERENCES
[1] K. Shahriari and M. Shahriari, “IEEE standard
review Ethically aligned design: A vision for
prioritizing human wellbeing with artificial
intelligence and autonomous systems,” 2017 IEEE
Canada Int. Humanit. Technol. Conf., pp. 197–201,
2017.
[2] C. Kerry and J. Karsten, “Gauging investment in
self-driving cars,” The Brookings Institution, 2017.
[Online]. Available:
https://www.brookings.edu/research/gauging-
investment-in-self-driving-cars/. [Accessed: 09-Oct-
2019].
[3] B. Brown and E. Laurier, “The trouble with autopilots: Assisted and autonomous driving on the social road,” Proc. 2017 CHI Conf. Hum. Factors Comput. Syst., pp. 416–429, 2017.
[4] M. Rouse, “What is greenwashing?,” WhatIs.com,
2007. [Online]. Available:
https://searchcrm.techtarget.com/definition/greenwa
shing. [Accessed: 09-Sep-2019].
[5] F. Ekman, M. Johansson, and J. Sochor, “Creating
Appropriate Trust for Autonomous Vehicle
Systems: A Framework for Human-Machine
Interaction Design,” 95th Annu. Meet. Transp. Res.
Board, pp. 1–7, 2017.
[6] J. Lee and K. See, “Trust in Automation: Designing
for Appropriate Reliance,” Hum. Factors J. Hum.
Factors Ergon. Soc., vol. 46, no. 1, pp. 50–80, 2004.
[7] R. Parasuraman and V. Riley, “Humans and
Automation: Use, Misuse, Disuse, Abuse,” Hum. Factors, vol. 39, no. 2, pp. 230–253, 1997.
[8] H. Abraham, B. Seppelt, B. Mehler, and B. Reimer,
“What’s in a Name: Vehicle Technology Branding & Consumer Expectations for Automation,” Proc. 9th ACM Int. Conf. Automot. User Interfaces Interact. Veh. Appl. (AutomotiveUI ’17), pp. 226–234, 2017.
[9] Cambridge English Dictionary, “Definition:
Corporate Social Responsibility,” Cambridge
University Press, 2019. [Online]. Available:
https://dictionary.cambridge.org/dictionary/english/
corporate-social-responsibility. [Accessed: 26-Sep-
2019].
[10] B. Watson, “The troubling evolution of corporate
greenwashing,” The Guardian, 2016. [Online].
Available:
https://www.theguardian.com/sustainable-
business/2016/aug/20/greenwashing-
environmentalism-lies-companies. [Accessed: 12-
Sep-2019].
[11] TerraChoice Environmental Marketing Inc., “The
‘Six Sins of Greenwashing’ A Study of
Environmental Claims in North American
Consumer Markets,” 2007.
[12] The Insurance Institute for Highway Safety, “New
studies highlight driver confusion about automated
systems,” IIHS Research Report, 2019. [Online].
Available: https://www.iihs.org/news/detail/new-
studies-highlight-driver-confusion-about-
automated-systems. [Accessed: 06-Sep-2019].
[13] Society of Automotive Engineers (SAE)
International, “Taxonomy and Definitions for Terms
Related to Driving Automation Systems for On-
Road Motor Vehicles: J3016_201806,” 2018.
[14] Thatcham Research, “Automated Driving hype is
dangerously confusing drivers, study reveals,” 2018.
[Online]. Available:
https://news.thatcham.org/pressreleases/autonomous
-driving-hype-is-dangerously-confusing-drivers-
study-reveals-2767283. [Accessed: 19-Sep-2019].
[15] A. Roy, “The Language of Self-Driving Cars Is
Dangerous—Here’s How To Fix It,” The Drive,
2018. [Online]. Available:
https://www.thedrive.com/tech/20553/the-language-
of-self-driving-cars-is-dangerous-heres-how-to-fix-
it. [Accessed: 09-Sep-2019].
[16] J. Beller, M. Heesen, and M. Vollrath, “Improving
the driver-automation interaction: An approach
using automation uncertainty,” Hum. Factors, vol.
55, no. 6, pp. 1130–1141, 2013.
[17] S. M. Casner and E. L. Hutchins, “What Do We Tell
the Drivers? Toward Minimum Driver Training
Standards for Partially Automated Cars,” J. Cogn.
Eng. Decis. Mak., 2019.
[18] M. Körber, “Theoretical considerations and
development of a questionnaire to measure trust in
automation,” in 20th Triennial Congress of the IEA,
2018.
[19] B. M. Muir, “Trust in automation: Part I.
Theoretical issues in the study of trust and human
intervention in automated systems,” Ergonomics,
vol. 37, no. 11, pp. 1905–1922, 1994.
[20] M. Beggiato and J. F. Krems, “The evolution of
mental model, trust and acceptance of adaptive
cruise control in relation to initial information,”
Transp. Res. Part F Traffic Psychol. Behav., vol. 18,
pp. 47–57, 2013.
[21] M. Nees, “Acceptance of Self-driving Cars: An
Examination of Idealized versus Realistic Portrayals
with a Self- driving Car Acceptance Scale,” Proc.
Hum. Factors Ergon. Soc. Annu. Meet., vol. 60, no.
1, pp. 1449–1453, 2016.
[22] V. Eckert, “Germany says Tesla should not use
‘Autopilot’ in advertising,” Reuters, 2016. [Online].
Available: https://www.reuters.com/article/us-tesla-
germany/germany-says-tesla-should-not-use-
autopilot-in-advertising-idUSKBN12G0KS.
[Accessed: 21-Sep-2019].
[23] J. McPherson, “In His 60 Minutes Appearance, Elon
Musk Was Not On The Level(s),” Forbes.com, 11-
Dec-2018. [Online]. Available:
https://www.forbes.com/sites/jimmcpherson/2018/1
2/11/in-his-60-minutes-appearance-elon-musk-was-
not-on-the-levels/. [Accessed: 19-Sep-2019].
[24] American Automobile Association, “Advanced
Driver Assistance Technology Names,” 21 pp., 2019.
[25] Lommatzsch v. Tesla, Inc. d/b/a Tesla Motors Inc.
Case No. 2:2018cv00775, in the Third District
Court For the State of Utah, Salt Lake. 2018.
[26] Sz Hua Huang et al. v. Tesla, Inc. d/b/a Tesla
Motors Inc. Case No. 19CV346663, in the Superior
Court of the State of California, Santa Clara. 2019.
[27] Banner v. Tesla, Inc. d/b/a Tesla Motors Inc. Case
No. 50-2019-CA-009962, in the Circuit Court of the
15th Judicial Circuit, Palm Beach County Florida.
2019.
[28] R. Abrams and A. Kurtz, “Joshua Brown, Who Died
in Self-Driving Accident, Tested Limits of His
Tesla,” The New York Times, 2016. [Online].
Available:
https://www.nytimes.com/2016/07/02/business/josh
ua-brown-technology-enthusiast-tested-the-limits-
of-his-tesla.html. [Accessed: 07-Oct-2019].
[29] Tesla, Model S Owner’s Manual, 2019.16.1. 2019.
[30] P. Campbell, “Volvo to put 100 British families in
driverless cars,” Financial Times, 2017. [Online].
Available: https://www.ft.com/content/5b76aba2-
0bc4-11e6-9456-444ab5211a2f#axzz470yNP3TA.
[Accessed: 25-Sep-2019].
[31] S. Glinton, “Tesla Has Begun Making All Its New
Cars Self-Driving: The Two-Way: NPR,” NPR,
2016. [Online]. Available:
https://www.npr.org/sections/thetwo-
way/2016/10/20/498753508/tesla-has-begun-
making-all-its-new-cars-self-driving. [Accessed: 07-
Oct-2019].
[32] Tesla Inc., “All Tesla Cars Being Produced Now
Have Full Self-Driving Hardware,” Tesla Blog,
2016. [Online]. Available:
https://www.tesla.com/de_DE/blog/all-tesla-cars-
being-produced-now-have-full-self-driving-
hardware. [Accessed: 21-Sep-2019].
[33] A. Fleck, “Elon Musk Defends Tesla Following
Latest Self-Driving Accident,” Adweek, 2018.
[Online]. Available:
https://www.adweek.com/digital/elon-musk-
defends-tesla-following-latest-self-driving-
accident/. [Accessed: 07-Oct-2019].
[34] A. J. Hawkins, “Fully driverless cars are on public
roads in Texas,” The Verge, 2018. [Online].
Available:
https://www.theverge.com/2018/5/17/17365188/dri
ve-ai-driverless-self-driving-car-texas. [Accessed:
25-Sep-2019].
[35] J. Saunders, “Shocking moment a Tesla driver is
filmed ASLEEP behind the wheel as his self-driving
car travels at high speeds on the California
interstate,” MailOnline, 2019. [Online]. Available:
https://www.dailymail.co.uk/news/article-
7387827/Tesla-driver-filmed-ASLEEP-wheel-self-
driving-car-travels-high-speeds-LA.html.
[Accessed: 25-Sep-2019].
[36] Tesla, “Support: Autopilot,” 2019. [Online].
Available: https://www.tesla.com/support/autopilot.
[Accessed: 18-Sep-2019].
[37] Cambridge English Dictionary, “Definition:
Autopilot,” Cambridge University Press, 2019.
[Online]. Available:
https://dictionary.cambridge.org/dictionary/english/
autopilot. [Accessed: 18-Sep-2019].
[38] The Center for Auto Safety and Consumer
Watchdog, “Request for Investigation of Deceptive
and Unfair Practices in Advertising and Marketing
of the ‘Autopilot’ Feature Offered in Tesla Motor
Vehicles.” 2018.
[39] T. B. Lee, “Elon Musk announces another price hike
for ‘full self-driving’ package,” ArsTechnica.com,
2019. [Online]. Available:
https://arstechnica.com/cars/2019/07/elon-musk-
announces-another-price-hike-for-full-self-driving-
package/. [Accessed: 21-Sep-2019].
[40] T. B. Lee, “People who paid Tesla $3,000 for full
self-driving might be out of luck,” Ars Technica,
2018. [Online]. Available:
https://arstechnica.com/cars/2018/04/why-selling-
full-self-driving-before-its-ready-could-backfire-
for-tesla/2/. [Accessed: 07-Oct-2019].
[41] Tesla Inc., “Full Self-Driving Hardware on All
Teslas | Tesla,” Tesla Videos, 2016. [Online].
Available: https://www.tesla.com/videos/full-self-
driving-hardware-all-tesla-cars. [Accessed: 07-Oct-
2019].
[42] Euro NCAP, “AUTOMATED DRIVING 2018,
Tesla Model S Highway Assist System,” 2018.
[43] @elonmusk, “Elon Musk on Twitter: ‘Tesla drives
itself (no human input at all) thru urban streets to
highway to streets, then finds a parking spot,’”
Twitter, 2016. [Online]. Available:
https://twitter.com/elonmusk/status/7890191458535
13729. [Accessed: 07-Oct-2019].
[44] CBS News, “Tesla CEO Elon Musk addresses
autopilot system safety concerns: ‘We’ll never be
perfect,’” CBS Interactive Inc., 2018. [Online].
Available: https://www.cbsnews.com/news/tesla-
ceo-elon-musk-addresses-autopilot-safety-
concerns/. [Accessed: 19-Sep-2019].
[45] Bloomberg, “Tesla Test Drive: Model P85D,
Autopilot, Zero to 60,” Bloomberg L.P., 2014.
[Online]. Available:
https://www.bloomberg.com/news/videos/2014-10-
10/driving-tesla-with-musk-zero-to-60-and-testing-
autopilot. [Accessed: 19-Sep-2019].
[46] Dean Sheikh et al. v. Tesla Inc. d/b/a Tesla Motors
Inc. Case No. 5:17-cv-02193, in the U.S. District
Court for the Northern District of California, San
Jose Division. 2017.
[47] D. Coldewey, “Tesla settles class action suit over
Autopilot claims for $5M | TechCrunch,”
TechCrunch.com, 2018. [Online]. Available:
https://techcrunch.com/2018/05/25/tesla-settles-
class-action-suit-over-autopilot-claims-for-5m/.
[Accessed: 22-Sep-2019].
[48] S. Youn, “Tesla sued for ‘defective’ Autopilot in
wrongful death suit of Florida driver who crashed
into tractor trailer,” ABC News, 2019. [Online].
Available:
https://abcnews.go.com/Technology/tesla-sued-
defective-autopilot-wrongful-death-suit-
florida/story?id=64706707. [Accessed: 06-Sep-
2019].
[49] National Transportation Safety Board, “Driver
Errors, Advanced Driver Assistance System Design,
Led to Highway Crash,” 2019. [Online]. Available:
https://www.ntsb.gov/news/press-
releases/Pages/NR20190904.aspx. [Accessed: 22-
Sep-2019].
[50] E. Taylor, “Mercedes rejects claims about
‘misleading’ self-driving car ads,” Reuters, 2016.
[Online]. Available:
https://www.reuters.com/article/us-mercedes-
marketing/mercedes-rejects-claims-about-
misleading-self-driving-car-ads-idUSKCN1081VV.
[Accessed: 09-Oct-2019].
[51] F. Lambert, “Mercedes pulls ‘self-driving car’
advert following concerns over Tesla’s use of
‘Autopilot,’” Electrek, 2016. [Online]. Available:
https://electrek.co/2016/07/29/mercedes-pull-self-
driving-car-claim-advert-tesla-autopilot/. [Accessed:
09-Oct-2019].
[52] Federal Trade Commission, “Federal Trade
Commission Act, Section 5: Unfair or Deceptive
Acts or Practices Background,” 2019.
[53] Partners for Automated Vehicle Education, “About |
PAVE Campaign,” 2019. [Online]. Available:
https://pavecampaign.org/about/. [Accessed: 27-
Sep-2019].
[54] “Let’s Talk Self-Driving,” Waymo, LLC, 2019.
[Online]. Available: https://letstalkselfdriving.com/.
[Accessed: 03-Oct-2019].
[55] IEEE, “A Vision for Prioritizing Human Well-being
with Autonomous and Intelligent Systems, First
Edition,” IEEE Glob. Initiat. Ethics Auton. Intell.
Syst., p. 292, 2019.
[56] C. Mullen, “Reaching Zero Crashes: A Dialogue on
the Role of Current Advanced Driver Assistance
Systems,” in National Transportation Safety Board,
2016.
[57] V. A. Banks, A. Eriksson, J. O’Donoghue, and N.
A. Stanton, “Is partially automated driving a bad
idea? Observations from an on-road study,” Appl.
Ergon., vol. 68, pp. 138–145, 2018.
[58] H. Abraham, H. McAnulty, B. Mehler, and B.
Reimer, “Case Study of today’s automotive
dealerships: Introduction and delivery of advanced
driver assistance systems,” Transp. Res. Rec., vol.
2660, no. August, pp. 7–14, 2017.
[59] International Organization for Standardization, “ISO
9241-110:2006 Ergonomics of human-system
interaction Part 110: Dialogue principles.”
International Standards Organization, p. 22, 2006.
[60] Society of Automotive Engineers (SAE)
International, “SAE International Releases Updated
Visual Chart for Its ‘Levels of Driving Automation’
Standard for Self-Driving Vehicles,” SAE.org, 2018.
[61] Cadillac, “CT6 Owner’s Manual.” 2019.
[62] T. Mousel, A. Treis, and IEE S.A., “Hands Off
Detection Requirements for UN R79 Regulated
Lane Keeping Assist Systems,” 2017.
[63] J. Stewart, “Elon Musk Abuses Tesla Autopilot on
60 Minutes,” Wired.com, 2018. [Online]. Available:
https://www.wired.com/story/elon-musk-tesla-
autopilot-60-minutes-interview/. [Accessed: 22-Sep-
2019].
[64] G. L. Urban, “The Trust Imperative,” 2003.
[65] World Health Organization, “Global status report on
road safety 2018,” World Health Organization,
2019.
Article
Full-text available
Current research on autonomous vehicles tends to focus on making them safer through policies to manage innovation, and integration into existing urban and mobility systems. This article takes social, cultural and philosophical approaches instead, critically appraising how human subjectivity, and human-machine relations, are shifting and changing through the application of big data and algorithmic techniques to the automation of driving. 20th century approaches to safety engineering and automation—be it in an airplane or automobile-have sought to either erase the human because she is error-prone and inefficient; have design compensate for the limits of the human; or at least mould human into machine through an assessment of the complementary competencies of each. The ‘irony of automation’ is an observation of the tensions emerging therein; for example, that the computationally superior and efficient machine actually needs human operators to ensure that it is working effectively; and that the human is inevitably held accountable for errors, even if the machine is more efficient or accurate. With the emergence of the autonomous vehicle (AV) as simultaneously AI/ ‘robot’, and automobile, and distributed, big data infrastructural platform, these beliefs about human and machine are dissolving into what I refer to as the ironies of autonomy. For example, recent AV crashes suggest that human operators cannot intervene in the statistical operations underlying automated decision-making in machine learning, but are expected to. And that while AVs promise ‘freedom’, human time, work, and bodies are threaded into, and surveilled by, data infrastructures, and re-shaped by its information flows. The shift that occurs is that human subjectivity has socio-economic and legal implications and is not about fixed attributes of human and machine fitting into each other. Drawing on Postphenomenological concepts of embodiment and instrumentation, and excerpts from fieldwork, this article argues that the emergence of AVs in society prompts a rethinking of the multiple relationalities that constitute humanity through machines.
Conference Paper
Full-text available
The increasing number of interactions with automated systems has sparked the interest of researchers in trust in automation because it predicts not only whether but also how an operator interacts with an automation. In this work, a theoretical model of trust in automation is established and the development and evaluation of a corresponding questionnaire (Trust in Automation, TiA) are described. Building on the model of organizational trust by Mayer, Davis, and Schoorman (1995) and the theoretical account by Lee and See (2004), a model for trust in automation containing six underlying dimensions was established. Following a deductive approach, an initial set of 57 items was generated. In a first online study, these items were analyzed and based on the criteria item difficulty, standard deviation, item-total correlation, internal consistency, overlap with other items in content, and response quote, 40 items were eliminated and two scales were merged, leaving six scales (Reliability/Competence, Understandability/Predictability, Propensity to Trust, Intention of Developers, Familiarity, and Trust in Automation) containing a total of 19 items. The internal structure of the resulting questionnaire was analyzed in a subsequent second online study by means of an exploratory factor analysis. The results show sufficient preliminary evidence for the proposed factor structure and demonstrate that further pursuit of the model is reasonable but certain revisions may be necessary. The calculated omega coefficients indicated good to excellent reliability for all scales. The results also provide evidence for the questionnaire's criterion validity: Consistent with the expectations, an unreliable automated driving system received lower trust ratings as a reliably functioning system. In a subsequent empirical driving simulator study, trust ratings could predict reliance on an automated driving system and monitoring in form of gaze behavior. Possible steps for revisions are discussed and recommendations for the application of the questionnaire are given. It has become impossible to evade automation: Thanks to the technological progress made, many functions that were previously carried out by humans can now be fully or partially replaced by machines (Parasuraman, Sheridan, & Wickens, 2000). As a consequence, they are taking over more and more functions in work and leisure environments of all kinds in our day-today lives. The resulting increase in the number of interactions with automated systems has sparked the interest of human factors researchers to investigate trust in automation with the overall goal to ensure safe and
Article
Full-text available
The automation of longitudinal and lateral control has enabled drivers to become “hands and feet free”, but they are required to remain in an active monitoring state, with a requirement to resume manual control if needed. This represents the single largest function-allocation problem in vehicle automation, as the literature suggests that humans are notoriously inefficient at prolonged monitoring tasks. To further explore whether partially automated driving solutions can appropriately support the driver in this new monitoring role, video observations were collected as part of an on-road study using a Tesla Model S operated in Autopilot mode. A thematic analysis of the video data suggests that drivers are not being properly supported in adhering to their new monitoring responsibilities and instead demonstrate behaviour indicative of complacency and over-trust. These attributes may encourage drivers to take more risks whilst out on the road.
Conference Paper
Full-text available
Vehicle technology naming has the potential to influence drivers’ expectations (mental models) of the level of autonomous operation supported by the semi-automated technologies that are rapidly becoming available in new vehicles. If expectations diverge from actual design specifications, it may be harder for drivers to develop trust in or clear expectations of these systems, mitigating their potential benefits. Alternatively, over-trust and misuse due to misunderstanding increase the potential for adverse events. An online survey investigated whether and how the names of advanced driver assistance systems (ADAS) and automation features relate to expected automation levels. Systems with “Cruise” in their names were associated with lower levels of automation. “Assist” systems appeared to create confusion over whether the driver is assisting the system or vice versa. The survey findings underscore the importance of vehicle technology naming and its impact on drivers’ expectations of how responsibility for individual driving functions is divided between driver and system.
Article
Full-text available
Vehicle manufacturers have developed advanced driver assistance systems (ADASs) to reduce driver workload and enhance safety. These systems reach consumers through dealerships not owned by the manufacturers. Limited research is available on how dealerships provide consumers with information and training on ADASs. In an exploratory study, semi-structured blind interviews with salespeople at 18 dealerships in the Boston, Massachusetts, area were conducted across six vehicle brands, in the context of a potential vehicle purchase and seeking information on new safety technologies. Although some dealerships were making concerted efforts to introduce and educate customers on ADASs, a number of the salespeople interviewed were not well positioned to provide adequate information to their customers. In select instances, salespeople explicitly provided inaccurate information on safety-critical systems. The dealerships in the sample representing mass-market brands (Ford and Chevrolet) were the poorest performers. Sales staff at Subaru dealers were well trained and had print and digital content to drive consumer engagement. Educational staff, or “geniuses,” at BMW dealers presented a potentially innovative way of segmenting the sales process from technology education. Absent some introduction to and education on these technologies at dealerships, consumers may remain underinformed or misinformed about the disruptive safety technologies rapidly being introduced across the vehicle fleet.
Article
Full-text available
Despite enthusiastic speculation about the potential benefits of self-driving cars, to date little is known about the factors that will affect drivers’ acceptance or rejection of this emerging technology. Gaining acceptance from end users will be critical to the widespread deployment of self-driving vehicles. Long-term acceptance may be harmed if initial acceptance is built upon unrealistic expectations developed before people interact with these systems. A brief (24-item) measurement scale was created to assess acceptance of self-driving cars. Before completing the scale, participants were randomly assigned to read short vignettes that featured either a realistic or an idealistic description of a friend’s experiences during the first six months of owning a self-driving car. A small but significant effect showed that reading an idealized portrayal in the vignette resulted in higher acceptance of self-driving cars. Potential factors affecting user acceptance of self-driving cars are discussed. Establishing realistic expectations about the performance of automation before users interact with self-driving cars may be important for long-term acceptance.
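As a rough illustration of the between-groups comparison this abstract describes, the sketch below simulates acceptance scores for the two vignette conditions and tests the difference with an independent-samples t-test, reporting Cohen's d. All numbers are invented for illustration and do not reflect the study's data or its actual analysis.

```python
# Sketch: comparing acceptance scores across two randomly assigned
# vignette conditions (realistic vs. idealistic). Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
realistic = rng.normal(loc=4.0, scale=1.0, size=60)   # acceptance, realistic vignette
idealistic = rng.normal(loc=4.4, scale=1.0, size=60)  # slightly higher, idealistic vignette

t, p = stats.ttest_ind(idealistic, realistic)

# Cohen's d from the pooled standard deviation (equal group sizes)
pooled_sd = np.sqrt((realistic.var(ddof=1) + idealistic.var(ddof=1)) / 2)
d = (idealistic.mean() - realistic.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```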
Article
Each year, millions of automobile crashes occur when drivers fail to notice and respond to conflicts with other vehicles, bicyclists, and pedestrians. Today, manufacturers race to deploy automation technologies to help eliminate these mishaps. To date, little effort has been made to educate drivers about how these systems work or how they affect driver behavior. Driver education for automated systems amounts to additional pages in an owner’s manual that is known to be a seldom-used glove box reference. In this article, we review the history of automation deployed in the airline cockpit decades ago. We describe how automation helped avoid many common crash scenarios but at the same time gave rise to new kinds of crashes. It was only following a concerted effort to educate pilots about the automation, about themselves, and about the concept of a human-automation team that we reached the near-zero crash rate we enjoy today. Drawing parallels between the automation systems, the available pilot and driver research, and operational experience in both airplanes and automobiles, we outline knowledge standards for drivers of partially automated cars and argue that the safe operation of these vehicles will be enhanced by drivers’ incorporation of this knowledge in their everyday travels.
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Article
While automated vehicle technology progresses, potentially leading to a safer and more efficient traffic environment, many challenges remain within the area of human factors, such as user trust in automated driving (AD) vehicle systems. The aim of this paper is to investigate how an appropriate level of user trust in AD vehicle systems can be created via human–machine interaction (HMI). A guiding framework for implementing trust-related factors into the HMI interface is presented. This trust-based framework incorporates usage phases, AD events, trust-affecting factors, and levels explaining each event from a trust perspective. Based on the research findings, the authors recommend that HMI designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single, “isolated” events, for example by understanding that trust formation is a dynamic process that starts long before a user’s first contact with the system and continues long thereafter. Furthermore, factors affecting trust change both during user interactions with the system and over time; thus, HMI concepts need to be able to adapt. Future work should be dedicated to understanding how trust-related factors interact, as well as to validating and testing the trust-based framework.
Article
The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false understanding of the system as infallible may provoke automation misuse and can lead to severe consequences in the case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as previous studies have shown, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information; such a display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.
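To make the quantile-regression analysis this abstract mentions concrete, the sketch below regresses time to collision (TTC) on the display condition with statsmodels. The variable names and simulated data are hypothetical, and the study's multilevel structure is omitted; this illustrates the technique, not the authors' actual model.

```python
# Sketch: quantile regression of time to collision (TTC) on a binary
# display condition. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 80
df = pd.DataFrame({
    "uncertainty_display": rng.integers(0, 2, size=n),  # 0 = no info, 1 = uncertainty symbol
})
# Simulated TTC in seconds, slightly longer with the uncertainty display
df["ttc"] = 2.0 + 0.5 * df["uncertainty_display"] + rng.gamma(2.0, 0.5, size=n)

# Fit at a low quantile (q = 0.25) to target the riskiest (shortest-TTC) cases;
# q = 0.5 would give a median regression.
model = smf.quantreg("ttc ~ uncertainty_display", df).fit(q=0.25)
print(model.summary())
```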