Exploring Trust in Self-Driving Vehicles Through Text Analysis

John D. Lee, University of Wisconsin-Madison, USA, and Kristin Kolodge, J.D. Power, Troy, Michigan, USA

Human Factors, DOI: 10.1177/0018720819872672

Address correspondence to John D. Lee, Department of Industrial & Systems Engineering, College of Engineering, University of Wisconsin-Madison, 1513 University Avenue, Madison, WI 53706, USA; e-mail: jdlee@engr.wisc.edu.
Objective: This study examined attitudes toward self-
driving vehicles and the factors motivating those attitudes.
Background: Self-driving vehicles represent poten-
tially transformative technology, but achieving this potential
depends on consumers’ attitudes. Ratings from surveys esti-
mate these attitudes, and open-ended comments provide an
opportunity to understand their basis.
Method: A nationally representative sample of 7,947
drivers in 2016 and 8,517 drivers in 2017 completed the J.D.
Power U.S. Tech Choice StudySM, which included a rating for
level of trust with self-driving vehicles and associated open-
ended comments. These open-ended comments are qualita-
tive data that can be analyzed quantitatively using structural
topic modeling. Structural topic modeling identifies common
themes, extracts prototypical comments for each theme, and
assesses how the survey year and rating affect the prevalence
of these themes.
Results: Structural topic modeling identified 13 topics,
such as “Tested for a long time,” which was strongly asso-
ciated with positive ratings, and “Hacking & glitches,” which
was strongly associated with negative ratings. The topics of
“Self-driving accidents” and “Trust when mature” were more
prominent in 2017 compared with 2016.
Conclusion: Structural topic modeling reveals reasons
underlying consumer attitudes toward vehicle automation.
These reasons align with elements typically associated with
trust in automation, as well as elements that mediate per-
ceived risk, such as the desire for control as well as societal,
relational, and experiential bases of trust.
Application: The analysis informs the debate concerning
how safe is safe enough for automated vehicles and provides
initial indicators of what makes such vehicles feel safe and
trusted.
Keywords: perceived risk, dread risk, vehicle automation,
survey analysis, risk analysis, consumer acceptance
INTRODUCTION
Self-driving vehicles have the potential to
transform transportation. Because transporta-
tion plays such a central role in employment,
lifestyle, health, and even the structure of cit-
ies, this transformation will have widespread
economic and social consequences. Like other
major technology-induced transformations, con-
sumers’ attitudes toward the technology will
strongly influence its ultimate success or failure.
Modern vehicles already include sophisti-
cated automation, such as adaptive cruise con-
trol and lane-keeping assistance. These systems
partially automate some aspects of driving, but
the driver remains responsible and in control.
Self-driving technology might remove the steer-
ing wheel and pedals and transform drivers to
riders. Understanding the factors that affect
acceptance of this new role represents an impor-
tant concern for designers and policymakers.
Trust has emerged as a critical variable medi-
ating the relationship between people and tech-
nology across domains that include process con-
trol automation, human–robot interaction, and
decision aids (Hancock et al., 2011; Hoffman
et al., 2009; Lee & See, 2004). With vehicle
automation, trust is relevant in different ways for
different types of automation. For automation
that assists drivers and requires them to remain
responsible for driving, overtrusting the automa-
tion leads to slower and less effective interven-
tions (Beggiato & Krems, 2013; Verberne, Ham,
& Midden, 2015). For self-driving vehicles—
where the vehicle is responsible for driving—
the complexity, risk, and limited opportunity for
control might lead to undertrusting the automa-
tion (Claybrook & Kildare, 2018; Kaur &
Rampersad, 2018). Lack of trust may leave driv-
ers susceptible to dread risk—a heightened feel-
ing of risk when the risk is uncontrollable,
not understandable, and has dire consequences
(Gigerenzer, 2004; Slovic, 1987; Sunstein &
Zeckhauser, 2010). Dread risk leads to dispro-
portionate negative responses to adverse events
and can undermine technology acceptance.
Thus, cultivating trust and mitigating dread risk
are critical for ensuring the long-term success of
self-driving vehicles.
As with other types of automation, three
aspects of self-driving technology will likely
establish its trustworthiness and will inform
feelings of trust: its purpose, the process under-
lying its operation and creation, and its perfor-
mance over time (Lee & See, 2004). These
aspects of automation can be inferred through
experience with the system itself—dynamic
learned trust—or with similar systems—initial
learned trust (Hoff & Bashir, 2015). This sug-
gests that initial trust of self-driving vehicles
will depend on people’s experience with analo-
gous systems, most likely their own experiences
as a driver interacting with increasingly sophis-
ticated vehicle technology and their experience
with computers.
Many studies have considered trust in par-
tially automated vehicles, but existing research
regarding people’s trust in self-driving vehicles
is relatively limited. Several simulator studies
have considered how trust depends on lane
tracking precision (Price, Venkatraman, Gibson,
Lee, & Mutlu, 2016), and how trust and comfort
depend on the aggressiveness of vehicle control
algorithms (Bellem, Thiel, Schrauf, & Krems,
2018; Lee, Liu, Domeyer, & Dinparastdjadid, in
press). Others have considered how interface
details affect trust, such as verbal messages that
state why the automation responded as it did
(Koo et al., 2015). Several surveys have focused
on self-driving vehicles and found that trust is
strongly associated with the intention to use self-driving vehicles, that trust moderates perceived
risk (Choi & Ji, 2015), and that trust depends on
the reliability of the technology (Kaur & Ramp-
ersad, 2018). A cross-national survey also found
that trust was an important determinant of driv-
erless vehicle acceptance (Nordhoff, de Winter,
Kyriakidis, van Arem, & Happee, 2018).
Surveys provide a valuable method to assess
initial trust and estimate how people might
respond to self-driving vehicles during their ini-
tial deployment. Surveys frequently include
quantitative data, such as Likert-type ratings,
along with qualitative data, such as open-ended
comments. Ratings can be analyzed with tradi-
tional statistical methods to assess attitudes, but
such analysis fails to explain the basis for those
attitudes. Open-ended comments can explain the
basis of those attitudes but present a challenge
for analysis. Systematic analysis of comments
typically requires hand coding and qualitative
analysis techniques. These techniques are time-
consuming and depend on the personal and the-
oretical perspectives of the analysts (Mays &
Pope, 2000).
Text analysis provides quantitative methods
to analyze open-ended survey data (Roberts
et al., 2014) and might reveal what factors
underlie consumer trust in self-driving vehicles.
Techniques, such as topic modeling, treat words
as data and make it possible to extract insights
from hundreds, or hundreds of thousands of
comments, with the efficiency and transparency
that traditional statistical techniques provide for
ratings data. Text analysis extracts meaning
from a collection of documents by estimating
the latent or unobserved topics in documents
(Dumais & Landauer, 1997).
Topic modeling identifies topics as distribu-
tions of words, with those words most related to
the topic having higher probabilities. A distribu-
tion of these topics describes each document,
with those topics most related to the document
having higher probabilities (Blei, 2012). Because
topic models are mixed membership models,
each word contributes to multiple topics and
each topic contributes to multiple documents.
Topic modeling identifies the words associated
with topics, the topics associated with docu-
ments, and the overall proportion of each topic
across the documents. For our analysis, the doc-
uments are open-ended comments that accom-
pany ratings and the topics are themes that
appear across these comments.
Typically, topic modeling assumes that topics
are independent. It also assumes that the proba-
bility distribution of the words in a topic is inde-
pendent of any metadata that might describe the
comment, such as the value of the rating associated
with the comment. Structural topic modeling
builds on topic modeling by relaxing these
assumptions, which makes it possible to examine
the effect of covariates on the distribution of top-
ics across documents and the distribution of
words within topics (Roberts, Stewart, & Airoldi,
2016). For our analysis, the covariates are the
Likert-type rating and the year of the survey.
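To make these assumptions concrete, here is a compact sketch of the generative model (our notation, not taken from the article). A standard topic model draws each comment's topic proportions from a common prior and each word from its assigned topic,

\theta_d \sim \mathrm{Dirichlet}(\alpha), \qquad z_{d,n} \sim \mathrm{Multinomial}(\theta_d), \qquad w_{d,n} \sim \mathrm{Multinomial}(\beta_{z_{d,n}}),

whereas structural topic modeling replaces the common prior with one that depends on the comment's covariates X_d (here, the trust rating and the survey year),

\theta_d \sim \mathrm{LogisticNormal}(X_d \gamma, \Sigma),

so the expected prevalence of each topic can shift with the rating and the year.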
This article uses structural topic modeling to
analyze the open-ended comments associated
with Likert-type ratings of trust in self-driving
vehicles. It extends a previous analysis con-
ducted on a 2016 survey with data from a 2017
survey (Lee & Kolodge, 2018), which makes it
possible to assess how the attitudes of a population
changed year over year. Our general hypothesis
is that the topics will reveal aspects of trust and
risk that underlie drivers’ attitudes toward self-
driving vehicles. The association of topics with
ratings can help explain why some people rate
self-driving vehicles positively and others rate
them negatively. The association of topics with
the year of the survey can reveal changes in how
people think about self-driving vehicles.
METHOD
The J.D. Power 2016 U.S. Tech Choice Stu-
dySM (fielded in February and March) was an
online panel survey focusing on vehicle owners
who had purchased or leased a new vehicle in the previous 5 years, yielding 7,947 respondents, and was repeated in 2017 (fielded in January and February) with 8,517 respondents. Both
samples were nationally representative, and no
effort was made to include the same respondents
in both surveys. Panelists were provided with
an incentive from their respective panel com-
pany. Sample quotas were established based on
vehicle make. The U.S. Tech Choice StudySM
examined consumer awareness, interest, and
price elasticity of various future and emerging
technologies. The survey items covered tech-
nology categories including: Entertainment &
Connectivity, Comfort & Convenience, Driving
Assistance, Collision Protection, Navigation,
and Energy Efficiency. The survey also included
items regarding consumer interest in emerging
concepts such as alternative mobility solutions,
cybersecurity threats, and trust in automated
technologies.
The purpose of the study was to examine
consumers’ interest and purchase intention with
numerous advanced technologies, including
full self-driving automation. Survey questions
included both quantitative ratings and open-ended comments to understand consumer sentiment. There
were two parts to each item: a Likert-type scale
response with the options “Definitely would not,” “Probably would not,” “Probably would,” “Definitely would,” and “Don’t know,” followed by a
free-form text box. Our analysis focuses on the
ratings and open-ended responses to the ques-
tion: “How much would you trust the ability of a
vehicle equipped with self-driving technology to
operate without a human driver’s input?” Of the
16,464 surveys, 15,568 (7,459 for 2016 and
8,109 for 2017) had valid responses to this item,
which excluded the “Don’t know” responses; of these, 6,489 (3,105 for 2016 and 3,384 for 2017) had open-ended responses with sufficient text for analysis.
Open-ended comments need to be processed
before analysis because they include many typo-
graphic errors and misspellings. In addition,
similar words or phrases are represented differ-
ently, such as “self driving,” “self-driving,” and
“selfdriving” or “tech” and “technology.” This
processing also includes converting all words to lower case; removing stop words, numbers, and punctuation; spellchecking; and stemming.
Stemming removes the final letters of the word
to reduce words with a similar meaning to a
common stem, such as “vehicles” to “vehicle”
and “hacking” to “hack.” Words that occurred
fewer than 12 times across all the comments
were removed. Only comments with more than
nine characters were retained leaving 6,489
comments for analysis. These remaining com-
ments varied substantially in length, with a 25th,
50th, and 75th percentile of 24, 46, and 84 char-
acters respectively. Such preprocessing can
affect the composition of the topics, particularly with
short survey comments. Here, increasing the fre-
quency threshold of words and the minimum
length of comments led to more stable topics.
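As an illustration only, the preprocessing steps described above can be sketched with the stm package's built-in text processing; the data frame and column names (survey, comment, rating, year) are placeholders, and spell correction and phrase normalization (e.g., "self driving" to "self-driving") would be applied before this step:

library(stm)

# Lowercase, remove stop words, numbers, and punctuation, and stem the comments;
# metadata carries the covariates used later as prevalence covariates.
processed <- textProcessor(documents = survey$comment,
                           metadata  = survey[, c("rating", "year")],
                           lowercase = TRUE,
                           removestopwords = TRUE,
                           removenumbers = TRUE,
                           removepunctuation = TRUE,
                           stem = TRUE)

# Drop rare words (the article used a frequency threshold of 12) and any
# comments left empty after processing.
prepped <- prepDocuments(processed$documents, processed$vocab,
                         processed$meta, lower.thresh = 12)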
The statistical programming language R was
used for the analysis. The ggplot2 package was
used for graphs (Wickham, 2016), tidytext for
initial text processing (Silge & Robinson,
2016), and the stm package for the structural
topic modeling (Roberts et al., 2014). Com-
ments form the unit of analysis for this paper. The
stm package estimates topics and the prevalence
of these topics across comments and covariates,
such as ratings or the year of the survey. It also
produces point estimates of the prevalence for
levels of the covariates and credible intervals
for these point estimates based on the posterior
distribution of the parameter estimates.
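A minimal sketch of this estimation step, assuming the prepped object from the preprocessing sketch above; the initialization and other settings shown here are illustrative rather than those of the original analysis:

library(stm)

prepped$meta$year <- factor(prepped$meta$year)   # 2016 vs. 2017

# Fit a 13-topic structural topic model with the rating and survey year
# as prevalence covariates.
fit <- stm(documents  = prepped$documents,
           vocab      = prepped$vocab,
           K          = 13,
           prevalence = ~ rating + year,
           data       = prepped$meta,
           init.type  = "Spectral")

# Estimate how the prevalence of each topic varies with the covariates,
# propagating uncertainty from the posterior.
eff <- estimateEffect(1:13 ~ rating + year, stmobj = fit,
                      metadata = prepped$meta, uncertainty = "Global")
summary(eff)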
RESULTS
Likert-Type Ratings of Trust in
Automated Vehicles
Figure 1 summarizes the Likert-type ratings
and shows that people generally do not trust
self-driving vehicles. Ordinal Bayesian regres-
sion using brms (Bürkner, 2017) showed that
the proportion of people who “definitely would
not” trust self-driving vehicles was greater in
2017 than it was in 2016. The analysis indicates
a Bayes factor of 6.3 for survey year, which rep-
resents “positive” evidence of an effect (Kass &
Raftery, 1995).
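A sketch of how such an ordinal model and Bayes factor can be computed with brms; the data frame trust_data, its column names, and the default priors shown here are placeholders rather than the specification used in the article:

library(brms)

# rating is an ordered factor:
# "Definitely would not" < "Probably would not" < "Probably would" < "Definitely would"
m_year <- brm(rating ~ year, data = trust_data,
              family = cumulative("logit"),
              save_pars = save_pars(all = TRUE))  # retain draws needed for bridge sampling
m_null <- brm(rating ~ 1, data = trust_data,
              family = cumulative("logit"),
              save_pars = save_pars(all = TRUE))

# Bayes factor for the survey-year effect (proper priors are needed in practice)
bayes_factor(m_year, m_null)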
Topic Model Selection
Like other statistical modeling approaches,
topic model selection involves an iterative pro-
cess of model verification and refinement. This
process typically involves expert evaluation of
the words that comprise each topic and the doc-
uments that are associated with those topics, and
so inspection of models through visualization is
a critical aspect of the model selection process
(Chuang, Manning, & Heer, 2012). The primary
model parameter is the number of topics used
to describe the documents. This can range from
50 to 100 topics for a corpus of journal articles,
but may be only a few topics for open-ended
survey responses (Roberts et al., 2014). Model
fitting begins with selecting a range of the num-
ber of topics that reflects the research question
and then assessing the associated models with
several criteria.
No precise criteria exist for selecting the
number of topics to represent the data, but four
metrics generally guide model selection: exclu-
sivity, coherence, residual variance, and held-
out likelihood. Exclusivity refers to the degree
that each topic is composed of terms that are not
shared with other topics. Coherence refers to the
degree that similar words comprise a topic. Typ-
ically, increasing the number of topics reduces
coherence, but increases exclusivity. Similar to a
typical regression analysis, the residual variance
indicates deviations from the topic model and
smaller values of the residual indicate a better model fit.

Figure 1. Response to the survey item “How much would you trust the ability of a vehicle equipped with self-driving technology to operate without a human driver’s input?” The points show the mean values and 95% confidence interval of the posterior distribution of the ordinal Bayesian regression.

The held-out likelihood reflects how
well the model predicts data that were not
included when estimating its parameters and
larger values indicate a superior model. Typi-
cally, increasing the number of topics will reduce
residual variance, but can lead to overfitting,
which is indicated by corresponding reductions
in the held-out likelihood.
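These diagnostics can be computed across a range of candidate models with the stm package; a sketch, reusing the hypothetical prepped object and covariates from the Method section (the article evaluated models with 8 to 24 topics):

library(stm)

# For each K, fit a model and record held-out likelihood, residuals,
# semantic coherence, and exclusivity.
k_search <- searchK(prepped$documents, prepped$vocab, K = 8:24,
                    prevalence = ~ rating + year, data = prepped$meta)

k_search$results   # one row of diagnostics per candidate number of topics
plot(k_search)     # plots the diagnostics against the number of topics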
These four considerations guide the choice of
the number of topics, but ultimately the choice
depends on the judgment of the analyst as to
whether the topics reveal useful information
about the comments. We considered models
with eight to 24 topics. The 13-topic model was
chosen because it had much greater exclusivity
with only slightly more coherence compared
with the 11- and 12-topic models, and it had
greater coherence with only slightly less exclu-
sivity compared with the 14- and 15-topic mod-
els. Considering the residual and held-out likeli-
hood, the 13-topic model had a higher held-out
likelihood than all the other models and a lower resid-
ual than all but the 14-topic model. The topics
from the 13-topic model were also judged to be
meaningful.
Figure 2 shows the exclusivity, coherence,
residual variance, and held-out likelihood for the
models considered, and the lines represent
Pareto frontiers. Pareto frontiers show a set of
the models that trade-off the performance crite-
ria differently. For example, the 15-topic model
has high coherence, but relatively low exclusiv-
ity. The 17-topic model has lower coherence, but
higher exclusivity. Both of these models are bet-
ter than the 14- and 16-topic models. For the
graph on the left, points below the line represent
inferior models: for any point under the line, there is an alternative that is superior in terms of both exclusivity and coherence. For the
graph on the right, only the 13- and 14-topic
models merit consideration—all other models
have greater residual and lower held-out likeli-
hood. The multiple criteria needed to select a
model highlight the complexity of model selec-
tion that is often masked by the allure of p values
as a seemingly objective basis for model selec-
tion (Wasserstein & Lazar, 2016).
Topic Content
Structural topic modeling identifies topics
in an unsupervised manner—it discovers topics
from the data rather than confirms predefined
categories. The meaning of the topics must be
identified by inspecting the terms that comprise
the topics and exemplar comments that include
a high proportion of each topic.
Several measures identify terms that define
topics: Prob, FREX, Lift, and Score.

Figure 2. Pareto frontier for selecting the number of topics to include in the analysis. Each point represents a model with the indicated number of topics. Points on the line represent dominant alternatives.

Prob is the
probability that a term occurs in the topic. FREX
(Frequency and Exclusivity) identifies terms
that occur frequently in a topic and are also
exclusive to that topic. Like FREX, Lift weights
words more heavily if they occur infrequently in
other topics. Score weights words by dividing
the logarithm of their frequency by the loga-
rithm of their frequency in other topics. Although
FREX, Lift, and Score all consider how words
occur in a topic, they do so in slightly different
ways, and so each selects somewhat different words to define the topics.
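These keyword rankings and the exemplar comments shown in Table 1 can be extracted directly from a fitted model; a sketch using the hypothetical fit object from the Method section, where comments_kept is assumed to hold the original text of each comment retained after preprocessing:

library(stm)

# Highest-ranked words per topic under Prob, FREX, Lift, and Score
labelTopics(fit, n = 7)

# Exemplar comments that contain a high proportion of a given topic
# (here, three comments for Topic 5, "Hacking & glitches")
findThoughts(fit, texts = comments_kept, topics = 5, n = 3)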
The numeric values of Prob, FREX, Lift, and
Score were used to rank the words, and Table 1
shows the seven highest-ranked words for each
topic. The right column shows three exemplar
open-ended comments for each topic that have
a high proportion of the topic. Based on key-
words and comments, we identified a label for
each topic (Roberts et al., 2014). For example,
“Works good?” was used to label comments
suggesting such systems often work well, but
can be prone to malfunctions in some situations.
The words “work,” “good,” and “malfunctions”
all feature prominently as keywords and so
“Works good” with a question mark was
selected as a label. Overall, the terms and com-
ments define topics that address separate and
distinct bases for consumers’ attitudes toward
self-driving vehicles. These topics provide a
window into how people think about self-driv-
ing vehicles; some of which suggest an attitude
of optimism and acceptance and others, skepti-
cism and distrust.
Topic Links
Topics tend to co-occur within comments,
and this co-occurrence defines the correlation
between topics. Each comment includes a proportion of each topic, but most comments are primarily associated with one topic and include more than 15% of only one or two other topics.
For example, the comment “because people
r dumb and talk on phones while driving so
a robot driver cant be any worse” is primar-
ily comprised of Topic 7 “Scary drivers and
robots” but also includes some of Topic 8 “Safer
than humans.” Figure 3 shows the correla-
tions with an absolute value greater than .30 as
links between topics, and the size of the nodes
reflects the prevalence of each topic across all
the comments. The nodes are positioned with
multi-dimensional scaling. Three notable areas
of this diagram merit attention. The strongly
linked topics of “Trust when mature,” “Technol-
ogy improving,” and “Tested for a long time”
suggest a conditional, but optimistic view of
self-driving vehicles that is predicated on com-
prehensive testing and technological advances.
Two topics suggest that some people think
about safety in a relative way and believe it to
be “Safer than human” and worry about “Scary
drivers and robots.” Two other topics reflect a
discomfort with the prospect of giving control
over to automation: “Feel uncomfortable” and
“Control until proven.”
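The links in Figure 3 can be derived from the correlations between topic proportions; a sketch with the stm package, again using the hypothetical fit object (the cutoff mirrors the .30 threshold mentioned above):

library(stm)

# Correlations between topic proportions across comments; weaker
# correlations are truncated so that only stronger links form edges.
topic_graph <- topicCorr(fit, method = "simple", cutoff = 0.30)

# Plot topics as nodes connected by their correlation-based links
plot(topic_graph)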
Association of Ratings and the Year
of the Survey and Topic Prevalence
As the node size in the network diagram
indicates, the prevalence of the topics is not
uniform. Figure 4 shows that the topic of “Safer
than humans” is more prevalent than the top-
ics of “Hacking & glitches” and “Computers
make mistakes.” The vertical line indicates
the mean prevalence—the prevalence if all the
topics were equally represented—and visually
indicates which topics are over- and underrep-
resented.
The general hypothesis guiding this analysis
is that the topics might describe the basis for the
ratings and describe how drivers’ attitudes
change over time. Figure 5 shows the difference
in prevalence of each topic, which was calcu-
lated by subtracting the mean prevalence of each
topic from the prevalence of the topic in each
year. The difference in prevalence for many top-
ics is similar for the two years of the survey, but
topics associated with trust and self-driving
vehicle crashes were more prevalent in 2017
than 2016 and those associated with computer
mistakes and how well self-driving vehicles
work were less common. For the topic of “Trust
when mature,” the topic prevalence changes
from approximately 6.6% in 2016 to 8.4% in
2017—a 27% increase in the prevalence of this
topic from 2016 to 2017.
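The year-over-year differences shown in Figure 5 correspond to a contrast on the year covariate; a sketch of how such a contrast can be plotted from the hypothetical eff object estimated in the Method section:

library(stm)

# Difference in expected topic prevalence between the 2017 and 2016 surveys,
# with credible intervals, for all 13 topics.
plot(eff, covariate = "year", method = "difference",
     cov.value1 = "2017", cov.value2 = "2016", topics = 1:13,
     xlab = "Change in topic prevalence, 2016 to 2017")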
Figure 6 places the effect of year of the sur-
vey into context by showing how the prevalence
of topics depends on the rating and the year of
TABLE 1: Topics, Keywords, and Related Comments
Topic and Keywords Exemplar Consumer Comments, as Written
Topic 1: Many things go wrong
Prob: car, thing, wrong, safety, ready, sounds, total
FREX: thing, wrong, ready, sounds, foolproof, total, car
Lift: ready, sounds, thing, wrong, freaks, automatic, faith
Score: car, wrong, thing, freaks, safety, sounds, ready
“Cruise control and maybe some of the other features would be ok but
so many things can go wrong and have in the past so it’s best to have a
human driver have input”
“too many things can go wrong and I want total control of the car”
“I don’t trust a self-driving car. Something can go wrong and the car owner
may not know how to fix or it what to do”
Topic 2: Errors and failures
Prob: errors, failure, hackers, made, chances, electronics, mechanical
FREX: errors, failure, hackers, made, chances, electronics, mechanical
Lift: expect, power, equipment, hackers, information, overriding, world
Score: room, errors, failure, mechanical, chances, hackers, electronics
“too much chance for error. driver errors happen, but a mechanical error
could be terrible”
“It’s man made, program by man subject to many errors”
“a human had to have made it to start and there is room for human error”
Topic 3: Trust when mature
Prob: trust, years, advanced, perfect, point, life, change
FREX: trust, point, years, life, early, unproven, perfect
Lift: mature, trustworthy, unproven, breakdown, early, unsure, untested
Score: trust, mature, advanced, years, perfect, point, life
“I do not trust this type technology at this point of development”
“wouldn’t trust it 100 percent, but gps are pretty good and so would trust
the self-driving technology as much as i trust gps”
“I trust the technology will be advanced to the point that it will be safe when
it’s available for purchase”
Topic 4: Technology improving
Prob: technology, fail, putting, improved, develop, fully, interested
FREX: develop, interested, technology, fully, fail, proof, improved
Lift: extensive, develop, approved, level, proof, interested, fine
Score: technology, extensive, fail, approved, develop, improved, fully
“Technology has advanced and improved at such a phenomenal rate that
I trust that their would be very few problems associated with this. This
technology already exists in some capacity already”
“Technology is at a peak and still improving. And all new technologies MUST
secure federal permission to function, and NO sane person/agency would
approve the use of an unsafe technology”
“This country continues to increase the level of technology and the security
level is very high”
Topic 5: Hacking & glitches
Prob: vehicle, hacking, happen, danger, concerns, glitch, problems
FREX: hacking, danger, glitch, potential, happen, crash, software
Lift: danger, easy, hacking, scares, trouble, glitch, potential
Score: hacking, vehicle, easy, happen, danger, glitch, problems
“Security protection is paramount. Already thieves are able to steal vehicles
with password cracking devices. Someone with malicious intent could hack
into the vehicle and take over control . . . nope, not happening with me”
“Anyone could hack the computer system and then use that vehicle to
harm others. It could be used to harm many or just drive the car with it’s
passengers to a dangerous situation.”
“System could be hacked. Computer could fail. Never seen a system without
a glitch; what happens when the glitch occurs?”
Topic 6: Computers make mistakes
Prob: computers, makes, machines, people, program, mistakes, break
FREX: machines, mistakes, computers, hands, break, wheel, sense
Lift: lazy, mistakes, nervous, sense, wheel, break, computers
Score: computers, machines, nervous, mistakes, makes, people,
program
“cause machine make mistakes too cause people program them”
“Because it is a machine, and machines are programmed by people who can
not imagine every instance when programming them. Additionally, they
break”
“computers can make mistakes”
Topic 7: Scary drivers and robots
Prob: driving, driver, input, scary, robot, prefer, smart
FREX: scary, robot, prefer, driver, pay, attention, input
Lift: touch, attention, prefer, robot, scary, instincts, pay
Score: driving, driver, input, touch, scary, prefer, robot
“I think that this could cause the driver to not pay enough attention while
driving. Too many distractions.”
“because people r dumb and talk on phones while driving so a robot driver
cant be any worse”
“I’m human. this robot [expletive] is new and scary.”
Topic 8: Safer than human
Prob: human, safer, variables, experience, idea, decisions, distraction
FREX: safer, day, idea, judgment, variables, experience, mind
Lift: judgment, day, eliminate, faster, fun, mind, respond
Score: human, fun, safer, idea, decisions, variables, distraction
“HUMANS ARE DISTRACTED AND EMOTIONAL. AUTODRIVE COULD BE
FASTER AND SAFER”
“the computer is faster than the human brain and can respond quicker in taffic”
“The AI plus the semiconductor can respond than human being quickly. And
it is a machine not like human emotional”
Topic 9: Control until proven
Prob: control, safe, proven, reliable, issues, giving, worried
FREX: proven, issues, giving, worried, safe, control, cost
Lift: ensure, giving, guarantees, issues, liability, limit, worried
Score: control, safe, proven, ensure, reliable, issues, giving
“I do not like giving up control and do not think the tech has progressed
sufficiently at this point to be safe and secure”
“not proven fail safe”
“not proven enough yet,, would feel out of control”
Topic 10: Tested for long time
Prob: tested, time, long, future, completely, depended, properly
FREX: tested, long, future, depended, auto, company, assume
Lift: open, public, assume, accurately, answer, buy, company
Score: tested, time, answer, future, long, depended, public
“I know it’s already being tested. I believe comprehensive testing would be
performed to ensure safety prior to being released to the public”
“It seems safe but would have to be tested a long period of time before I’d
fully agree”
“The technology has been being developed and tested for a long time and
will probably be very safe by the time it is allowed on the road”
Topic 11: Works good?
Prob: work, system, malfunction, good, situations, great, react
FREX: good, situations, great, malfunction, work, react, hard
Lift: excellent, judge, good, great, hard, react, situations
Score: work, malfunction, system, good, judge, situations, react
“It works and has been shown to work”
“I think it can work, but I do worry some because it doesn’t have the process
of thinking about what can happen in a situation and then correcting”
“Technology is great—when it works! But, if there is too much left to
technology, it’s bound to break or malfunction sooner or later. And,
when it does you could be placed or place others in dangerous or unsafe
situations”
Topic 12: Self-driving accidents
Prob: road, self-driving, accidents, lot, feature, conditions, responsible
FREX: road, self-driving, accidents, conditions, automated, traffic,
current
Lift: fashioned, pedestrian, prevent, slow, state, Tesla, unpredictable
Score: prevent, road, self-driving, accidents, feature, lot, traffic
“The Nation already has to many automobile accidents and traffic on streets
and highways are already to heavy, thus having driverless cars will most
likely increase both the number of accidents and increase the traffic on
streets and highways”
“We have already seen accidents involving driverless cars, going at a speed
of 30 mph or less. No thank you. I won’t ride in a driverless car”
“I think its likely possible, but I also have heard about the accidnet with
Teslas self driving vehicle so Im a little reluctant”
Topic 13: Feel uncomfortable
Prob: feel, operation, comfortable, functions, prove, actions, risky
FREX: feel, operation, functions, actions, risky, family, nice
Lift: eventually, family, actions, operation, risky, functions, nice
Score: feel, nice, operation, comfortable, prove, record, functions
“I don’t feel that technology has advanced enough for me to feel
comfortable riding in a vehicle that is operating on its own”
“I would much rather leave the vehicle in my own control. I don’t know if
I would feel comfortable completely relying on the vehicle to operate
almost blindly”
“I can’t explain it . . . I just don’t feel comfortable with a car operating on its
own. I like to be in charge of the car”
Note. FREX = Frequency and Exclusivity.
the survey. The narrow confidence intervals and
the orderly effects indicate that topics are mean-
ingfully associated with different attitudes
toward self-driving vehicles. The topic “Tested
for long time” is most strongly associated with
the rating of “Probably would” and so reflects
the conditional nature of the rating, as compared
with “Definitely would.” Interestingly, the topic
“Safer than human” is most prominent in the
“Definitely would” comments. In contrast,
“Hacking & glitches” is strongly associated with
“Definitely would not” and “Probably would
not.” This graph also shows that the year of the
survey has a relatively small effect, with many
of the lines almost perfectly overlying each
other, but “Self-driving accidents,” “Trust when
mature,” and “Computers make mistakes” devi-
ate from this pattern. Overall, the effect of the
year is small, but the effect of the rating is large.
DISCUSSION
Our overall objective was to use structural
topic modeling to understand trust in self-driving
vehicles. The 13 topics show that this technique
holds promise for analyzing qualitative data
quantitatively. The words and comments most
associated with these topics provide a window
into how people think about self-driving vehicles
that is more nuanced than previous analyses
of similar data (Hulse, Xie, & Galea, 2018;
Kyriakidis, Happee, & De Winter, 2015). The
topics capture the gist of comments in a way
that might otherwise require many hours of hand
coding. Importantly, these topics help explain
what motivates positive and negative attitudes
toward vehicle automation and highlight the
multi-faceted nature of trust and risk as it relates
to self-driving vehicles.
The results support our hypothesis that the
topics reveal aspects of trust and risk that
change over time and underlie drivers’ attitudes
toward self-driving vehicles. Topic prevalence
varied as a function of survey year and Likert-
type ratings. A fatal crash involving a Tesla vehicle in May 2016 attracted substantial media attention and might have contributed to the less favorable ratings in the 2017 survey (NTSB, 2017), which might underlie the increase in the prevalence of the topic “Self-driving accidents.”

Figure 3. Links between topics based on topic correlations. The width of the link reflects the strength of association, and the size of the nodes reflects the prevalence of each topic.
Topics reflect both the optimism and the concerns of drivers. The topics of “Technology improving” and “Safer than human” reflect optimism that
engineers will address the various challenges fac-
ing self-driving vehicles and that they will be
safer than human drivers. This optimism contrasts
with the topic of “Hacking & glitches,” which
reflects the feeling that reliability and security
problems might plague self-driving vehicles as they do computers and smartphones.
Dimensions of Trust
The results confirm previously identified
dimensions of trust (Lee & See, 2004): its
purpose, the process underlying its operation
and creation, and its performance. Most topics
relate to the performance dimension of trust and
focus on errors the automation might make and situations it
might not be able to handle, such as “Safer than
human” and “Self-driving accidents.” Several
topics relate to the process dimension of trust,
but not in the way the dimension has typically
been defined. Instead, topics such as “Technol-
ogy improving” and “Tested for long time”
refer to the process of technology development
and reflect the belief that manufacturers and
regulatory agencies will assure the safety of
vehicle automation. One topic directly touches
on the purpose dimension of trust: “Hacking &
glitches.” Here, the concern is that the automa-
tion may be vulnerable to being redirected
in a way that is contrary to its purpose and the
drivers’ goals.
Performance, process, and purpose represent increasing levels of attributional abstraction, and automation infelicities at each level might correspond to feelings of disappointment, violation, and betrayal (McLeod, 2015). Betrayal
reflects a sense that the automation is acting in a
manner that is contrary to the person’s goals, as when the automation is hacked by a malicious third party, reflected in the topic “Hacking & glitches.”

Figure 4. The prevalence of each topic. The gray vertical line shows the overall mean prevalence.

A violation occurs when manufactur-
ers or governments fail to uphold social norms
for safety assurance, as when the safety assur-
ance processes are revealed to be lacking or
have been circumvented. The topic “Tested for
long time” reflects such social norms. Disap-
pointment reflects the unpleasant, but to some
degree expected, imperfections of automation
functioning in a complex world—technical risk.
The topics “Computers make mistakes” and
“Errors and failures” reflect such technical risk.
Violation and betrayal reflect increasing degrees
of social risk, which has a greater influence on
trust than technical risk (Fetchenhauer & Dun-
ning, 2012; Molm, Takahashi, & Peterson, 2000;
Slovic, 1999).
Basis of Trust
The construct of trust in this study differs from
that where trust depends on direct experience.
Here, respondents reported the level of trust they anticipate having once the technology is available; they had no direct experience to draw upon.
When people lack experience with a system, they
substitute experiences with related systems—
relational trust—as well as expectations regard-
ing government agencies and society—societal
trust (Kelton, Fleischmann, & Wallace, 2008).
Without direct experience with self-driving
vehicles, people rely on analogous experiences
with computers, technology, and brands (Hoff
& Bashir, 2015; Lee & See, 2004). For example,
the topics “Computers make mistakes” and “Technology improving” show that trust in automated
vehicles is grounded in experiences with other
computer and mechanical systems. This suggests
that people’s response to self-driving vehicles
depends on their experience accumulated through
interactions and associated relationships with
other technology—relational trust (Rousseau,
Sitkin, Burt, & Camerer, 1998). To avoid self-driving vehicles being seen as computers that are prone to glitches and to privacy and security problems, manufacturers should stress how different these vehicles are and how different the associated design process is.

Figure 5. The difference in prevalence of each topic and 95th percentile CIs for the years 2016 and 2017. Positive values indicate an increase in prevalence from 2016 to 2017. CI = confidence interval.

For example, manufacturers
should develop safety cases, which make the
logical arguments for safety explicit and ground
these arguments in supporting evidence (Wagner
& Koopman, 2015). Different processes are
needed for self-driving cars, and the differences
need to be communicated to the public.
The lack of direct experience also leads people
to base their trust on more general expectations of institutional and societal response to technology, social norms of how government agencies assure safety, and risk measures provided by scientists—societal trust (Zucker, 1986).

Figure 6. The relative prevalence of each topic and 95th percentile CIs for each rating for the years 2016 and 2017. CI = confidence interval.

This suggests a
potential backlash if safety benefits are framed in
terms of eliminating human error and the associ-
ated 94% of crashes. If “automation errors” arise
and predictions of a crash-free world fail to mate-
rialize, societal trust might suffer. The topic “Trust
when mature” reflects this provisional trust that is
both conditional on experience with the technol-
ogy and the intent of developers of the technol-
ogy. It goes beyond the specific vehicle experi-
ence and is grounded in the brands, manufactur-
ers, and government agencies that will ensure its
testing and prove its safety (Claybrook & Kildare,
2018). The topic “Tested for long time” reflects
comments from some who believe the govern-
ment and companies will ensure safety through a thorough process of testing and certification. Les-
sons learned from the public backlash to geneti-
cally modified food suggest that presenting data
is not sufficient to garner public acceptance.
Acceptance depends, in part, on building societal
trust through institutional transparency and inte-
grating public concerns into policies (Frewer
et al., 2004). Perhaps, most importantly, these
practices and policies must actually be trustwor-
thy so that the technology provides the advertised
safety benefits.
Trust, Control, and Dread Risk
The topics “Hacking & glitches,” “Control until proven,” and “Feel uncomfortable” all suggest that people worry about losing
control of their vehicles, succinctly captured in
one consumer’s comment, “I want total control
of my vehicle.” Two dimensions of hazardous
situations affect perceived risk: whether the hazard is controllable and limited in its consequences,
and whether it is knowable and observable
(Slovic, 1987). Uncontrollable, consequential,
and unobservable risks constitute dread risk.
Nuclear reactor accidents and terrorist attacks
are uncontrolled and unobserved and are per-
ceived as dread risk. People perceive dread risks
as 1,000 times riskier than known and control-
lable risks (Slovic, 1987), and so dread risk can
disproportionately affect policy and behavior.
Four interacting effects could contribute to
diminished trust and the emergence of dread risk
(Slovic, 1999). First, events that undermine trust
are more salient than those that promote it, as in
the overreaction to spectacular crashes caused
by self-driving vehicles compared with crashes
avoided (Shariff, Bonnefon, & Rahwan, 2017).
Second, when positive events are noticed, they
are weighted less than negative events. Third,
stories about negative events are viewed as more
credible than those about positive events. Fourth,
diminished trust makes it less likely people will
use a self-driving vehicle and experience posi-
tive events that might help trust recover. Because
of these factors, distrust that develops before
people experience a system can persist even
after they experience the system.
Figure 7 integrates some of the factors that
might change the perception of driving risk from
one that is controlled and known, to something
closer to dread risk, which can undermine accep-
tance of automated vehicles. As this figure sug-
gests, trust plays a central role in risk perception, particularly in the absence of direct experience (Earle, Siegrist, & Gutscher, 2010).

Figure 7. The relationship between trust bases and dimensions and the effect of trust on perceived risk and acceptance.

It also sug-
gests that the basis of trust in early stages of
deployment will be societal and relational and
that betrayal and violation from these sources
could be very damaging for trust (Frewer et al.,
2004).
The link from trust to the performance dimen-
sion of trust indicates that trust can influence the
selection and interpretation of evidence (Hors-
burgh, 1960; Slovic, 1999). Lower trust makes it
more likely that negative performance will be
observed and that positive information will be
interpreted negatively, undermining trust and
further increasing the tendency to monitor the
automation for evidence of poor performance.
The recursive nature of this feedback loop means
that the influence of trust on acceptance affects
the experiential basis of trust, leading to either
vicious or virtuous cycles. In a virtuous cycle,
increased trust can lead to increased use, which in turn can foster greater trust and further use (Lee &
See, 2004). When considering how safe is safe
enough for self-driving vehicles, these results
suggest that it may be important to not just
increase safety but also communicate those
achievements in a way that leads to appropriate
trust.
Limitations and Generalization
This analysis has substantial limitations that
should temper its interpretation. First, the sur-
vey sampled drivers who had not experienced
self-driving vehicles and so they were forced
to imagine that experience. With dramatically
transformative technology, the availability bias
makes it likely that they will be more influenced
by the crashes that have occurred with various
types of automated vehicles than the new expe-
riences self-driving vehicles might afford.
Beyond this fundamental limitation, text anal-
ysis and structural topic modeling do not reveal a
single definitive interpretation of the comments.
Selecting a model with a different number of top-
ics might lead to different insights. Likewise, the
specific preprocessing of the text such as stem-
ming, spellchecking, term aggregation, and stop
word elimination all affect the outcome of the
analysis. For example, comments such as “don’t
trust” and “trust a lot” might both be reduced to
“trust” depending on which stop word dictionary
was applied to the data set, which would have
obvious implications for how a topic associated
with trust might be associated with negative and
positive Likert-type ratings. Although structural
topic modeling provides a quantitative method to
analyze qualitative data, it contains many subjec-
tive elements that merit attention, just as they do
with qualitative methods.
Generalizing from data collected from those
who have not experienced a self-driving vehicle
is challenging because the basis of trust might
change with direct experience. Different driver
demographic groups might have different rea-
sons why they trust or distrust automation, and
so, the results of this study might not generalize
uniformly over the population. Such possibili-
ties could be explored with further topic model
analysis. Responding to a survey forces people
to imagine possibilities. In contrast, using an
actual self-driving vehicle might reveal unimag-
ined possibilities and would provide a visceral experience that a survey cannot capture. More gen-
erally, the overall assumption that trust guides
behavior merits questioning. Most conceptual-
izations of trust and acceptance assume that
attitudes guide behavior (Davis, 1993; Gefen,
Karahanna, & Straub, 2003). Trust can be moti-
vated and rationalized, resulting in end-directed
trust rather than truth-directed trust that is
grounded in the trustworthiness of the system
(McLeod, 2015). The practical benefits experi-
enced with the technology can outweigh the
need for truth-directed trust. Consequently, end-
directed trust can emerge as behavior guides
attitudes, which can occur when people align
their attitudes with behavior to mitigate cogni-
tive dissonance (Ghazizadeh, Lee, & Boyle,
2012; Sharot, Velasquez, & Dolan, 2010). If
automation fills a compelling need and people
find themselves using it, their attitudes may
shift to align with their behavior.
As people directly experience self-driving
vehicles, new challenges to trust may emerge
that were not expressed by the survey respon-
dents. Partially automated vehicles exist today,
but fully automated self-driving vehicles that
operate on all roads and in all weather condi-
tions may never exist. Self-driving vehicles will
have a limited operating domain that may lead to
“availability anxiety,” similar to range anxiety
expressed with electric vehicles. Because trust is
often specific and contextually grounded—peo-
ple trust system X to do Y in context Z—it is
unclear how experiential, relational, and societal
trust will mediate specific trust situations. Like-
wise, until survey respondents actually experi-
ence self-driving vehicles, they might pay rela-
tively little attention to availability, privacy, and
fairness, relative to the more salient issues of
safety (Kaur & Rampersad, 2018). For example,
the algorithms for sending a vehicle to pick up a
passenger might be unfair and biased toward
certain socioeconomic groups, similar to the
biases that have emerged in Uber’s ride request
algorithms (Hanrahan, Ma, & Yuan, 2017), and
algorithms more generally (Courtland, 2018).
CONCLUSION
Structural topic modeling reveals reasons
underlying drivers’ ratings of vehicle automa-
tion, which align with factors typically associ-
ated with dimensions of automation trust and
trustworthiness. The analysis reveals that the
basis of trust differs when the automation has
not been directly experienced, leading to a focus
on societal and relational bases rather than the
more typically studied experiential basis. A
particularly important finding concerns whether
automation infelicities are viewed as disap-
pointments, violations, or betrayals. Mundane
disappointment might annoy, troubling privacy
and safety violations might spoil a brand, and
betrayal associated with systemic failure to
assure safety and hacking might undermine
the success of the technology. This and related
research suggest that violations and betrayals
might have substantially greater consequences
than simply disappointing performance. Given
the limits of the current technology and the sur-
vey, the analysis informs the debate concerning
how safe is safe enough for automated vehicles
and provides initial indicators of what makes
such vehicles feel safe and trusted.
ACKNOWLEDGMENTS
Comments from the Cognitive Systems Labora-
tory members, and particularly A. Dinparastdjadid
and E. Chiou, greatly improved the manuscript.
KEY POINTS
• Structural topic modeling provides a window into what guides people’s ratings of trust.
• The comments suggest concerns that reflect dread risk and the associated possibility that people might overestimate the risk of self-driving vehicles.
• When technology has not been directly experienced, societal and relational bases guide trust rather than the more typically studied experiential basis.
• Automation infelicities can be viewed as disappointments, violations, or betrayals, with violations and betrayals being more damaging and more common with societal and relational bases of trust.
ORCID iD
John D. Lee https://orcid.org/0000-0001-9808-2160
REFERENCES
Beggiato, M., & Krems, J. F. (2013). The evolution of mental
model, trust and acceptance of adaptive cruise control in rela-
tion to initial information. Transportation Research Part F:
Traffic Psychology and Behaviour, 18, 47–57. doi:10.1016/j
.trf.2012.12.006
Bellem, H., Thiel, B., Schrauf, M., & Krems, J. F. (2018). Comfort
in automated driving: An analysis of preferences for different
automated driving styles and their dependence on personality
traits. Transportation Research Part F: Traffic Psychology and
Behaviour, 55, 90–100. doi:10.1016/j.trf.2018.02.036
Blei, D. M. (2012). Probabilistic topic models. Communications of the ACM, 55(4), 77–84. doi:10.1145/2133806.2133826
Bürkner, P. (2017). brms: An R package for Bayesian multilevel
models using Stan. Journal of Statistical Software, 80(1),
1–28. doi:10.18637/jss.v080.i01
Choi, J. K., & Ji, Y. G. (2015). Investigating the importance of trust
on adopting an autonomous vehicle. International Journal of
Human-Computer Interaction, 31, 692–702. doi:10.1080/104
47318.2015.1070549
Chuang, J., Manning, C. D., & Heer, J. (2012). Termite: Visual-
ization techniques for assessing textual topic models. In Pro-
ceedings of the international working conference on advanced
visual interfaces—AVI (pp. 74–77). New York: Association for
Computing Machinery.
Claybrook, J., & Kildare, S. (2018). Autonomous vehicles: No
driver. . . . no regulation? Science, 361, 36–37. doi:10.1126/
SCIENCE.AAU2715
Courtland, R. (2018). The bias detectives. Nature, 558, 357–360.
Davis, F. D. (1993). User acceptance of information technol-
ogy: System characteristics, user perceptions and behavioral
impacts. International Journal of Man-Machine Studies, 38,
475–487.
Dumais, S. T., & Landauer, T. K. (1997). A solution to Plato’s prob-
lem: The latent semantic analysis theory of acquisition, induc-
tion and representation of knowledge. Psychological Review,
104, 211–240.
Earle, T. C., Siegrist, M., & Gutscher, H. (2010). Trust, risk percep-
tion and the TCC model of cooperation. In M. Siegrist, T. C.
Earle, & H. Gutscher (Eds.), Trust in risk management: Uncer-
tainty and scepticism in the public mind (pp. 1–50). London,
England: Earthscan.
Fetchenhauer, D., & Dunning, D. (2012). Betrayal aversion versus
principled trustfulness—How to explain risk avoidance and
risky choices in trust games. Journal of Economic Behavior and
Organization, 81, 534–541. doi:10.1016/j.jebo.2011.07.017
Frewer, L., Lassen, J., Kettlitz, B., Scholderer, J., Beekman, V.,
& Berdal, K. G. (2004). Societal aspects of genetically modi-
fied foods. Food and Chemical Toxicology, 42, 1181–1193.
doi:10.1016/j.fct.2004.02.002
Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM
in online shopping: An integrated model. MIS Quarterly, 27,
51–90.
Ghazizadeh, M., Lee, J. D., & Boyle, L. N. (2012). Extending the
technology acceptance model to assess automation. Cognition,
Technology & Work, 14, 39–49. doi:10.1007/s10111-011-0194-3
Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic
accidents. Psychological Science, 15, 286–287. doi:10.1111/
j.0956-7976.2004.00668.x
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de
Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of fac-
tors affecting trust in human-robot interaction. Human Factors,
53, 517–527. doi:10.1177/0018720811417254
Hanrahan, B. V., Ma, N. F., & Yuan, C. W. (2017). The roots of
bias on Uber. In Proceedings of 15th European conference
on computer-supported cooperative work (pp. 1–17). Copen-
hagen, Denmark: European Society for Socially Embedded
Technologies.
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating
empirical evidence on factors that influence trust. Human Fac-
tors, 57, 407–434. doi:10.1177/0018720814547570
Hoffman, R. R., Lee, J. D., Woods, D. D., Shadbolt, N., Miller, J.,
& Bradshaw, J. M. (2009). The dynamics of trust in cyberdo-
mains. IEEE Intelligent Systems, 24(6), 5–11.
Horsburgh, H. J. N. (1960). The ethics of trust. The Philosophical
Quarterly, 10, 343–354.
Hulse, L. M., Xie, H., & Galea, E. R. (2018). Perceptions of auton-
omous vehicles: Relationships with road users, risk, gender and
age. Safety Science, 102, 1–13. doi:10.1016/j.ssci.2017.10.001
Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the
American Statistical Association, 90, 773–795.
Kaur, K., & Rampersad, G. (2018). Trust in driverless cars: Inves-
tigating key factors influencing the adoption of driverless cars.
Journal of Engineering and Technology Management, 48,
87–96. doi:10.1016/j.jengtecman.2018.04.006
Kelton, K., Fleischmann, K. R., & Wallace, W. A. (2008). Trust in digital information. Journal of the American Society for Information Science and Technology, 59, 363–374. doi:10.1002/asi.20722
Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L. J., & Nass, C.
(2015). Why did my car just do that? Explaining semi-auton-
omous driving actions to improve driver understanding, trust,
and performance. International Journal on Interactive Design
and Manufacturing: IJIDeM, 9, 269–275. doi:10.1007/s12008-
014-0227-2
Kyriakidis, M., Happee, R., & De Winter, J. C. F. (2015). Pub-
lic opinion on automated driving: Results of an international
questionnaire among 5000 respondents. Transportation
Research Part F: Traffic Psychology and Behaviour, 32, 127–
140. doi:10.1016/j.trf.2015.04.014
Lee, J. D., & Kolodge, K. (2018). Understanding attitudes towards self-driving vehicles: Quantitative analysis of qualitative data. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62, 1399–1403.
Lee, J. D., Liu, S.-Y., Domeyer, J., & Dinparastdjadid, A. (in press).
Assessing driver acceptance of fully automated vehicle with
a two-part model of intervention tendency and magnitude.
Human Factors.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for
appropriate reliance. Human Factors, 46, 50–80.
Mays, N., & Pope, C. (2000). Qualitative research in health care:
Assessing quality in qualitative research. British Medical Jour-
nal, 320, 50–52. doi:10.1136/bmj.320.7226.50
McLeod, C. (2015). Stanford encyclopedia of philosophy: Trust.
Center for the Study of Language and Information, Stanford
University. Retrieved from https://plato.stanford.edu/entries/
trust/
Molm, L. D., Takahashi, N., & Peterson, G. (2000). Risk and trust
in social exchange: An experimental test of a classical proposi-
tion. American Journal of Sociology, 105, 1396–1427.
National Transportation Safety Board. (2017). Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck near Williston, Florida, May 7, 2016. Retrieved from https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR1702.pdf
Nordhoff, S., de Winter, J., Kyriakidis, M., van Arem, B., & Hap-
pee, R. (2018). Acceptance of driverless vehicles: Results from
a large cross-national questionnaire study. Journal of Advanced
Transportation, 2018, 5382192. doi:10.1155/2018/5382192
Price, M. A., Venkatraman, V., Gibson, M. C., Lee, J. D., & Mutlu,
B. (2016). Psychophysics of trust in vehicle control algorithms
(SAE technical paper). doi:10.4271/2016-01-0144
Roberts, M. E., Stewart, B. M., & Airoldi, E. M. (2016). A model of
text for experimentation in the social sciences. Journal of the
American Statistical Association, 111, 988–1003. doi:10.1080/
01621459.2016.1141684
Roberts, M. E., Stewart, B. M., Tingley, D., Lucas, C., Leder-Luis,
J., Gadarian, S. K., & Rand, D. G. (2014). Structural topic
models for open-ended survey responses. American Journal of
Political Science, 58, 1064–1082. doi:10.1111/ajps.12103
Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998).
Not so different after all: A cross-discipline view of trust.
Academy of Management Review, 23, 393–404.
Shariff, A., Bonnefon, J. F., & Rahwan, I. (2017). Psychological
roadblocks to the adoption of self-driving vehicles. Nature
Human Behaviour, 1, 694–696. doi:10.1038/s41562-017-
0202-6
Sharot, T., Velasquez, C. M., & Dolan, R. J. (2010). Do decisions
shape preference? Evidence from blind choice. Psychological
Science, 21, 1231–1235. doi:10.1177/0956797610379235
Silge, J., & Robinson, D. (2016). tidytext: Text mining and analysis
using tidy data principles in R. The Journal of Open Source
Software, 1(3), 37. doi:10.21105/joss.00037
Slovic, P. (1987). Perception of risk. Science, 236, 280–285.
Slovic, P. (1999). Trust, emotion, sex, politics, and science: Survey-
ing the risk-assessment battlefield. Risk Analysis, 19, 689–701.
Sunstein, C. R., & Zeckhauser, R. (2010). Dreadful possibilities, neglected probabilities. In E. Michel-Kerjan & P. Slovic (Eds.), The irrational economist: Making decisions in a dangerous world (pp. 116–123). New York, NY: Public Affairs Press.
Verberne, F. M. F., Ham, J., & Midden, C. J. H. (2015). Trusting a
virtual driver that looks, acts, and thinks like you. Human Fac-
tors, 57, 895–909. doi:10.1177/0018720815580749
Wagner, M., & Koopman, P. (2015). A philosophy for developing
trust in self-driving cars. Road Vehicle Automation, 2, 163–
171. doi:10.1007/978-3-319-19078-5_14
Wasserstein, R. L., & Lazar, N. A. (2016). The ASA statement on
p-values: Context, process, and purpose. The American Statis-
tician, 70, 129–133. doi:10.1080/00031305.2016.1154108
Wickham, H. (2016). ggplot2: Elegant graphics for data analysis.
New York, NY: Springer.
Zucker, L. G. (1986). Production of trust: Institutional sources of
economic structure, 1840-1920. Research in Organizational
Behavior, 8, 53–111.
John D. Lee is the Emerson Electric Professor in the
Department of Industrial and Systems Engineering
at the University of Wisconsin-Madison. He gradu-
ated in 1992 with a PhD in Mechanical Engineering
from the University of Illinois.
Kristin Kolodge is the Executive Director, Driver
Interaction and HMI, at J.D. Power. She graduated in
2000 with an MS in Engineering Management from
the University of Michigan.
Date received: January 18, 2019
Date accepted: August 2, 2019