A Framework for Analyzing and Calibrating Trust in Automated Vehicles

Alexander G. Mirnig, Center for Human-Computer Interaction, Christian Doppler Laboratory "Contextual Interfaces" & Department of Computer Sciences, University of Salzburg, 5020 Salzburg, Austria, firstname.lastname@sbg.ac.at

Philipp Wintersberger, CARISSMA, University of Applied Sciences Ingolstadt, 85049 Ingolstadt, Germany, philipp.wintersberger@carissma.eu

Christine Sutter, Ergonomics & System Design, Institute of Ergonomics & Human Factors, Mechanical Engineering, Technische Universität Darmstadt, 64287 Darmstadt, Germany, c.sutter@iad.tu-darmstadt.de

Jürgen Ziegler, Interactive Systems Group, University of Duisburg-Essen, 47057 Duisburg, Germany, juergen.ziegler@uni-due.de
Abstract
When predicting the traffic of the future and the acceptance of automated vehicles, we often assume that one of the major challenges will be to foster overall trust in automated vehicles for effective and safe mixed-traffic operations. In this paper, we propose a more faceted viewpoint and argue for the benefits of – and provide an initial framework for – calibrated trust, achieved by fostering both trust and distrust in automated vehicles. If drivers know exactly what their vehicle is and is not capable of, they are more likely to react properly and be prepared when handover requests or other unexpected circumstances occur.
Author Keywords
Accessible Computing; Assistive Technology;
Automated Vehicles.
ACM Classification Keywords
H.5.2. [Information interfaces and presentation (e.g.,
HCI)]: User Interfaces; J.4 [Computer Applications]:
Social and Behavioral Sciences–Psychology.
Introduction and Empirical Background
In contrast to today’s highly complex automated
systems, which are mainly controlled by experts
(airplanes, power plants, etc.), the operators of the
future will be the much more diverse class of consumers. From a Human Factors perspective, trust in technical systems and their trustworthiness is a major challenge, aptly captured as a "... paradox facing auto engineers: how to design self-driving cars that feel trustworthy while simultaneously reminding their occupants that, no matter how pristine a given model's safety record, no driver - human or artificial - is perfect. How, in other words, to free drivers from the onus of driving, while burdening them with the worry that, at any moment, they will need to take back control." [1]. The quote refers to semi-automated vehicles at SAE Level 3 (where the driver will occasionally and at short notice be required to take over to ensure safety) or lower levels of automation [2]. Semi-automated vehicles at Level 3 might be among the first automated systems available to the public that put drivers in charge of complex computer systems in highly safety-critical environments. The world's first fatality in semi-automated driving (the accident with Tesla's Autopilot system on May 7, 2016) can be seen as a trust issue (overtrust, in this case), as the driver may not have been aware of the system's limits and/or did not monitor the environment carefully enough to compensate for the system's failure. This is already an issue at lower levels of automation: Dickie and Boyle [3] showed that a large group of drivers were not aware of the boundaries of their adaptive cruise control (ACC) system and used it in situations for which it was not suitable. Trust is one of the most critical and widely debated factors for user acceptance of automated functions in vehicles, ranging from more restricted driver assistance (in Level 0 to 3 vehicles) to fully automated vehicles (Level 5). The factors influencing users' trust in automated vehicles cover a wide range of aspects, not all of which relate to the vehicle itself or the reliability and safety of its operation – one example is the credibility of the engineers who built the car [4].
In accordance with Ekman et al. [5] and de Visser et al. [6], trust can be defined as follows: trust is a relation between at least two agents (trustor and trustee; see Clases [7]) expressing an expectation that one agent (the trustee) will help achieve the other agent's (the trustor's) goals in a situation characterized by uncertainty and vulnerability. Depending on the agents participating in a trust relation, we can distinguish between user-user trust (also known as interpersonal trust), user-system trust, and system-system trust. Overtrust occurs when the expectation placed in the other agent is higher than that agent's capability to help achieve the respective goal; undertrust occurs when the expectation is lower than that agent's capability. A trust relation in which neither over- nor undertrust occurs is called calibrated trust. Accordingly, trust calibration is the process of balancing user trust to the required level. Distrust, by contrast, is a relation between at least two agents expressing an expectation that an agent will not (or not to a sufficient degree) help achieve another agent's goals in a situation characterized by uncertainty and vulnerability. In the present paper, we focus on user-system trust and its application to users of semi- and fully automated vehicles.
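To make these distinctions concrete, the following minimal sketch (our illustration, not part of the cited frameworks; the scalar values and tolerance band are simplifying assumptions) classifies a trust relation by comparing the trustor's expectation with the trustee's actual capability for a given goal:

```python
from enum import Enum

class TrustState(Enum):
    OVERTRUST = "overtrust"          # expectation exceeds capability
    UNDERTRUST = "undertrust"        # expectation falls short of capability
    CALIBRATED = "calibrated trust"  # expectation matches capability

def classify(expectation: float, capability: float,
             tolerance: float = 0.1) -> TrustState:
    """Compare the trustor's expectation (0..1) with the trustee's
    actual capability (0..1) for one goal. Scalar trust values and
    the tolerance band are illustrative simplifications."""
    if expectation > capability + tolerance:
        return TrustState.OVERTRUST
    if expectation < capability - tolerance:
        return TrustState.UNDERTRUST
    return TrustState.CALIBRATED

# A driver expecting more than an ACC system can actually deliver:
print(classify(expectation=0.9, capability=0.6))  # TrustState.OVERTRUST
```

Trust is of course not a scalar; the point of the sketch is only that calibration is a comparison between expectation and capability, with overtrust and undertrust as the two failure directions.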
Related Work
To define user-system trust, Lee and See [8] adapted the interpersonal trust definition by Mayer et al. [9] (as summarized by Ekman et al. [5]): "trust is built on the possibility to observe the system's behavior (performance), understand the intended use of the system (purpose), as well as understand how it makes decisions (process)". Ever since Paul Fitts's [10] suggestion that humans are poor at monitoring automated systems (the HABA-MABA principle), the prevailing idea was to take humans completely out of the loop, until Norman [11] argued that many prominent catastrophes were the result of missing or inappropriate feedback rather than of pure human error. Parasuraman and Riley [12] introduced the terms use, misuse, and disuse as trust-based modes of interaction with automated systems, and Muir [13] called the process of establishing appropriate trust "calibration of trust". Ekman et al. [5] presented a theoretical framework for trust in vehicles, stating that "a holistic approach is necessary, as trust development starts long before a user's first contact with the system and continues long thereafter", and de Visser et al. [6] proposed a framework for trust cue calibration from a more abstract viewpoint.
Motivation
Trust in automated systems is a multifaceted and hard-to-predict phenomenon. In automated driving, we often strive for acceptance of and trust in automated vehicles. However, like any system, automated vehicles are fallible – a trait they share with humans. We consider the increasing automation of vehicles a cooperative context, where both agents have certain strengths and weaknesses and one compensates for the other. This entails actually knowing what those weaknesses are and not blindly trusting in capabilities that might not exist. Thus, there are situations where distrust in automation is just as valuable as trust – it all depends on the task and its context, and it is not necessarily contrary to a disposition of overall trust in the automated system. For a better understanding of user-system trust, we see a need for a more systematic analysis of trust-related factors, one that can help designers and developers understand the issues and explore the design space of potential trust-supporting functions in a more structured way. In the following, we introduce a framework that aims to provide such an analysis and design space by defining different levels of function automation as well as different levels of information processing required for a successful user-system-environment interaction. We specifically want to contrast undertrust with distrust: undertrust implies a mismatch between expectation and an agent's actual capabilities, whereas distrust makes no such implication, since distrust can be warranted. This should allow for a more detailed analysis of driving tasks and of decisions regarding the capabilities required to execute each subtask. We can then design for calibrated trust via specifically targeted HMI design solutions that evoke either trust or distrust, so that neither over- nor undertrust occurs.
Initial Framework

Framework overview: The levels of function (y-axis) are operational, tactical, and strategic (from lowest to highest). The levels of information processing (x-axis) are perceive, understand, predict, and adapt. Trust calibration: each driving task can be split into subtasks via the framework; depending on whether the human or the machine is better suited to complete a certain subtask, HMI design should foster either trust (where the machine is more capable) or distrust (where the human is more capable). When the expectations of trustor and trustee regarding the respective goal match, a state of calibrated trust is achieved.
Starting with the functional levels: the operational level is the lowest level of driving functions and includes individual, elementary driving tasks such as braking, turning the wheel, or checking the mirror(s). The tactical level is the middle level of driving functions; tasks on the tactical level are usually referred to as maneuvers and consist of several operational tasks. The strategic level is the highest level and is reserved for overall tasks; the most common example of a driving task on the strategic level is route planning. For modelling the interaction between user, system, and environment, we follow a bottom-up approach. The perceptual level ("perceive") is the lowest and comprises just the registration and
(bottom-up) transfer of sensory information for further
processing. Information processing for a successful
user-system-environment interaction can either aim to
explain (“understand”) certain actions or events, to
anticipate (“predict”) certain outcomes or
consequences, or to “adapt” to changes in the user-
system-environment relationship. In their design
methodology for trust cue calibration, de Visser et al.
[6] pursued a slightly different approach using
perception, comprehension, projection, decision, and
execution. In our framework, we introduce adaptation
as a third challenge in the user-system-environment
interaction. Adaptation refers to the flexibility of an
information processing system to rapidly compensate
for and adapt to changes. The framework works in
three steps: In step one, individual vehicle tasks are
split up into parts via functional levels on the y-axis
and stages of information processing on the x-axis. This
covers the system side of the framework. In step two,
each individual driving task part from step one is then
matched to trust-relevant factors for the human
individual. This covers the human side of the
framework. In step three, the trust-relevant factors can
then be targeted individually with HMI design solutions,
depending on whether trust or distrust in the vehicle’s
capabilities regarding each subtask should be fostered
in an implementation.
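As a rough sketch of how these three steps could be represented in software (a hypothetical encoding of ours, not an implementation the framework prescribes; all capability assignments below are illustrative), each cell pairs a functional level with an information-processing stage, records the more capable agent, and thereby yields the HMI design goal for that subtask:

```python
from enum import Enum

class Level(Enum):       # functional levels (y-axis)
    OPERATIONAL = 1
    TACTICAL = 2
    STRATEGIC = 3

class Stage(Enum):       # information-processing stages (x-axis)
    PERCEIVE = 1
    UNDERSTAND = 2
    PREDICT = 3
    ADAPT = 4

class Agent(Enum):
    HUMAN = "human"
    MACHINE = "machine"

# Steps one and two: split a driving task into subtasks (cells) and
# record, per cell, which agent is better suited (illustrative values).
better_agent: dict[tuple[Level, Stage], Agent] = {
    (Level.OPERATIONAL, Stage.PERCEIVE): Agent.MACHINE,   # object perception
    (Level.OPERATIONAL, Stage.UNDERSTAND): Agent.HUMAN,   # object recognition
    # ... remaining cells filled in analogously
}

# Step three: derive the HMI design goal for each subtask.
def hmi_goal(cell: tuple[Level, Stage]) -> str:
    if better_agent[cell] is Agent.MACHINE:
        return "foster trust"     # machine more capable: encourage reliance
    return "foster distrust"      # human more capable: encourage monitoring

print(hmi_goal((Level.OPERATIONAL, Stage.UNDERSTAND)))    # foster distrust
```

In a fuller version, step two would extend each cell with per-user, trust-relevant data rather than a single agent label.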
Example: Overtaking
In Table 1, we provide an example of how a driving task – here, overtaking on a rural road – can be split up into subtasks with the framework. Each subtask contains a number of decisions to be made and/or actions to be executed, depending on the levels of function and information processing. We can now decide individually for each subtask whether a human or a machine agent should primarily perform it. Currently available advanced driver assistance systems (ADAS) work very well on the perceptual level (object perception), and they complement and enhance human capabilities or substitute for the human agent on the operational, tactical, and strategic levels (e.g., forward collision warning, adaptive cruise control, traffic sign recognition). However, ADAS often fail to correctly "understand" the perceived object (object recognition) and have to be monitored by the human agent.
Following the suggested steps allows one to determine the footprints of the involved entities. A footprint in our framework is a subset of the fields/subtasks that fulfills a condition such as "supported by the automation" or "information required for novice users". This allows the framework to be used for different purposes. First, a manufacturer or vehicle designer can evaluate the footprint of a specific vehicle's capabilities on the individual high-level tasks. A problem with vehicle/ADAS usage nowadays is that some users utilize combinations of Level 2 ADAS in the same way as they would interact with a Level 3 vehicle. For instance, a vehicle combining adaptive cruise control (ACC) and a lane keeping assist system (LKAS) might in some situations perform similarly to a full Level 3 automated driving function (highway driving) but will definitely fail in others (ACC fails in narrow curves or cannot detect pedestrians). Comparing both vehicles will result in completely different footprints in our framework. Designing vehicle HMIs with respect to such differences could thus prevent misuse and foster human distrust in subtasks the automation cannot cope with. Second, not only vehicles but also users/personalities will have individual footprints in our framework, which can help personalize trust establishment. For instance, novice users might be supported with "why and how information" [5] on the operational level, while experienced users who are already familiar with overtaking maneuvers issued by their vehicle receive feedback only on the strategic level. Such differences might arise not only from different levels of expertise but also from age, gender, personality, or culture. In the end, HMI designs and methods can be targeted to the individual subtasks and users, allowing for personalized trust calibration.
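As a hypothetical illustration of footprints (our sketch; the capability data and cell names are made up, not measured properties of real systems), the set difference between two footprints directly identifies the subtasks in which the HMI should foster distrust:

```python
# Each framework cell is a (functional level, information-processing stage) pair.
Cell = tuple[str, str]

# Hypothetical capability footprints: the cells each system can handle.
level2_combo: set[Cell] = {          # ACC + LKAS used together
    ("operational", "perceive"),
    ("operational", "adapt"),
    ("tactical", "perceive"),
}
level3_function: set[Cell] = level2_combo | {
    ("tactical", "understand"),
    ("tactical", "predict"),
}

# Subtasks where the Level 2 combination merely appears to match Level 3:
# exactly here the HMI should foster distrust to prevent misuse.
distrust_cells = level3_function - level2_combo
print(sorted(distrust_cells))  # [('tactical', 'predict'), ('tactical', 'understand')]

# User footprints personalize feedback: e.g., novice users receive
# "why and how" information on more levels than experienced users.
def feedback_levels(experience: str) -> set[str]:
    if experience == "novice":
        return {"operational", "tactical", "strategic"}
    return {"strategic"}

print(feedback_levels("novice"))
```

Representing footprints as sets keeps both uses of the framework (vehicle comparison and user personalization) as simple set operations over the same grid.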
OVERTAKING

Operational
- Perceive: deviations of the other vehicle in either speed or lane position (yes/no + values); presence of oncoming traffic (yes/no).
- Understand: an overtaking maneuver is being initiated (yes/no).
- Predict: pulling out will be completed in x time units; pulling back in will initiate in y and be completed in z time units.
- Adapt: pull out further if the lateral lane position of the to-be-passed vehicle changes (distance units); adjust speed if the speed of the to-be-passed vehicle changes (speed units).

Tactical
- Perceive: observe the speed limit (speed units); distance to and velocity of oncoming traffic, if oncoming traffic = yes on the operational level (distance and speed units).
- Understand: the conditions are suitable for initiating the overtaking maneuver (yes/no).
- Predict: it is possible to finish the overtaking maneuver successfully (yes/no).
- Adapt: continue executing the overtaking maneuver, or abort the overtaking maneuver.

Strategic
- Perceive: road and lane widths (distance units); "No Overtaking" sign present (yes/no).
- Understand: this is a suitable road segment for overtaking (yes/no).
- Predict: the next suitable road segment for overtaking approaches in x distance units / y time units.
- Adapt: decision: the overtaking maneuver begins in x distance units / y time units.

Table 1. The driving task "overtaking on a rural road" is split into 12 subtasks via the framework. Where appropriate, each subtask contains a brief outline of the scale (binary vs. multi-valued) and the measures (time, distance, speed) involved.

Summary and Future Work
The framework we outline in this paper allows a detailed separation of vehicle tasks into subtasks, and HMI trust calibration for each of these subtasks, depending on the capabilities and level of competence of either the system or the human operator. The framework is an initial one and can, at this stage, not account for contextual differences (lane width, weather, the driver's cognitive state, etc.), nor does it take other road users and their actions into account. Nevertheless,
we consider this framework a good starting point and a
future expanded framework to be a potentially very
valuable tool for HMI design in automated vehicles (and
perhaps other automated systems). The framework and
associated approach also show that trust and distrust
needn’t always be at opposite ends and that both –
when sensibly placed – can lead to more certainty,
safety, and, in turn, the coveted overall acceptance of a
technology, when the system’s capabilities exactly
match the human's expectations of the system.
Acknowledgements
This paper is partly based on discussions from the
Dagstuhl Seminar 16262 (www.dagstuhl.de/16262).
The financial support by the Austrian Science Fund
(FWF): I 2126-N15 is gratefully acknowledged.
References
1. S. Parkin. 2016. Learning to Trust a Self-Driving Car. Retrieved August 2, 2016 from http://www.newyorker.com/tech/elements/learning-to-trust-a-self-driving-car
2. SAE J3016_201401. 2014. Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.
3. D. A. Dickie and L. D. Boyle. 2009. Drivers' Understanding of Adaptive Cruise Control Limitations. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 10.
4. S. M. Casner, E. L. Hutchins, and D. Norman. 2016. The Challenges of Partially Automated Driving. Communications of the ACM. 59, 5.
5. F. Ekman, M. Johansson, and J. L. Sochor. 2016. Creating Appropriate Trust for Autonomous Vehicle Systems: A Framework for HMI Design. Proceedings of the 95th Annual Meeting of the Transportation Research Board.
6. E. J. de Visser, M. Cohen, A. Freedy, and R. Parasuraman. 2014. A Design Methodology for Trust Cue Calibration in Cognitive Agents. In International Conference on Virtual, Augmented and Mixed Reality.
7. C. Clases. 2016. Vertrauen [Trust]. In M. A. Wirtz (ed.), Dorsch – Lexikon der Psychologie [Dorsch – Encyclopedia of Psychology]. Retrieved July 28, 2016 from https://portal.hogrefe.com/dorsch/vertrauen/
8. J. D. Lee and K. A. See. 2004. Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society. 46, 1, 50-80.
9. R. C. Mayer, J. H. Davis, and F. D. Schoorman. 1995. An Integrative Model of Organizational Trust. Academy of Management Review. 20, 3, 709-734.
10. P. M. Fitts et al. 1951. Human Engineering for an Effective Air-Navigation and Traffic-Control System. National Research Council, Washington, DC.
11. D. A. Norman. 1990. The Problem with Automation: Inappropriate Feedback and Interaction, not Over-Automation. Philosophical Transactions of the Royal Society of London. 585-593.
12. R. Parasuraman and V. Riley. 1997. Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society. 39, 2, 230-253.
13. B. M. Muir. 1994. Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics. 37, 11, 1905-1922.