
A Framework for Analyzing and Calibrating Trust in Automated Vehicles

Authors:
  • Alexander G. Mirnig, Center for Human-Computer Interaction, Christian Doppler Laboratory “Contextual Interfaces” & Department of Computer Sciences, University of Salzburg, 5020 Salzburg, Austria (firstname.lastname@sbg.ac.at)
  • Philipp Wintersberger, CARISSMA, University of Applied Sciences Ingolstadt, 85049 Ingolstadt, Germany (philipp.wintersberger@carissma.eu)
  • Christine Sutter, Ergonomics & System Design, Institute of Ergonomics & Human Factors, Mechanical Engineering, Technische Universität Darmstadt, 64287 Darmstadt, Germany (c.sutter@iad.tu-darmstadt.de)
  • Jürgen Ziegler, Interactive Systems Group, University of Duisburg-Essen, 47057 Duisburg, Germany (juergen.ziegler@uni-due.de)

Published in: Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’16), October 24–26, 2016, Ann Arbor, MI, USA. ACM 978-1-4503-4654-2/16/10. http://dx.doi.org/10.1145/3004323.3004326

Abstract

When predicting the traffic of the future and the acceptance of automated vehicles, we often like to assume that one of the major challenges will be to foster overall trust in automated vehicles for effective and safe mixed-traffic operations. In this paper, we propose a more faceted viewpoint and argue for the benefits of, and provide an initial framework for, calibrated trust by fostering trust and distrust in automated vehicles. If drivers know exactly what their vehicle is and is not capable of, then they are more likely to react properly and be prepared when handover requests or other unexpected circumstances occur.
Author Keywords
Accessible Computing; Assistive Technology;
Automated Vehicles.
ACM Classification Keywords
H.5.2. [Information interfaces and presentation (e.g.,
HCI)]: User Interfaces; J.4 [Computer Applications]:
Social and Behavioral Sciences - Psychology.
Introduction and Empirical Background
In contrast to today’s highly complex automated
systems, which are mainly controlled by experts
(airplanes, power plants, etc.), the operators of the
future will be the much more diverse class of
consumers. From a Human Factors perspective, trust in
technical systems and their trustworthiness is a major
challenge, and a “... paradox facing auto engineers:
how to design self-driving cars that feel trustworthy
while simultaneously reminding their occupants that, no
matter how pristine a given model’s safety record, no
driver - human or artificial - is perfect. How, in other
words, to free drivers from the onus of driving, while
burdening them with the worry that, at any moment,
they will need to take back control.” [1]. The quote
refers to semi-automated vehicles at SAE-Level 3
(where the driver will be occasionally and at short
notice required to take over to ensure safety) or lower
levels of automation [2]. Semi-automated vehicles at
Level 3 might be among the first automated systems
available for the public that enable drivers to be in
charge of complex computer systems in highly safety-
critical environments. The world’s first fatality in semi-automated driving (the accident with Tesla’s Autopilot system on May 7, 2016) can be seen as a trust issue (overtrust in this case), as the driver may not have been aware of the system’s limits and/or did not monitor the environment carefully enough to compensate for the system’s failure. This is already an issue in lower levels of automation. Dickie and Boyle [3] showed that a large group of drivers were not aware of
the boundaries of their adaptive cruise control (ACC)
system, and used it in situations for which it was not
suitable. Trust is one of the most critical and widely
debated factors for user acceptance of automated
functions in vehicles, ranging from more restricted
driver assistance (in Level 0 to 3 vehicles) to fully
automated vehicles (Level 5). The factors influencing
users’ trust in automated vehicles cover a wide range
of aspects, not all of them related to the vehicle itself
and the reliability and safety of its operation, such as
the credibility of the engineers who built the car [4].
In accordance with Ekman [5] and de Visser [6], trust can be defined as follows: Trust is a relation between at least two agents (trustor and trustee, see Clases [7]) expressing the trustor’s expectation that the other agent (the trustee) will help achieve the trustor’s goals in a situation characterized by uncertainty and vulnerability.
Depending on the agents participating in a trust
relation, we can distinguish between user-user trust
(also known as interpersonal trust), user-system trust,
and system-system trust. Overtrust occurs when the
expectation in the other agent is higher than that
agent’s capabilities to help achieve the respective goal,
and undertrust occurs when the expectation in the
other agent is lower than that agent’s capabilities to
help achieve the respective goal. A trust relation in
which neither over- nor undertrust occurs is called
calibrated trust. Accordingly, trust calibration is the
process of balancing user trust to the required level.
Apart from that, distrust is a relation between at least
two agents expressing an expectation that an agent will
not (or not to a sufficient degree) help achieve another
agent’s goals in a situation characterized by uncertainty
and vulnerability. In the present paper, we focus on
user-system trust and its application to vehicle users of
semi- and fully automated vehicles.
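To make these definitions concrete, the following minimal sketch (our own illustration, not part of the cited definitions; the function name, scores, and tolerance are hypothetical) expresses over-, under-, and calibrated trust as a comparison between the trustor’s expectation and the trustee’s actual capability with respect to a given goal.

```python
# Illustrative sketch only: expectation and capability are abstracted to
# scores in [0, 1] for a single goal; the paper does not prescribe a metric.

def classify_trust(expectation: float, capability: float,
                   tolerance: float = 0.1) -> str:
    """Classify a trust relation for one goal.

    expectation: how strongly the trustor expects the trustee to help
                 achieve the goal.
    capability:  how well the trustee can actually help achieve the goal.
    """
    if expectation > capability + tolerance:
        return "overtrust"      # expectation exceeds actual capability
    if expectation < capability - tolerance:
        return "undertrust"     # expectation falls short of actual capability
    return "calibrated trust"   # expectation matches capability

# Example: a driver expects more from the automation than it can deliver.
print(classify_trust(expectation=0.9, capability=0.5))  # -> overtrust
```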
Related Work
To define user-system trust, Lee and See [8] adapted the interpersonal trust definition by Mayer et al. [9] (as summarized by Ekman et al. [5]):
“trust is built on the possibility to observe the system’s
behavior (performance), understand the intended use
of the system (purpose), as well as understand how it
makes decisions (process)”. Since Paul Fitts’s [10] suggestion that humans are poor at monitoring automated systems (the HABA-MABA principle), the prevailing idea was to take humans completely out of the loop, until Norman [11] argued that many prominent catastrophes were more the result of missing or inappropriate feedback than of pure human error. Parasuraman
and Riley [12] introduced the terms use, misuse and
disuse as trust-based modes of interaction with
automated systems and Muir [13] called the process of
establishing appropriate trust “calibration of trust”.
Ekman et al. [5] presented a theoretical framework for
trust in vehicles and stated that “a holistic approach is necessary, as trust development starts long before a user’s first contact with the system and continues long thereafter”, and de Visser et al. [6] proposed a framework for trust cue calibration from a more abstract perspective.
Motivation
Trust in automated systems is a multifaceted and hard-to-predict phenomenon. In automated driving, we often strive towards acceptance and trustworthiness regarding automated vehicles. However, like any system, automated vehicles are fallible, a trait they share with humans. We consider the increasing automation of vehicles a cooperative context, where both agents have certain strengths and weaknesses and one compensates for the other. This entails actually knowing what those weaknesses are and not blindly trusting in capabilities that might not exist. Thus, there are situations where distrust in automation is just as valuable as trust; it all depends on the task and its context, and such distrust is not necessarily contrary to a disposition of overall trust in the automated system.
For a better understanding of user-system trust we see
a need for a more systematic analysis of trust-related
factors which can help designers and developers to
understand the issues and to explore the design space
of potential trust-supporting functions in a more
structured way. In the following, we introduce a
framework that aims to provide such an analysis and
design space by defining different levels of function
automation as well as different levels of information
processing required for a successful user-system-
environment interaction. We specifically want to
contrast undertrust with distrust. Undertrust implies an expectation that falls short of an agent’s actual capabilities. Distrust does not carry such an implication, as distrust can be warranted. This should allow for a more detailed
analysis of driving tasks and decisions regarding the
capabilities of executing each subtask. We can then
design for calibrated trust via specifically targeted HMI
design solutions by evoking either trust or distrust, so
that neither over- nor undertrust occurs.
Initial Framework
Starting with the functional levels, the operational level
is the lowest level of driving functions and includes
individual and elementary driving tasks, such as
braking, turning the wheel, checking the mirror(s), etc.
The tactical level is the middle level of driving
functions. Tasks on the tactical level are usually
referred to as maneuvers and consist of several
operational tasks. The strategic level is the highest
level reserved for overall tasks. The most common
example of a driving task on the strategic level is route
planning. For modelling the interaction between user,
system and environment we follow a bottom-up
approach. The perceptual level (“perceive”) is the lowest and comprises the registration and (bottom-up) transfer of sensory information for further processing. Information processing for a successful user-system-environment interaction can either aim to explain (“understand”) certain actions or events, to anticipate (“predict”) certain outcomes or consequences, or to “adapt” to changes in the user-system-environment relationship. In their design methodology for trust cue calibration, de Visser et al. [6] pursued a slightly different approach using perception, comprehension, projection, decision, and execution. In our framework, we introduce adaptation as a third challenge in the user-system-environment interaction. Adaptation refers to the flexibility of an information processing system to rapidly compensate for and adapt to changes.

Framework Overview
Levels of function (y-axis, table rows): operational, tactical, strategic (from lowest to highest).
Levels of information processing (x-axis, table columns): perceive, understand, predict, adapt.
Trust calibration: Each driving task can be split up into subtasks via the framework. Depending on whether the human or the machine is better suited to complete a certain subtask, HMI design should either foster trust (in case the machine is more capable) or distrust (in case the human is more capable). When the expectations regarding achieving the respective goal between trustor and trustee match, a state of calibrated trust is achieved.

The framework works in
three steps: In step one, individual vehicle tasks are
split up into parts via functional levels on the y-axis
and stages of information processing on the x-axis. This
covers the system side of the framework. In step two,
each individual driving task part from step one is then
matched to trust-relevant factors for the human
individual. This covers the human side of the
framework. In step three, the trust-relevant factors can
then be targeted individually with HMI design solutions,
depending on whether trust or distrust in the vehicle’s
capabilities regarding each subtask should be fostered
in an implementation.
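As a rough illustration of these three steps (our own sketch; the encoding, dictionary keys, and example values are hypothetical and not taken from the paper), the framework can be thought of as a grid of functional levels by information-processing levels, where each cell is assigned to the more capable agent and thereby to an HMI goal.

```python
# Sketch of the three-step procedure as a data structure (illustrative only).

FUNCTIONAL_LEVELS = ("operational", "tactical", "strategic")        # rows
PROCESSING_LEVELS = ("perceive", "understand", "predict", "adapt")  # columns

def hmi_targets(subtask_allocation: dict) -> dict:
    """Step 3: derive an HMI design goal per subtask.

    subtask_allocation maps (functional_level, processing_level) cells from
    steps 1 and 2 to whichever agent is better suited ("machine" or "human").
    Foster trust where the machine is more capable, distrust where the human is.
    """
    return {
        cell: ("foster trust" if agent == "machine" else "foster distrust")
        for cell, agent in subtask_allocation.items()
    }

# Hypothetical excerpt of an overtaking task (allocations are illustrative):
overtaking = {
    ("operational", "perceive"): "machine",   # e.g., detecting lane/speed deviations
    ("operational", "understand"): "human",   # e.g., recognizing oncoming traffic
    ("tactical", "predict"): "human",         # e.g., judging whether the maneuver can finish
}
print(hmi_targets(overtaking))
```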
Example: Overtaking
In Table 1, we provide an example of how a driving task, here overtaking on a rural road, can be split up into subtasks with the framework. Each subtask
contains a number of decisions to be made and/or
actions to be executed, depending on the levels of
function and information processing. We can now
decide individually for each subtask whether a human or a machine agent should perform it primarily. Currently
available advanced driver assistance systems (ADAS)
work very well on the perceptual level (object
perception), and complement and enhance human
capabilities or substitute the human agent on the
operational, tactical and strategic level (e.g., forward
collision warning, adaptive cruise control, traffic sign
recognition). However, ADAS often fail to correctly
“understand” the perceived object (object recognition),
and have to be monitored by the human agent.
Following the suggested steps allows us to determine footprints of the involved entities. A footprint in our framework is a subset of the fields/subtasks that fulfills a condition like “supported by the automation” or “information required for novice users”. This allows using the framework for different purposes: a manufacturer or vehicle designer can evaluate the footprint of a specific vehicle’s capabilities on the individual high-level tasks. A problem with vehicle/ADAS usage nowadays is that some users utilize combinations of Level 2 ADAS in the same way as they would interact with a Level 3 vehicle. For instance, a vehicle with a combination of adaptive cruise control (ACC) and lane keeping assist systems (LKAS) might in some situations perform similarly to a full Level 3 automated driving function (highway driving) but will definitely fail in others (ACC fails in narrow curves or cannot detect pedestrians). Comparing both vehicles will result in completely different footprints in our framework. Designing vehicle HMI with
respect to such differences could thus prevent misuse
and foster human distrust in subtasks the automation
cannot cope with. Second, not only vehicles but also
users/personalities will have an individual footprint on
our framework that can help to personalize trust
establishment. For instance, novice users might be
supported with “why and how information” [5] on an
operational level, while experienced users who are already familiar with overtaking maneuvers issued by their vehicle might receive feedback only on the strategic level. Such differences might arise not only from different levels of expertise but also from age, gender, personality, or culture. In the end, HMI designs and
methods can be targeted to the individual subtasks and
users, allowing for personalized trust calibration.
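The footprint idea can likewise be sketched in code (our own illustration; the capability scores, threshold, and the two example systems are hypothetical): comparing the footprint of a Level 2 ACC+LKAS combination with that of a Level 3 highway pilot highlights exactly those subtasks in which the HMI should deliberately foster distrust to prevent Level-3-style misuse of the Level 2 vehicle.

```python
# Illustrative footprint comparison over the framework grid (sketch only).

GRID = [(f, p) for f in ("operational", "tactical", "strategic")
               for p in ("perceive", "understand", "predict", "adapt")]

def footprint(capabilities: dict, threshold: float = 0.7) -> set:
    """Cells of the grid that a system handles reliably enough (score >= threshold)."""
    return {cell for cell in GRID if capabilities.get(cell, 0.0) >= threshold}

# Hypothetical capability scores, restricted to a few cells for brevity.
level2_acc_lkas = {("operational", "perceive"): 0.9, ("operational", "adapt"): 0.8,
                   ("tactical", "understand"): 0.4}
level3_pilot = {("operational", "perceive"): 0.9, ("operational", "adapt"): 0.9,
                ("tactical", "understand"): 0.8, ("tactical", "predict"): 0.8}

# Subtasks covered by the Level 3 system but not by the Level 2 combination:
# these are the cells where HMI design should foster distrust for the Level 2 vehicle.
print(sorted(footprint(level3_pilot) - footprint(level2_acc_lkas)))
```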
OVERTAKING | Perceive | Understand | Predict | Adapt
Operational | Deviations of the other vehicle in speed or lane position (yes/no + values) | Presence of oncoming traffic (yes/no) | Pulling out will be completed in x time units; pulling back in will initiate in y and be completed in z time units | Further pull out if the lateral lane position of the to-be-passed vehicle changes (distance units); adjust speed if the speed of the to-be-passed vehicle changes (speed units)
Tactical | Observe speed limit (speed units) | Distance to and velocity of oncoming traffic, if oncoming traffic = yes on the operational level (distance and speed units) | It is possible to finish the overtaking maneuver successfully (yes/no) | Continue executing the overtaking maneuver, or abort the overtaking maneuver
Strategic | Road and lane widths (distance units) | “No Overtaking” sign present (yes/no) | Next suitable road segment for overtaking approaches in x distance units / y time units | Decision: overtaking maneuver begins in x distance units / y time units
Table 1. The driving task of overtaking on a rural road is split into 12 subtasks via the framework. Where appropriate, each subtask contains a brief outline of the scale (binary vs. multi-valued) and the measures (time, distance, speed) involved.

Summary and Future Work
The framework we outline in this paper allows a detailed separation of vehicle tasks into subtasks and HMI trust calibration for each of these subtasks, depending on the capabilities and level of competence of either the system or the human operator. The
framework is an initial one and cannot, at this stage, account for contextual differences (lane width, weather, the driver’s cognitive state, etc.), nor does it take other road users and their actions into account. Nevertheless,
we consider this framework a good starting point and a
future expanded framework to be a potentially very
valuable tool for HMI design in automated vehicles (and
perhaps other automated systems). The framework and
associated approach also show that trust and distrust need not always be at opposite ends and that both, when sensibly placed, can lead to more certainty, safety, and, in turn, the coveted overall acceptance of a technology when the system’s capabilities exactly match the human’s expectations of the system.
Acknowledgements
This paper is partly based on discussions from the
Dagstuhl Seminar 16262 (www.dagstuhl.de/16262).
The financial support by the Austrian Science Fund
(FWF): I 2126-N15 is gratefully acknowledged.
References
1. S. Parkin. 2016. Learning to Trust a Self-Driving Car. Retrieved August 2, 2016 from http://www.newyorker.com/tech/elements/learning-to-trust-a-self-driving-car.
2. SAE J3016_201401. 2014. Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.
3. D. A. Dickie and L. D. Boyle. 2009. Drivers' Understanding of Adaptive Cruise Control Limitations. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 10.
4. S. M. Casner, E. L. Hutchins, and D. Norman. 2016. The Challenges of Partially Automated Driving. Communications of the ACM. 59, 5.
5. F. Ekman, M. Johansson, and J. L. Sochor. 2016. Creating Appropriate Trust for Autonomous Vehicle Systems: A Framework for HMI Design. Proceedings of the 95th Annual Meeting of the Transportation Research Board.
6. E. J. de Visser, M. Cohen, A. Freedy, and R. Parasuraman. 2014. A Design Methodology for Trust Cue Calibration in Cognitive Agents. In International Conference on Virtual, Augmented and Mixed Reality.
7. C. Clases. 2016. Vertrauen [trust]. In M. A. Wirtz (ed.), Dorsch Lexikon der Psychologie [Dorsch encyclopedia of psychology]. Retrieved July 28, 2016 from https://portal.hogrefe.com/dorsch/vertrauen/
8. J. D. Lee and K. A. See. 2004. Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society. 46, 1, 50-80.
9. R. C. Mayer, J. H. Davis, and F. D. Schoorman. 1995. An Integrative Model of Organizational Trust. Academy of Management Review. 709-734.
10. P. M. Fitts et al. 1951. Human Engineering for an Effective Air-Navigation and Traffic-Control System. National Research Council, Washington DC.
11. D. A. Norman. 1990. The Problem with Automation: Inappropriate Feedback and Interaction, not Over-Automation. Phil. Trans. of the Royal Society of London. 585-593.
12. R. Parasuraman and V. Riley. 1997. Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society. 39, 2, 230-253.
13. B. M. Muir. 1994. Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics. 37, 11, 1905-1922.