Presenting system uncertainty in automotive UIs for
supporting trust calibration in autonomous driving
Tove Helldin
University of Skövde
54128, Skövde,
Sweden
+46 500 44 83 83
tove.helldin@his.se
Göran Falkman
University of Skövde
54128, Skövde,
Sweden
+46 500 44 83 35
goran.falkman@his.se
Maria Riveiro
University of Skövde
54128, Skövde,
Sweden
+46 500 44 83 43
maria.riveiro@his.se
Staffan Davidsson
Volvo Car Corporation
405 31, Gothenburg,
Sweden
+46 31 59 98 24
sdavidss@volvocars.com
ABSTRACT
To investigate the impact of visualizing car uncertainty on drivers’
trust during an automated driving scenario, a simulator study was
conducted. A between-group design experiment with 59 Swedish
drivers was carried out where a continuous representation of the
uncertainty of the car’s ability to autonomously drive during snow
conditions was displayed to one of the groups, but omitted
for the control group. The results show that, on average, the group
of drivers who were provided with the uncertainty representation
took control of the car faster when needed, while they were, at the
same time, the ones who spent more time looking at other things
than on the road ahead. Thus, drivers provided with the
uncertainty information could, to a higher degree, perform tasks
other than driving without compromising driving safety. The
analysis of trust shows that the participants who were provided
with the uncertainty information trusted the automated system less
than those who did not receive such information, which indicates
a more proper trust calibration than in the control group.
Categories and Subject Descriptors
H.1.2 [User/Machine Systems]: Human factors
General Terms
Design, Human Factors, Experimentation
Keywords
Uncertainty visualization, trust, automation, driving, acceptance.
1. INTRODUCTION
Technological advances have led to the development of numerous
driver assistance systems such as adaptive cruise control, lane
departure warning, collision avoidance, automatic parking and
driver drowsiness detection systems. Experiments with fully
autonomous cars have also been carried out, which provides us
with a glimpse of what the future might hold. The purposes of
developing such support systems are to make driving safer, easier,
more relaxing and more enjoyable. However, such goals can only
be achieved if the driver feels comfortable enough to hand over
control to the automation and if a good cooperation between the
driver and the automation can be achieved. Studies from other
domains, such as the aviation domain, have shown that the
anticipated positive effects of automation might be diminished
due to human-automation cooperation related problems, such as
automation misuse and disuse [1, 2], automation surprises and
mode confusion [3, 4], reduced situation awareness [5],
complacency as well as over reliance on the automation [6]. To
reduce the possible negative effects of automation, while at the
same time reinforce the positive ones and promote a safe and
appropriate usage of the automation, several researchers have
highlighted the importance of informing the human operators of
the strengths and limitations of the automated systems used, as
well as the continuous state of the automation (see for instance [7-
11]). For example, in the study by Stanton and McCaulder [12], it
became evident that the drivers had insufficient knowledge of the
limitations of the adaptive cruise control system, resulting in
collisions due to the drivers’ inappropriate levels of trust in the
automated system. This finding is in line with the research
reported by Dzindolet et al. [13] and McGuirl and Sarter [7] where
it was found that operators who were provided with continuous
feedback regarding the performance of the automated aid had
more appropriate trust in the aid than operators who were not
given such information.
We argue that more research is needed to evaluate the
effectiveness of providing feedback on changes in the automated
system’s capability during autonomous driving. To the authors’
knowledge, no research has addressed how to convey the limits
and performance of automatically driven cars to their drivers as a
means to achieve appropriate trust. As such, the objective of this
study was to evaluate the effects of visualizing a continuous
representation of car uncertainty on the drivers’ trust in the
automatic support system used. First, we wanted to assess if such
visualization would make the drivers able to more appropriately
calibrate their trust in the system while at the same time making
them aware of the limitations of the system. Secondly, since one of the motivations for introducing automation in cars is to
enable drivers to feel more relaxed while driving and even to
perform other things while travelling, we also wanted to
investigate if displaying the uncertainty representation would
result in a higher number of drivers performing other tasks than
driving during the test scenario.Finally, we also wanted to test if
the drivers being presented with the uncertainty information
would look away more from the road than the drivers not being
provided with this information, while at the same time being
better prepared to take control over the car when/if needed (i.e.
requiring less time to take manual control over the car).
The paper is structured as follows: section 2 provides information
regarding advances within the area of uncertainty and system
reliability visualization. Section 3 presents the study setup,
whereas section 4 reports on the study findings and section 5
presents a brief analysis of the results. Section 6 discusses the
results obtained whereas the conclusions and ideas for future work
are found in section 7.
2. RELATED WORK
The visualization of uncertainty in the context of automatic
driving has been recently studied by Beller et al. [14]. The aim of
this study was to evaluate whether communicating when the car
was uncertain using a symbol (a face with an uncertain
expression) improved the driver-automation interaction. A driving
simulator experiment varying the level of uncertainty with 28
participants was conducted. The results show that the presentation
of uncertainty information increased the time to collision in cases
of automation failure, that situation awareness was improved and
that automation with the uncertainty symbol received increased
acceptance and higher trust ratings. These positive results
regarding the visualization of information related to smart systems
in cars seem to coincide with two previous studies, i.e., Verberne
et al. [15] and Seppelt and Lee [8].
Seppelt and Lee [8] investigated whether a visual representation of the adaptive cruise control (ACC) behavior promotes appropriate reliance and supports effective transitions between manual and
ACC control. Twenty-four participants were recruited to drive in
two different situations, with different failure types. In traffic
conditions, the participants relied more appropriately on ACC
when the information about the ACC was present. Moreover, it
promoted faster and more consistent braking responses and showed
additional positive effects in other traffic situations. The authors
suggest that providing drivers with continuous information about
the state of the automation is a promising alternative to providing
warnings.
The work presented by Verberne et al. [15] investigates whether descriptions of three ACCs with different automation levels, which either shared the driver's goals or did not, affected the trustworthiness and acceptability of those systems. A
driving experiment with 57 participants was carried out. The
results show that ACCs that took over driving tasks while
providing information were more trustworthy and acceptable than
ACCs that did not provide information.
Several relevant works regarding the influence of uncertainty
visualization on decision-making can be found in other research
areas, such as the military domain. For example, Finger and
Bisantz [16] studied the use of blended and degraded icons to
represent uncertainty regarding the identity of a radar contact as
hostile or friendly. The first part of the study showed that
participants could sort, order and rank five different sets of icons
conveying different levels of uncertainty. In the second part of the
study, three of the pairs of icons were used in an application in
which participants should identify the status of contacts as
friendly or hostile. Three conditions were studied: with degraded
icons and probabilities, with non-degraded icons and probabilities
and with degraded icons only. The results demonstrate that
participants using displays with only degraded icons performed better on some measures, and as well on the remaining measures, compared with the
other tested conditions. Thus, the use of distorted or degraded
images may be a viable alternative to convey situational
uncertainty.
Wang et al. [17] examined the effects of presenting the aid reliability on trust and reliance in a combat identification (CID)
scenario. Twenty-four participants carried out a simulated CID
task, half of whom were told the reliability level. The results
show that response bias varied more appropriately with the aid
reliability when it was disclosed than when not, and that trust in
aid feedback correlated with belief in aid reliability. The authors
highlight that to engender appropriate reliance on CID systems,
users should be made aware of system reliability.
3. METHOD
3.1 Participants
A total of 59 participants (31 male, 28 female) between 28 and 58 years old (4 between 21–30 years, 22 between 31–40, 25 between 41–50 and 8 between 51–60) with an average age of 41.2 years took part in the simulator experiment. The participants were
selected from a population of 488 Volvo employees, mostly non-
technical personnel, none of whom was involved in the development
of functionality for autonomous driving or the implementation of
the driver’s information module (DIM). The only prerequisite for
taking part in the study was that the participant had a driver’s
license.
Each participant was randomly assigned to a display condition. A
balanced Latin square design was used in order to minimize the
effects of participants driving early in the morning, directly after
lunch and late in the afternoon. This led to 30 participants (16
males and 14 females) driving with the added uncertainty
information and 29 participants in the control group.
3.2 The DIM
Two interfaces were designed: one with and one without the uncertainty representation. Figure 1 shows a sketch of the DIM
design including: the speedometer, the engine speed, the fuel
level, the outside temperature, the current time, the current gear
used, the placement of the steering wheel during the autonomous
drive as well as the ability of the automation to maneuver the car.
During the experiments with the control group, the information regarding the automation ability was removed. Figure 2 depicts two states of the ability of the automation, from high ability (left figure) to low ability (right figure), as indicated by the color/transparency of the 7 levels, as well as the red arrow, representing the threshold below which the ability of the automation can no longer be guaranteed.
Figure 1 - Sketch of the DIM used during the experiments
(including the representation of uncertainty).
Figure 2 - Graphical representation of the ability of the car to
drive autonomously, ranging from 7 (very high ability, left
figure), to 1 (no ability, right figure). The red marking
indicates the threshold for when the performance of the automated driving system can no longer be guaranteed.
The DIM was placed in the instrument cluster of the car in front
of the driver. The design of the interfaces was carried out in
collaboration with an expert HMI designer at Volvo Cars. As such,
we argue that its design is similar to other interfaces used in
Volvo cars.
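For illustration, a minimal Python sketch of how a continuous ability estimate could be mapped to such a 7-level indicator with a threshold is given below. The [0, 1] scale, the threshold level and all names are assumptions made for the example; this is not the DIM implementation used in the study.

```python
# Hypothetical sketch of mapping a continuous ability estimate to a 7-level
# indicator with a red-arrow threshold, as described above. The [0, 1] scale,
# the threshold level and all names are assumptions, not the study's DIM code.

def ability_to_level(ability: float, num_levels: int = 7) -> int:
    """Map an ability estimate in [0, 1] to a level from 1 (no ability)
    to num_levels (very high ability)."""
    ability = min(max(ability, 0.0), 1.0)        # clamp to the valid range
    return max(1, round(ability * num_levels))   # always show at least level 1

def ability_guaranteed(level: int, threshold: int = 2) -> bool:
    """False once the level drops below the red-arrow threshold, i.e. when
    autonomous driving can no longer be guaranteed (threshold is assumed)."""
    return level >= threshold

print(ability_to_level(0.9), ability_guaranteed(ability_to_level(0.9)))    # 6 True
print(ability_to_level(0.05), ability_guaranteed(ability_to_level(0.05)))  # 1 False
```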
3.3 Procedure and questionnaire
The participants were first informed of the purpose and setup of
the study. Thereafter, all participants were allowed to drive the
car simulator in manual mode for about 3–5 minutes so as to get
acquainted with the simulator. Directly after the training session,
the participants were informed of the prerequisites of the test
session: that the car could drive autonomously, but that the
performance of the automatic driving system was coupled to the
weather conditions. The participants were also informed that they
could at any time take control over the car by
steering/braking/giving gas to the car in accordance with their own
assessment of the appropriateness of using the system. The DIM
was explained to both of the groups; however, the uncertainty
representation (see Figure 1) was presented and explained to only
one of the groups (hereafter “with uncertainty information
group”). Before the start of the test session, the participants were
informed that there were newspapers and sweets at their disposal
in the passenger seat if they so pleased. Thereafter, the 9-minute
test session started.
After the test session, the participants were asked to fill out a
questionnaire about their trust in the system, using a modified
version of the trust in automation scale [18]. The participants
answered seven questions such as “I am confident in the system”
and “I can trust the system” using a seven point Likert scale
ranging from 1 (fully disagree) to 7 (fully agree). The instructions
given to the drivers and the questionnaire can be found in the
appendix.
3.4 Simulator
The experiments were carried out at the Human Machine
Interaction (HMI) laboratory at Volvo Car Corporation,
Gothenburg, Sweden. The laboratory contains several integrated
systems: a driving simulator and a fully functioning cockpit (see
Figures 3 and 4).
Figure 3 - Driving simulator, Volvo Car Corporation HMI lab.
Figure 4 - Overview of HMI lab
The participants drove the car simulator along a snow-covered two-lane countryside road, with a number of sharp turns, but with no
other traffic (see Figure 5). Due to the weather conditions, the
intensity of snowing varied from 0% to 100% where the
maximum amount of snowing is illustrated in Figure 6, and the
minimum amount (0%) corresponds to a clear sky with full
visibility. The snowing intensity varied according to Figure 7. The
direction and speed of the car were controlled by the automation.
The speed of the car was independent of the snowing intensity
(see Figure 7), but did change at times according to road
conditions (sharp turns and hilltops). When the visibility was the
worst, the automation could no longer maneuver the car. At this
moment, the driver had to act accordingly by taking control of the
car (either braking or steering). Following the scenario used in the
experiments, the driver had to take manual control over the car in
a slight curve. At this event, the automation stopped working
completely, meaning that no gas or steering was provided, without
giving any warning to the driver (apart from the graphical
representation in Figure 2 and switching to the manual DIM).
Figure 5 - The route used during the tests
Figure 6 - Picture displaying the maximum amount of snowing
Figure 7 - The snowing intensity during the experiments
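As a plain illustration of the hand-over logic described above, the following Python sketch shows how control could pass from the automation to the driver once the snowing intensity reaches its maximum. The sampling format, the 1.0 trigger value and all names are assumptions, not the simulator's actual implementation.

```python
# Illustrative sketch of the scenario logic: the automation controls the car
# until the snowing intensity reaches its maximum, at which point it releases
# control without any warning beyond the DIM change. Names and the sampling
# format are assumptions made for illustration only.

def controller_timeline(snow_samples):
    """snow_samples: iterable of (time_s, intensity) with intensity in [0, 1].
    Returns (time_s, controller) pairs; the controller is 'automation' until
    the first sample at maximum intensity and 'driver' from then on."""
    timeline, disengaged = [], False
    for t, intensity in snow_samples:
        if intensity >= 1.0:
            disengaged = True        # no gas or steering is provided any more
        timeline.append((t, "driver" if disengaged else "automation"))
    return timeline

print(controller_timeline([(0, 0.2), (60, 0.7), (120, 1.0), (180, 0.9)]))
```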
3.5 Collected data
Logs from each simulator session were recorded. The quantitative
data thus collected corresponds to values from steering angle,
brake, acceleration, look away time and weather conditions.
Cameras were used to record all the sessions.
In addition to the quantitative data, qualitative data was collected
through observing the participants. The data collected included the extent to which the driver had his/her hands on the steering wheel, whether the participant stayed on the road after take-over and whether the
participant read the newspapers or ate the sweets provided. Time
to take-over (TTO) was calculated by analyzing the logged data,
measuring the time between the snowing intensity reaching its
maximum (1.0) and a significant change in either braking or
steering (indicating that the driver had taken control over the
vehicle).
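For illustration, the following Python sketch shows how such a TTO measure could be derived from logged data. The log format, field names and the thresholds for a "significant" brake or steering input are assumptions made for the example, not the study's actual logging schema.

```python
# A minimal sketch of deriving time to take-over (TTO) from simulator logs as
# described above: the delay between the snowing intensity reaching its maximum
# and the first significant brake or steering input. Field names and thresholds
# are assumptions, not the study's logging schema.

def time_to_take_over(log, brake_thresh=0.1, steer_thresh=5.0):
    """log: list of dicts with keys 'time', 'snow', 'brake', 'steer_angle'.
    Returns the TTO in seconds, or None if the driver never intervened."""
    t_max = next((r["time"] for r in log if r["snow"] >= 1.0), None)
    if t_max is None:
        return None
    for r in log:
        significant = r["brake"] > brake_thresh or abs(r["steer_angle"]) > steer_thresh
        if r["time"] >= t_max and significant:
            return r["time"] - t_max
    return None

example_log = [                     # fabricated samples for illustration only
    {"time": 0.0, "snow": 0.8, "brake": 0.0, "steer_angle": 0.0},
    {"time": 1.0, "snow": 1.0, "brake": 0.0, "steer_angle": 0.0},
    {"time": 2.9, "snow": 1.0, "brake": 0.4, "steer_angle": 1.0},
]
print(time_to_take_over(example_log))  # approximately 1.9
```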
4. RESULTS
Regarding time to take-over, the group provided with the
uncertainty representation needed 1.9 seconds to take control of
the car on average while the control group needed 3.2 seconds.
The individual results are shown in Figure 8 below.
Figure 8 - Individual results of time to take over
The differences between the two groups are summarized in
Figure 9 below.
Figure 9 - Summary of time to take over
The results were submitted to a one-way ANOVA. The analysis showed that, with 95% confidence, there is a statistically significant difference between the two groups [F(1, 57) = 5.62, p = 0.02].
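For readers who wish to reproduce this type of analysis, a minimal sketch using SciPy is given below; the data arrays are placeholders, not the TTO values measured in the study. (With two groups, a one-way ANOVA is equivalent to an independent-samples t-test.)

```python
# A sketch of a one-way ANOVA of the kind reported above, using SciPy.
# The arrays are placeholder TTO values (seconds), not the study data.
from scipy import stats

tto_uncertainty_group = [1.5, 2.1, 1.8, 2.3, 1.7]   # placeholder values
tto_control_group = [3.0, 3.5, 2.8, 3.4, 3.1]       # placeholder values

f_stat, p_value = stats.f_oneway(tto_uncertainty_group, tto_control_group)
df_within = len(tto_uncertainty_group) + len(tto_control_group) - 2
print(f"F(1, {df_within}) = {f_stat:.2f}, p = {p_value:.3f}")
```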
Regarding looking away from the road, the group provided with
the uncertainty representation, on average spent 18% of the
driving time looking at other things but the road, while the control
group was looking at other things 12% of the time on average. The
individual results are shown in Figure 10 below.
Figure 10 - Individual results of proportion of look-away time
The differences between the two groups are summarized in
Figure 11 below.
Figure 11 - Summary of proportion of look-away time
A one-way ANOVA showed that, with 95% confidence, there is a statistically significant difference between the two groups [F(1, 57) = 4.81, p = 0.03].
In addition to the proportion of the total time spent on looking at
other things than the road ahead, the number of times drivers
looked away for more than 2 seconds was counted, since this is
regarded as the limit for how long a driver can look away while
maintaining awareness of the situation ahead [19]. The group
provided with the uncertainty representation looked away for
more than 2 seconds 8.0 times on average, while the control group
looked away 5.0 times on average (see Figure 12 below).
Figure 12 - Summary of look-away periods > 2 seconds
A one-way ANOVA (α = 0.05) showed that there is no
statistically significant difference between the two groups with
respect to looking away from the road ahead for longer periods of
time [F(1, 57) = 2.54, p = 0.12].
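For illustration, the following sketch shows how the two gaze measures, the proportion of look-away time and the number of off-road glances longer than 2 seconds, could be computed from sampled gaze data. The uniformly sampled input format is an assumption made for the example.

```python
# A minimal sketch of the two gaze measures discussed above: proportion of
# driving time spent looking away from the road ahead and number of off-road
# glances longer than 2 seconds. The input format is an assumption.

def gaze_measures(off_road_flags, dt=0.1, long_glance_s=2.0):
    """off_road_flags: booleans sampled every dt seconds, True while the
    driver looks away from the road ahead. Returns (proportion of time
    looking away, number of glances longer than long_glance_s)."""
    proportion = sum(off_road_flags) / len(off_road_flags)
    glance_durations, run = [], 0
    for off_road in off_road_flags:
        if off_road:
            run += 1
        elif run:
            glance_durations.append(run * dt)   # an off-road glance just ended
            run = 0
    if run:
        glance_durations.append(run * dt)
    long_glances = sum(1 for g in glance_durations if g > long_glance_s)
    return proportion, long_glances

# a 10-second stretch containing a single 3-second off-road glance
flags = [False] * 40 + [True] * 30 + [False] * 30
print(gaze_measures(flags))  # (0.3, 1)
```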
Trust was assessed using the scale for trust in automated systems
developed by Jian et al. [18]. Participants of both groups answered
the questions after the driving exercise, using a seven point Likert
scale ranging from 1 (fully disagree) to 7 (fully agree). The
questions are listed in the appendix. The results are shown in
figures 13-14. The mean of the scores was used as an overall trust
score (as presented by Beggiato and Krems [10]). The average
trust value for the control group was 5.30, while the group with the uncertainty representation showed an average trust score of 4.89.
Reliability was measured using Cronbach’s alpha values. The
values obtained, 0.87 (with uncertainty representation) and 0.85
(control group) show a good internal consistency (0.8 ≤ α < 0.9).
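The following sketch illustrates this scoring procedure, the per-participant mean of the seven Likert items together with Cronbach's alpha, using placeholder responses rather than the questionnaire data collected in the study.

```python
# A sketch of the trust scoring described above: each participant's overall
# trust is the mean of the seven Likert items, and internal consistency is
# estimated with Cronbach's alpha. The response matrix is a placeholder.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = participants, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([        # placeholder answers to the seven 1-7 Likert items
    [5, 6, 5, 5, 6, 5, 5],
    [4, 4, 5, 4, 4, 4, 5],
    [6, 6, 6, 7, 6, 6, 6],
    [3, 4, 3, 3, 4, 3, 3],
])
trust_scores = responses.mean(axis=1)           # one overall trust score per participant
print(trust_scores.round(2), round(cronbach_alpha(responses), 2))
```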
Figure 13 - Whisker plot representation of the answers to the trust questionnaire for both groups (with and without uncertainty representation). Values are between 1 (min, fully
disagree) and 7 (max, fully agree).
Figure 14 - Answers to trust questions. The control group scores higher on all the questions but the first one (“I understand how the system works: its goals, actions and output”).
The analysis of the participants’ responses regarding the system’s trustworthiness (see questions in the appendix) shows that, on average, the control group perceived the car without uncertainty information as more trustworthy (mean = 5.30 vs. mean = 4.89).
5. ANALYSIS
The results show that presenting (un)certainty information results
in better prepared drivers in take-over situations. The difference in
look away times between the two groups manifested itself in that
of the 33 drivers that stayed on the road after take-over, 20 (61%)
were drivers provided with the uncertainty information.
Furthermore, the results show that drivers presented with
(un)certainty information look away from the road to a higher
degree. Although looking away more in terms of total time
compared to the control group, the drivers who were presented
with the uncertainty information did not look away for longer
periods of time more often.
Lastly, the collected qualitative data indicate that drivers who were presented with the uncertainty information were more inclined to perform tasks other than driving during the test scenario. Of the 15 drivers
that read the newspapers, 9 (60%) were from the test group. Of
the 21 drivers that ate the sweets, 11 (52%) were from the test
group. More importantly, of the drivers that read the papers and
drove off the road at take-over, only 20% were from the test
group. Of the 28 drivers that, to a lesser or greater extent, kept
their hand on the steering wheel during the test, 12 (43%) were
drivers provided with the uncertainty information. More importantly, of the drivers that kept their hands on the wheel and
drove off the road at take-over, only 20% were from the test
group.
To summarize, the results show that drivers provided with the
uncertainty information performed better in take-over situations
and they were are also more comfortable with performing other
tasks while driving, as compared to drivers without this
information.
6. DISCUSSION
Results from the study show that the drivers who were informed
of the car uncertainty were better prepared in take-over situations.
Also, these drivers had better calibrated their trust in the
automatic driving system, whereas the control group reported higher trust ratings despite the manual take-over needed in the scenario used in the test session. These findings are in line
with the work presented by McGuirl and Sarter [7] where the
participants who were informed of the system confidence were better able to appropriately calibrate their trust in the decision aid.
Even though the drivers with uncertainty information were better
prepared to take control of the car while, on average, spending
more time doing other activities, the results show that the trust scores for this group were lower than those of the group without the uncertainty information. The
findings reported in this paper are in contrast to the ones reported
by Beller et al. [14] and Seppelt and Lee [8], which recommend that providing drivers with continuous information about the automation is preferable to providing warnings, and that information about the automation increases trust and acceptance. A possible explanation
for our results can be found in Dzindolet et al. [13],
where the role of trust in automation reliance is studied. Their
findings suggest that participants initially considered automated
decision aids trustworthy and reliable, but, after observing the
automated aid make errors, participants distrusted even reliable
aids, unless an explanation was provided regarding why the aid
might err. Knowing why the aid might err increased trust in the
decision aid and increased automation reliance, even when the
trust was unwarranted. Thus, it should be further investigated if
the visual representation of the car uncertainty used in our study
should be complemented with additional information regarding
why the level of uncertainty was high.
Another representation of the car’s ability to autonomously drive
could have generated different results than the ones obtained.
According to Seppelt and Lee [8], not just any representation of
continuous information will enhance driving performance. In the
experiment presented in [8], it was concluded that the use of color
dilution to represent sensor degradation in a rain condition was
not an effective cue. Further, combining a graphical representation of uncertainty with haptic and/or auditory cues could result in better take-over times and allow for longer look-away times.
Individual differences in trust in automation and automation
reliance should be further explored. Lee and Moray [20, 21] found strong individual differences in automation use: some participants were prone to use manual control, others were prone
to use automation. As such, future work should include a further
analysis of the results in relation to the participants’ estimated
locus of control and driving styles.
Several limitations of the study should be mentioned. The driving
scenario and exercise might be considered very simple, but it was
designed to analyze the effect of uncertainty visualization on trust
and how the drivers would take over the control of the car when
automation could no longer guarantee a safe drive; thus, we tried to minimize other experimental variables that could affect the driving task and our analysis of trust and automation reliance. It might be that the simplicity of the scenario made some of the participants in the test group neglect the uncertainty representation and concentrate on the weather conditions instead.
Moreover, the uncertainty of the automation could have been
associated with parameters other than the weather, such as other contextual information, e.g., the traffic situation.
Regarding the validity of the study presented here, we would like to highlight that, during the design of the experiment, we ruled out extraneous variables that could affect our study of trust and automation by making the scenario as simple as possible (e.g., avoiding dense traffic or overtaking situations), thus supporting internal validity. An experiment has external validity (generalizability) if the results are not unique to a particular set of circumstances. We are confident that the large number of participants in this study, as well as the selection criteria applied, make the results presented generalizable.
7. CONCLUSIONS
This paper has reported on an empirical study performed with 59 drivers in a simulator experiment, where the effects of displaying continuous support system uncertainty during an automated driving scenario were evaluated. The results indicate
that drivers who were informed of the car uncertainty were better
prepared to switch to manual control when required than the
control group. Further, the control group showed tendencies of
automation bias, resulting in inappropriate calibrations of trust,
which is also in line with research presented by [13], where it was
concluded that people generally have positive expectations of
unfamiliar automated decision aids.
Future work will include additional data collection regarding the
participants in the study so as to associate the results with
information regarding the drivers’ driving style and their
perceived subjective locus of control. Future work will also
explore other forms of driver-automation collaboration. As
discussed by Inagaki [22], for the human-automation
collaboration to progress, the automation might need to
implement some control actions when it determines that the
human is in a condition where he/she is unable to give directives
to the automation, resulting in automated technologies that are
able to understand the human’s psychological and physiological
conditions, intentions and actions in relation to the situation. In a
driving scenario, such cooperation could be based on the automated car understanding the current status of the driver (alert, sleeping, texting, etc.) and adapting the level of automation and the frequency of warnings accordingly.
The transition of control is also a topic which needs further
structuring and investigation [23]. According to Flemisch et al.
[11], there are many questions that need to be investigated
regarding the proper balance between the driver and the
automated systems, especially about the authority of the assistance
and automated systems in emergency situations. How to design
such driver-automation handovers must also be further explored.
APPENDIX
A1: Listed below are the questions for measuring trust answered
after the driving exercise:
Q1: I understand how the system works: its goals, actions and output
Q2: I would like to use the system if it was available in
my own car
Q3: I think that the actions of the system will have a
positive effect on my own driving
Q4: I put my faith in the system
Q5: I think that the system provides safety during
driving
Q6: I think that the system is reliable
Q7: I can trust the system
A2: Below follows the instructions given to the drivers provided
with the uncertainty information (the same instructions were given
to the control group, with the exception of the information
regarding the uncertainty representation):
You will first practice driving the car manually in the
simulator for about 5 minutes. Thereafter, the test
session will begin, which runs for about 10 minutes and
is performed fully autonomously, that is, the automation
maneuvers the car as well as it can, based on, amongst
other variables, the prevailing sight conditions.
In the instrument cluster for autonomous driving the
following is displayed: the speed, the engine speed, the
current gear used, the outside temperature, the current
time, the fuel level, the position of the steering wheel
during the autonomous drive, and how confident the car
is about its ability to drive autonomously. The red arrow
indicates the limit for when the automation can no longer maneuver the car.
To start the test session, put the gear in driving mode.
The car starts to drive autonomously when you press the
gas pedal, thereafter you can let go of the pedal. You
can take control over the car again if you so please at
any time by steering/braking/giving gas at any time.
ACKNOWLEDGMENTS
This research has been supported by the Swedish Knowledge
Foundation under grant 2010/0320 (UMIF), Vinnova through the
National Aviation Engineering Research Program (NFFP5-2009-
01315) and the University of Skövde. We would like to thank
Reetta Hallila, Emil Kullander and Sicheng Chen (Volvo Car
Corporation) for making the study possible and enjoyable! We
also direct our thanks to the participants in the study.
REFERENCES
[1] R. Parasuraman and V. Riley, "Humans and automation: Use,
misuse, disuse, abuse," Human Factors, vol. 39, pp. 230–253,
1997.
[2] J. Lee and K. See, "Trust in automation: designing for
appropriate reliance," Human Factors: The Journal of the Human
Factors and Ergonomics Society, vol. 46, pp. 50–80, 2004.
[3] N. B. Sarter, D. D. Woods, and C. E. Billings, "Automation
surprises," Handbook of human factors and ergonomics, vol. 2,
pp. 1926–1943, 1997.
[4] C. Billings, Aviation Automation: The Search for a Human-
Centered Approach. Mahwah, New Jersey: Lawrence Erlbaum
Associates, 1997.
[5] M. R. Endsley, "Automation and situation awareness,"
Automation and Human Performance: Theory and Applications,
Parasuraman, R. and Mouloua, M., Eds. Mahwah, NJ: Erlbaum. ,
pp. 163–181, 1996.
[6] R. Parasuraman and D. H. Manzey, "Complacency and Bias in
Human Use of Automation: An Attentional Integration," Human
Factors: The Journal of the Human Factors and Ergonomics
Society, vol. 52, pp. 381–410, 2010.
[7] J. McGuirl and N. Sarter, "Supporting Trust Calibration and
the Effective Use of Decision Aids by Presenting Dynamic
System Confidence Information," The Journal of the Human
Factors and Ergonomics Society, vol. 48, pp. 656–665, 2006.
[8] B. D. Seppelt and J. D. Lee, "Making adaptive cruise control
(ACC) limits visible," International Journal of Human-Computer Studies, vol. 65, pp. 192–205, 2007.
[9] B. Rajaonah, F. Anceaux, and F. Vienne, "Trust and the use of
adaptive cruise control: a study of a cut-in situation," Cognition,
Technology & Work, vol. 8, pp. 146–155, 2006.
[10] M. Beggiato and J. F. Krems, "The evolution of mental
model, trust and acceptance of adaptive cruise control in relation
to initial information," Transportation Research Part F: Traffic
Psychology and Behaviour, vol. 18, pp. 47–57, 2013.
[11] F. Flemisch, J. Kelsch, C. Löper, A. Schieben, and J.
Schindler, "Automation spectrum, inner/outer compatibility and
other potentially useful human factors concepts for assistance and
automation," in Human Factors for Assistance and Automation,
D. d. Waard, F. O. Flemisch, B. Lorenz, H. Oberheid, and K. A.
Brookhuis, Eds., ed Maastricht, the Netherlands: Shaker
Publishing, pp. 1–16, 2008.
[12] N. A. Stanton, M. Young, and B. McCaulder, "Drive-by-
wire: The case of driver workload and reclaiming control with
adaptive cruise control," Safety Science, vol. 27, pp. 149–159,
1997.
[13] M. Dzindolet, S. Peterson, R. Pomranky, L. Pierce, and H.
Beck, "The role of trust in automation reliance," International
Journal of Human-Computer Studies, vol. 58, pp. 697–718, 2003.
[14] J. Beller, M. Heesen, and M. Vollrath, "Improving the
Driver–Automation Interaction: An Approach Using Automation
Uncertainty," Human Factors: The Journal of the Human Factors
and Ergonomics Society, 2013.
[15] F. M. F. Verberne, J. Ham, and C. J. H. Midden, "Trust in
smart systems: Sharing driving goals and giving information to
increase trustworthiness and acceptability of smart systems in
cars," Human Factors: The Journal of the Human Factors and
Ergonomics Society, vol. 54, pp. 799–810, 2012.
[16] R. Finger and A. M. Bisantz, "Utilizing graphical formats to
convey uncertainty in a decision making task," Theoretical Issues
in Ergonomics Science, vol. 3, pp. 1–25, 1997.
[17] L. Wang, G. A. Jamieson, and J. G. Hollands, "Trust and
Reliance on an Automated Combat Identification System,"
Human Factors: The Journal of the Human Factors and
Ergonomics Society, vol. 51, pp. 281–291, 2009.
[18] J. Y. Jian, A. M. Bisantz, and C. G. Drury, "Foundations for
an empirically determined scale of trust in automated systems,"
International Journal of Cognitive Ergonomics, vol. 4, pp. 53–71,
2000.
[19] S. G. Klauer, T. A. Dingus, V. L. Neale, J. Sudweeks, and D.
Ramsey, "The impact of driver inattention on near-crash/crash
risk: An analysis using the 100-car naturalistic driving study
data," U.S. Department of Transportation Technical Report, 2006.
[20] J. Lee and N. Moray, "Trust, control strategies and allocation
of function in human-machine systems," Ergonomics, vol. 35, pp. 1243–1270, 1992.
[21] J. Lee and N. Moray, "Trust, self-confidence, and operators'
adaptation to automation," International Journal of Human-
Computer Studies, vol. 40, pp. 153–184, 1994.
[22] T. Inagaki, "Smart collaboration between humans and
machines based on mutual understanding," Annual Reviews in
Control, vol. 32, pp. 253–261, 2008.
[23] A. Schieben, T. Gerald, F. Köster, and F. Flemisch, "How to interact with a highly automated vehicle. Generic interaction design schemes and test results of a usability assessment," in Human Centred Automation, D. de Waard, N. Gérard, L. Omnasch, R. Wiczoreck, and D. H. Manzey, Eds., Shaker Publishing, pp. 251–267, 2011.