The Impact of Autonomous Vehicles' Active Feedback on Trust

Ana Mackay¹, Inês Fortes¹, Catarina Santos¹, Dário Machado², Patrícia Barbosa¹, Vera Vilas Boas³, João Pedro Ferreira³, Nélson Costa¹,⁴, Carlos Silva², and Emanuel Sousa¹,²

¹ ALGORITMI Research Centre, School of Engineering, University of Minho, Guimarães, Portugal
anamackay@gmail.com, ines.fortes@gmail.com, pmscnb@gmail.com, cfssantos_13@hotmail.com
² Center for Computer Graphics, Guimarães, Portugal
drmachado@gmail.com, {carlos.silva,Emanuel.sousa}@ccg.pt
³ Bosch Car Multimedia, Braga, Portugal
{vera.vilasboas,joao.ferreira5}@pt.bosch.com
⁴ Department of Production and Systems, School of Engineering, University of Minho, Guimarães, Portugal
ncosta@dps.uminho.pt
Abstract. The successful introduction of self-driving technology may depend on the ability of the vehicles' human-machine interface to convey trust to the vehicle occupants. Using a driving simulator, in this experiment we aimed to evaluate drivers' trust in an autonomous system as a function of the feedback the vehicle provided through an assistive cluster's interface. Forty participants were divided into three groups according to the level of feedback: (a) cluster without feedback (N = 13); (b) cluster with feedback regarding the surrounding vehicles (N = 14); (c) cluster with feedback regarding the surrounding vehicles and the vehicle's own decisions (N = 13). For all groups, a visual search task was introduced as an indirect indicator of trust in the autonomous system. Results showed an inverse relation between available feedback and correct answers. The system was evaluated as trustable and safe by all groups. Overall, the results may contribute to design requirements for future vehicle HMIs, as they indicate that more information does not necessarily convey more trust.

Keywords: Autonomous driving · Trust · Cluster's feedback · Visual search task
1 Introduction
The introduction of autonomous driving technology may fundamentally change the role of the driver. Freed from driving responsibilities, drivers may engage in leisure or work-related tasks previously deemed incompatible with driving. However, for this to happen, vehicle occupants must trust that the autonomous system is aware of surrounding events and is deciding its course of action accordingly.
© Springer Nature Switzerland AG 2020
P. M. Arezes (Ed.): AHFE 2019, AISC 969, pp. 342–352, 2020.
https://doi.org/10.1007/978-3-030-20497-6_32
A seemingly obvious way of conveying such information is through Human-Machine Interfaces (HMIs), but deciding on exactly what type and amount of information and vehicle performance/decision feedback should be presented is a leading question in autonomous driving (AD) transportation research. More specifically, which information concerning the vehicle's current state should be presented to facilitate trust in the system?
Several studies have shown that informing users about the capabilities and limitations of an autonomous system, as well as continuously communicating its current state, promotes a safer and more appropriate use of the system (Hoffman et al. 2013). One of the first experiments that aimed to study whether communicating automation uncertainty improves driver–automation interaction was developed by Beller et al. (2013). They showed a face-like icon expressing uncertainty whenever the autonomous system had ambiguous or incomplete sensory information. They concluded that, compared with a group that had no uncertainty information, (a) the time to collision increased in case of a system failure, (b) situation awareness increased, and (c) trust in and acceptance of the system increased.
The results from Beller et al. (2013) suggest that presenting information regarding the uncertainty of the system may be useful. However, in their study, only two levels of uncertainty were tested: uncertainty and no uncertainty. Helldin et al. (2013) also addressed the display of uncertainty by the vehicle, but investigated the effect of communicating it as a continuous variable. The experimental group saw a figure in the cluster indicating, on a scale from 1 to 7, how certain the car was that it could handle the situation autonomously. When the confidence level was 2 or less, automation could not be guaranteed. The control group had no information regarding the current status of the car. The results showed that the drivers in the experimental group were quicker to take over, looked away from the road more often, and, even though the difference was small, had less trust in the system. In this case, the behavior of the experimental group (trusting less) proved more appropriate, as the vehicle sometimes asked for a manual take-over. Moreover, participants in this group were more efficient in responding to take-over requests.
Even if the results from the previous studies indicate that presenting status information could be useful, it is not clear whether it is enough to inform about the status of the vehicle, or whether it is better to also inform why the vehicle is doing what it is doing. Regarding this question, Koo et al. (2015) investigated whether the content of verbal messages stating the vehicle's autonomous actions affected the driver's trust in the system. They conducted a driving simulator experiment in which a semi-autonomous system prevented frontal collisions by activating an automated brake function. Participants were asked to drive, and whenever an unexpected risk situation appeared on the course, a voice warning and/or auto braking was activated. There were four conditions concerning the message content: (1) "how" the car is acting (e.g., "Car is braking"); (2) "why" it is acting the way it is (e.g., "Obstacle ahead"); (3) a "how + why" message (e.g., "Car is braking due to obstacle ahead"); and (4) no message. The "how" condition yielded the poorest driving performance, and the "why" condition was the most preferred by the drivers. Combining the "how" and "why" messages resulted in the safest driving performance, measured by the number of road edge excursions, but also led to more negative emotional states (these were inferred by asking the participants
how much they felt "anxious", "annoyed", and "frustrated" while driving). Overall, drivers who received some type of information about the driving environment expressed higher system trust and acceptance than those who received no message.
In sum, informing users about the system's status seems to be beneficial, both in terms of trust and of performance. One way to convey that information is by means of displays. However, it is not yet clear which type of information users need and find useful. The present study aimed to explore the role of visual feedback cues that may affect trust in the interaction between the user and the autonomous vehicle's interface. Trust was measured by objective and subjective metrics. Several physiological measures are already used as indirect indicators of a driver's trust in the vehicle; in this study, heart rate was measured. In addition, a study by Llaneras and Green (2013) found that increased trust could lead drivers to allocate less visual attention to the road ahead. Therefore, a visual search task was introduced without prior notice, prompted on a display inside the vehicle away from the typical central road-gaze area. Task performance was analyzed as an indicator of trust in the autonomous system. Finally, a trust questionnaire was used.
2 Method
2.1 Participants
Forty participants with a driver's license were recruited to take part in this experiment, ten of whom were female. The mean age was 29 years (SD = 9.11) and the mean driving experience was 10 years (SD = 8.41).
2.2 Apparatus, Materials and Setup
The experiment was conducted in a fixed-base Driving Simulator Mockup (DSM), composed of two seats, a steering wheel, two pedals, and three monitors for rear-view projection. The DSM is connected to the simulation software (SILAB v.5.0, WIVW 2018), which controls the simulation environment. The frontal visualization was displayed on a curved screen, 5 m wide by 2 m high, mounted on a metal structure; along this curved surface, three projectors (1920 × 1080 pixels each) displayed the simulation environment. A Head-Up Display (HUD) was installed, and the Assistive Cluster (AC) was mounted on the right side of the steering wheel, at approximately 75 cm from the driver's head. A touchscreen display on the lower dashboard was placed within reach to prompt the visual search task (Fig. 1).
Heart Rate (HR) was measured using BIOPAC's MP160 data acquisition system, which records data at a frequency of 2000 Hz and is coupled with the AcqKnowledge 5.0 data analysis software. Three pre-gelled electrodes were applied to the skin, on the participant's right clavicle (negative electrode), left clavicle (positive electrode), and lowest left rib (ground electrode).
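As an illustration of this processing step, the sketch below derives a mean heart rate from a raw ECG segment sampled at 2000 Hz. The naive threshold-crossing R-peak detector is an assumption made only for illustration; in the study this step was handled by the AcqKnowledge software.

```python
# Illustrative sketch only: mean heart rate from a raw ECG segment
# sampled at 2000 Hz, as recorded by the MP160. The threshold-crossing
# R-peak detector below is a simplification assumed for this example.
import numpy as np

FS = 2000  # sampling frequency in Hz

def mean_heart_rate(ecg: np.ndarray, threshold: float) -> float:
    """Mean heart rate (beats/min) of one ECG segment."""
    above = ecg > threshold
    # R-peak onsets: first samples where the signal crosses the threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    rr = np.diff(onsets) / FS        # inter-beat intervals in seconds
    return 60.0 / rr.mean()          # beats per minute
```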
3 Procedure
The participants were invited to an autonomous driving experience and, upon arrival, they read and signed an informed consent form. Then, they were randomly assigned to one of three groups: the group without feedback (No Feedback group, N = 13), the group with feedback regarding the surrounding vehicles (Sensors group, N = 14), and the group with feedback regarding the surrounding vehicles and the vehicle's own decisions (Decision group, N = 13). The No Feedback group had a cluster with no information (Fig. 2a); the Sensors group had information regarding the proximity of surrounding vehicles: lateral and longitudinal control lines turned yellow as other objects got closer, e.g., while in a queue or during an overtake (Fig. 2b); and the Decision group had full feedback: the same sensor information as the Sensors group, plus arrows that informed the driver of the vehicle's immediate future behaviour regarding lane changes (Fig. 2c).
Fig. 1. DSM: (a) Head-up display (orange dashed outline); (b) Assistive cluster interface (blue solid outline); (c) Touchscreen display (green dotted outline).
Fig. 2. Cluster's level of feedback: (a) No feedback; (b) Sensors feedback; (c) Sensors and Decision feedback.
All clusters had an illustration of the vehicle at the centre and, below it, an automation bar that represented the driving mode within four possible autonomy levels. For the purposes of this study, the AD mode (level 4), represented by a fully green automation bar, was presented to all participants during the experiment. The HUD (see Fig. 1a) had a speedometer at the centre and an AD icon, as shown in Fig. 3.
3.1 Route Description
For an overview of the experiment, see Fig. 4. The scenario consisted of a 12-min drive on a highway, during which several events occurred, such as a free-traffic segment, an overtaking manoeuvre, and a traffic queue that required the system to brake. While in AD mode, a visual search task was prompted twice, 3 and 6 min after the beginning of the experiment (t1 and t2), with no previous notice. Finally, the car stopped at a gas station for recharging, and the experimenter asked the participant to fill in a trust questionnaire.
4 Measures
4.1 Visual Search Task
In the search-arrows task (e.g., Engström et al. 2005), the participant had to search for an upward-facing arrow in a grid of identical but differently oriented arrows, pressing a "Yes" button if the target was present or "No" if it was absent. Figure 5 shows an example of an arrow grid, for which "Yes" is the correct response.
Fig. 3. Example of the centre of the Head-Up Display (HUD) with AD mode engaged.
Fig. 4. Scheme of the experiment.
While the car was in AD mode, this task was presented twice (visual search task t1 and t2). Each time, the task consisted of three trials/images, forcing the participants to divert their attention from the road or the environment. Each trial started with a fixation cross displayed for 500 ms, followed by the arrows grid, and ended either with the driver's response or after a 6-s presentation without response. After the third trial, the touchscreen was turned off until the second evaluation. The participants' performance was measured as the percentage of correct, incorrect, and/or missing answers. As higher levels of trust are associated with lower monitoring frequencies (Hergeth et al. 2016), the best performance was expected from the Decision group: since this group was given the most information regarding the vehicle's behaviour, a higher level of trust in the system and a better performance on the visual search task were hypothesized.
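For concreteness, the following sketch mirrors the trial sequence just described (500 ms fixation cross, arrow grid, response or 6-s timeout, three trials per presentation). The `show` and `wait_for_press` callbacks are hypothetical stand-ins for the touchscreen I/O, not part of the original setup.

```python
# Minimal sketch of the trial sequence described above. `show` and
# `wait_for_press` are hypothetical stubs for the touchscreen display
# and touch input; outcomes map onto the correct/incorrect/missing measure.
import random
import time

FIXATION_S = 0.5   # fixation cross duration
TIMEOUT_S = 6.0    # maximum grid presentation time
N_TRIALS = 3       # trials per task presentation

def run_task(show, wait_for_press):
    outcomes = []
    for _ in range(N_TRIALS):
        show("fixation")
        time.sleep(FIXATION_S)
        target_present = random.random() < 0.5
        show("arrow_grid", target_present=target_present)
        answer = wait_for_press(timeout=TIMEOUT_S)  # "yes", "no", or None
        if answer is None:
            outcomes.append("missing")
        elif (answer == "yes") == target_present:
            outcomes.append("correct")
        else:
            outcomes.append("incorrect")
    show("blank")  # touchscreen off until the next evaluation
    return outcomes
```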
4.2 Heart Rate
It was expected that the more feedback the cluster provided, the higher the trust and, consequently, the lower the heart rate. It has been shown that the use of a simulated autonomous vehicle can increase stress (Morris et al. 2017), and heart rate is a frequently used physiological variable that reflects subjects' cognitive stress (e.g., Reimer and Mehler 2011). Therefore, it was hypothesized that the No Feedback group would show the highest heart rate, since lack of information is a factor that triggers an anxiety state (Lee et al. 2016).
4.3 Trust Questionnaire
To evaluate trust, a custom-made questionnaire was presented at the end of the experiment. The participant was asked to rate each sentence on a scale from 0 (totally disagree) to 5 (completely agree):
– The autonomous driving system is trustable;
– The autonomous driving system is safe;
– I understood the intentions of the autonomous driving system;
– I understood the actions of the autonomous driving system.
It was expected that, as the cluster's feedback increased, the reported trust in the system would also increase.
Fig. 5. Example of an image for the visual search task.
5 Results and Discussion
5.1 Heart Rate
For each participant, the mean heart rate was calculated in the 5 s immediately before the presentation of the visual search task and in the 5 s immediately after the visual search task had ended. To test whether heart rate increased when the visual task was introduced, the heart-rate values before the first and second visual task presentations were averaged (M = 78.48 beats/min), and the same was done for the values after the tasks (M = 85.96 beats/min). Reimer and Mehler (2011) found that heart rate and skin conductance levels were lower in a driving simulator than during actual on-road driving, but that the relative increases in these measures across cognitive tasks of increasing difficulty were equivalent.
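The windowing just described amounts to the following computation. The sketch assumes an instantaneous heart-rate series sampled at 2000 Hz and a list of task (onset, offset) sample indices; this data layout is ours, not the paper's.

```python
# Sketch of the 5-s windowing: mean heart rate in the 5 s before task
# onset and the 5 s after task offset, averaged over both presentations
# (t1, t2). `hr` is an instantaneous heart-rate series at FS Hz and
# `events` holds (onset, offset) sample indices - assumed layouts.
import numpy as np

FS = 2000
WIN = 5 * FS  # 5-s window in samples

def before_after_means(hr, events):
    before = np.mean([hr[on - WIN:on].mean() for on, _ in events])
    after = np.mean([hr[off:off + WIN].mean() for _, off in events])
    return before, after
```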
A mixed Analysis of Variance (ANOVA) was conducted with the amount of information (3 levels: no feedback, sensors feedback, and decision feedback) as the between-subject factor and the moment relative to the visual search task (2 levels: before and after) as the within-subject factor. The increase in heart rate with the occurrence of a visual search task was statistically significant, F(1, 36) = 19.09, p = .0001, ηp² = 0.35. However, no significant differences were found between the three levels of information, F(2, 36) = 0.06, p = .94, ηp² = 0.004, meaning that heart rate was similar across the different levels of information.
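A minimal sketch of such a mixed ANOVA is shown below using the pingouin package; the paper does not name its statistics software, and the file and column names are assumptions.

```python
# Sketch of the reported mixed ANOVA (software, file, and column names
# are assumptions). One row per participant x moment, with the feedback
# group as the between-subject factor.
import pandas as pd
import pingouin as pg

df = pd.read_csv("heart_rate.csv")  # columns: pid, group, moment, hr
aov = pg.mixed_anova(data=df, dv="hr", within="moment",
                     subject="pid", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2 = partial eta squared
```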
5.2 Visual Search Task
Figure 6 shows the percentage of missing answers to the visual search tasks according to the amount of available information in the assistive cluster, in the first moment (t1, left panel) and in the second moment (t2, right panel). In both t1 and t2, the percentage of missing answers was lowest for the No Feedback group, intermediate for the Sensors group, and highest for the Decision group.
Fig. 6. Average percentage of missing answers to the visual search tasks according to the
amount of available information in the assistive cluster in each moment of presentation (t1, t2).
A mixed ANOVA on the number of missing answers to the visual search task was conducted, with amount of information (3 levels) and instance of the visual search task (2 levels: t1 and t2) as factors. Significant differences were found for the information level, F(2, 37) = 8.18, p = .001, ηp² = 0.31, and for the instance of the visual search task, F(1, 37) = 9.70, p < .01, ηp² = 0.21. A post-hoc analysis of the information level using least-squares means with Bonferroni corrections revealed that the Decision group had significantly more missing answers than the No Feedback group, t = 4.01, p < .01, and than the Sensors group, t = 2.47, p = .05. These results could indicate that a more complex cluster may be more distracting. The cause of that distraction is not clear: participants may have been looking at the information cluster (because it was showing almost continuous information) or at the environment (e.g., to compare their knowledge of the environment with the cluster's information) instead of looking at the visual search task. Concerning the instance of the visual search task, there were fewer missing answers in the second presentation of the task, which was expected: the element of surprise was removed with the first presentation, so participants were more aware that the visual search task could be shown at any time.
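The post-hoc comparisons can be sketched as follows. The paper used least-squares means; the plain Welch t-tests with a Bonferroni multiplier below are a simplified stand-in for that procedure, applied to per-participant missing-answer counts.

```python
# Simplified stand-in for the reported post-hoc comparisons: Welch
# t-tests over all group pairs with a Bonferroni correction (the paper's
# least-squares-means procedure is not reproduced here).
from itertools import combinations
from scipy import stats

def bonferroni_pairwise(groups):
    """groups: dict mapping group name -> list of missing-answer counts."""
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)
        p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
        print(f"{a} vs. {b}: t = {t:.2f}, adjusted p = {p_adj:.3f}")
```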
It seems that having more information on the cluster has an influence on the missing answers to the visual task. However, when participants do answer the task, does the percentage of correct answers differ across groups? Figure 7 shows the percentage of correct/incorrect answers to the visual search task, calculated after excluding the missing answers. In this analysis, 6 participants were eliminated because they failed to respond either to all three trials in the first task (N = 2, both from the Sensors group), to all three trials in the second task (N = 1, from the Decision group), or to all trials in both tasks (N = 3, all from the Decision group).
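The exclusion rule and accuracy computation described above can be expressed as follows; the long-format table layout (one row per trial) is an assumption.

```python
# Sketch of the exclusion rule and accuracy computation: drop participants
# who missed every trial of a task presentation, then score accuracy over
# answered trials only. The data layout is an assumption.
import pandas as pd

df = pd.read_csv("search_task.csv")  # columns: pid, group, instance, outcome
# outcome is one of "correct", "incorrect", "missing"

all_missing = (df.groupby(["pid", "instance"])["outcome"]
                 .apply(lambda s: s.eq("missing").all()))
excluded = all_missing[all_missing].index.get_level_values("pid").unique()

answered = df[~df["pid"].isin(excluded) & df["outcome"].ne("missing")]
accuracy = (answered.groupby(["group", "instance"])["outcome"]
                    .apply(lambda s: s.eq("correct").mean() * 100))
print(accuracy)  # percentage of correct answers per group and instance
```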
From the results, it seems the percentage of correct and incorrect answers was very similar across groups. A mixed ANOVA with amount of information (3 levels) and instance of the visual search task (2 levels: t1 and t2) was conducted on the percentage of correct answers (excluding missing answers). No significant differences were found for the information level, F(2, 31) = 0.14, p = .87, ηp² = 0.009. Conversely, a significant effect was found for the instance of the visual search task, F(1, 31) = 6.10, p = .02, ηp² = 0.16, with the percentage of correct answers increasing considerably in the second appearance of the task (Mt1 = 82.4% vs. Mt2 = 93.1%). Although the number of trials was small, this increase in performance may reflect a training effect.
Fig. 7. Percentage of correct and incorrect answers to the visual search tasks according to the amount of available information in the assistive cluster in each moment of presentation (t1, t2).
5.3 Trust Questionnaire
Figure 8 shows the answers of the three groups to the four questions concerning trust in the system. As depicted in Fig. 8, the overall system was perceived by all groups as trustable and safe.
Generally, participants agreed with all the statements. The lowest mean score was 3.8, obtained by the Decision group on the questions regarding the autonomous system being trustable and safe (first two questions in Fig. 8). The intentions and actions of the autonomous driving system (last two questions in Fig. 8) were well understood, as the mean score for both items was higher than 4 for all groups. Although Young et al. (2015) argue that subjective measures, like self-reports, are rather complicated, Schmidt et al. (2017) report having successfully used verbal assessment of drivers' condition regarding perceived sleepiness and cooling sensations.
Looking at Fig. 8, there seems to be a tendency to perceive the system as more trustable and safe (first and second questions in Fig. 8) the less information it provides. Also, the comprehension of the system's intentions and actions (third and fourth questions in Fig. 8) seems to be rated lower as the cluster's feedback increases in complexity, with the worst scores in the Decision group.
One-way ANOVAs were conducted to analyse differences between the three information levels for each of the questionnaire items. The differences were non-significant for all ANOVAs, except for the item "I understood the actions of the autonomous driving system", F(2, 37) = 3.36, p = 0.05, ηp² = 0.15, where the result was marginally significant. Post-hoc analysis indicated that the participants from the Decision group scored lower than the No Feedback group, t = 2.321, p = 0.08.
Fig. 8. Mean scores of the Trust questionnaire. The score 0 means "totally disagree" and the score 5 means "completely agree" with the sentence.
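These per-item one-way ANOVAs can be sketched as below; the long-format questionnaire layout and labels are assumptions.

```python
# Sketch of the per-item one-way ANOVAs across the three feedback groups,
# here with scipy; file and column names are assumptions.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("questionnaire.csv")  # columns: pid, group, item, score
for item, d in df.groupby("item"):
    samples = [g["score"].to_numpy() for _, g in d.groupby("group")]
    f, p = f_oneway(*samples)  # one array of scores per group
    print(f"{item}: F = {f:.2f}, p = {p:.3f}")
```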
6 Conclusion
Self-driving technology has the potential to profoundly change how we use automobiles. However, generalized adoption will strongly depend on the trust experienced by users (Choi and Ji 2015). Given that the HMI is one of the main sources of information for the vehicle occupants, it may play a key role in increasing system transparency, a factor known to affect trust (Choi and Ji 2015), thus influencing acceptance.
In this study, we investigated how conveying feedback about the perception of the surrounding driving environment and about the autonomous system's decisions may affect trust. The results of our study do not point clearly to a direct relation between the available feedback information and the amount of trust assessed by the questionnaire. This may be due to the particular design of the interface, which provided only limited information regarding the location of surrounding vehicles and the vehicle's own course of action. It may be that more explicit and complete information is required to affect trust. For instance, in a recent work by Haeuslschmid et al. (2017), which also compared different feedback visualizations, a "world in miniature" concept (inspired by the one used in Tesla vehicles) was the most effective in conveying trust and a sense of safety. Our results also showed an inverse relation between available feedback and performance on a visual search task. This may imply that the more complex assistive cluster created the greatest cognitive load, leading to the worst task performance.
In conclusion, more information does not necessarily lead to more trust and may in fact negatively affect cognitive load. These results point to the need to investigate which types of feedback are more appropriate and how particular design choices for the visual HMI may affect trust and influence cognitive load. Other ways of conveying information to the user should also be studied. For instance, this experiment focused on visual feedback, but the use of other sensory modalities, either in combination with the visual cues or by themselves, should also be explored. Another approach to conveying trust in autonomous vehicles may be to anthropomorphize them, for example by providing them with a voice and simulating intelligent conversation (Ruijten et al. 2018).
Whichever cluster feedback design or modalities are implemented to transmit a feeling of safety and trust to the user, this study highlights the need to test different approaches to the active feedback of autonomous driving systems.
Acknowledgments. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the scope of the project UID/CEC/00319/2019, and by European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project no. 039334; Funding Reference: POCI-01-0247-FEDER-039334].
References
Beller, J., Heesen, M., Vollrath, M.: Improving the driver–automation interaction: an approach using automation uncertainty. Hum. Factors 55(6), 1130–1141 (2013). https://doi.org/10.1177/0018720813482327
Choi, J.K., Ji, Y.G.: Investigating the importance of trust on adopting an autonomous vehicle. Int. J. Hum.-Comput. Interact. 31(10), 692–702 (2015). https://doi.org/10.1080/10447318.2015.1070549
Engström, J., Johansson, E., Östlund, J.: Effects of visual and cognitive load in real and simulated motorway driving. Transp. Res. Part F Traffic Psychol. Behav. 8(2), 97–120 (2005). https://doi.org/10.1016/j.trf.2005.04.012
Haeuslschmid, R., von Buelow, M., Pfleging, B., Butz, A.: Supporting trust in autonomous driving. In: Proceedings of the 22nd International Conference on Intelligent User Interfaces – IUI 2017, pp. 319–329 (2017). https://doi.org/10.1145/3025171.3025198
Helldin, T., Falkman, G., Riveiro, M., Davidsson, S.: Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving. In: Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 210–217 (2013)
Hergeth, S., Lorenz, L., Vilimek, R., Krems, J.F.: Keep your scanners peeled: gaze behavior as a measure of automation trust during highly automated driving. Hum. Factors 58(3), 509–519 (2016). https://doi.org/10.1177/0018720815625744
Hoffman, R.R., Johnson, M., Bradshaw, J.M., Underbrink, A.: Trust in automation. IEEE Intell. Syst. 28(1), 84–88 (2013). https://doi.org/10.1109/MIS.2013.24
Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., Nass, C.: Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. (IJIDeM) 9(4), 269–275 (2015). https://doi.org/10.1007/s12008-014-0227-2
Lee, J., Kim, N., Imm, C., Kim, B., Yi, K., Kim, J.: A question of trust: an ethnographic study of automated cars on real roads. In: Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications – Automotive UI 2016, pp. 201–208 (2016). https://doi.org/10.1145/3003715.3005405
Llaneras, R.E., Green, C.A.: Human factors issues associated with limited ability autonomous driving systems: drivers' allocation of visual attention to the forward roadway, pp. 92–98 (2013)
Morris, D.M., Erno, J.M., Pilcher, J.J.: Electrodermal response and automation trust during simulated self-driving car use. In: Proceedings of the Human Factors and Ergonomics Society, pp. 1759–1762 (2017). https://doi.org/10.1177/1541931213601921
Reimer, B., Mehler, B.: The impact of cognitive workload on physiological arousal in young adult drivers: a field study and simulation validation. Ergonomics 54(10), 932–942 (2011). https://doi.org/10.1080/00140139.2011.604431
Ruijten, P.A.M., Terken, J.M.B., Chandramouli, S.N.: Enhancing trust in autonomous vehicles through intelligent user interfaces that mimic human behavior (2018). https://doi.org/10.3390/mti2040062
Schmidt, E., Decke, R., Rasshofer, R., Bullinger, A.C.: Psychophysiological responses to short-term cooling during a simulated monotonous driving task. Appl. Ergon. 62, 9–18 (2017). https://doi.org/10.1016/j.apergo.2017.01.017
Young, M.S., Brookhuis, K.A., Wickens, C.D., Hancock, P.A.: State of science: mental workload in ergonomics. Ergonomics (2015). https://doi.org/10.1080/00140139.2014.956151