Autonomous Driving with an Agent:
Speech Style and Embodiment
Abstract
A driving agent can be an effective interface for interacting with drivers to increase trust towards an autonomous driving vehicle. While driving research on agents has mostly focused on voice-agents, few empirical findings on robot-agents have been reported. In the present study, we compared three different agents (informative voice-agent, informative robot-agent, and conversational robot-agent) to investigate their effects on driver perception in Level 5 autonomous driving. A driving simulator experiment with an agent was conducted. Twelve drivers experienced simulated autonomous driving and responded to the Godspeed questionnaire, the RoSAS questionnaire, and a social presence scale. Drivers rated the conversational robot-agent as significantly more competent, warmer, and providing higher social presence than the other two agents. Interestingly, despite this emotional closeness, drivers' attitudes toward the conversational robot-agent were contradictory: they mostly chose it as either the best or the worst option. The findings of the present study are meaningful as a first step in exploring the potential of various types of in-vehicle agents in the context of autonomous driving.
Author Keywords
Autonomous driving; voice agent; robot agent; speech style;
embodiment
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or
distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for third-party components of this work
must be honored. For all other uses, contact the Owner/Author.
AutomotiveUI '19 Adjunct, September 21-25, 2019, Utrecht, Netherlands
© 2019 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-6920-6/19/09.
https://doi.org/10.1145/3349263.3351515
Seul Chan Lee
Virginia Tech
Blacksburg, VA 24061, USA
seulchan0926@gmail.com
Harsh Sanghavi
Virginia Tech
Blacksburg, VA 24061, USA
sangjinko@vt.edu
Sangjin Ko
Virginia Tech
Blacksburg, VA 24061, USA
sangjinko@vt.edu
Myounghoon Jeon
Virginia Tech
Blacksburg, VA 24061, USA
myounghoonjeon@vt.edu
CCS Concepts
• Hardware → Emerging technologies → Analysis and design of emerging devices and systems; Emerging interfaces
Introduction
As automated systems become more common, the concept
of human-computer interaction has evolved in a cooperative
way to increase convenience in people’s lives. This trend
can also be found in the context of driving. In the past,
drivers were required to perceive information, make
decisions, stay in their lane, and maintain the speed all by
themselves. These days, however, advanced driving
assistant systems enable vehicles to assist drivers in many
sub-tasks required for driving, such as lane keeping, speed
control, and route planning. The development of
autonomous vehicles is further accelerating these changes.
In other words, human-vehicle cooperation is an essential
element of future driving [3].
Nevertheless, many drivers are still reluctant to hand over
control to the vehicle because of a lack of trust [5]. They
have doubts about a novel system that they have never
experienced before. Therefore, for the adoption of the
automated driving system, it is important to increase drivers’
trust in the automated system using the techniques of
human-vehicle interaction. One of the effective ways to
increase acceptance is using a tangible interface to show
the intentions of the system and allow users to understand
the system’s behavior. Implementing a physical or virtual
agent to interact with drivers can increase the feeling of co-
presence toward the system and give the drivers confidence
that the system is working properly.
To date, voice-agents have been widely implemented and tested in the context of manual driving for the purposes mentioned above [7,9]. Recently, researchers have tried to explore the use of robot-agents in the autonomous driving context [2,10–12,15,16]. Karatas et al. [10] showed that the
existence of a social robot system, equipped with three
movable heads with one degree of freedom, reduced the
reaction time to accident situations in the context of
autonomous driving. Kraus et al. [11] revealed that drivers' trust towards automated driving systems can be enhanced more by using a robot-agent than by using a voice-only agent.
Zihsler et al. [16] tried to give information about the system
status of an automated driving vehicle by using a virtualized
avatar. This research, although in its nascent phase, has
shown the possible use cases of having a robot-agent
interact with drivers in an automated driving context.
In the present study, as a first step of exploring the potential
of various types of in-vehicle agents in the context of
autonomous driving, we compared the effects of three
different agents on drivers' perception. The role of the agents was to give driving-related information so that the driver could notice the intention and status of the automated
driving system. The informative voice-agent had no physical
entity and conveyed driving information via dry speech. The
informative robot-agent was an embodied robot and
conveyed the same information via dry speech. The
conversational robot-agent provided the same information
with more conversational speech, which is more human-like.
The literature shows that drivers prefer interactions with agents that show more human-like features, which has a positive effect on drivers' trust towards the autonomous driving system [6,11]. Based on these findings,
we hypothesized that drivers would prefer the robot-agent to
the voice-agent and the conversational type to the simple
information type.
To test these hypotheses, a driving simulator experiment
was designed and conducted. We collected and analyzed
the subjective evaluations from participants after they
experienced an automated driving situation with the agents.
Method
Experimental design A within-subject design was implemented with three conditions: informative voice-agent (IVA), informative robot-agent (IRA), and conversational robot-agent (CRA). In each condition, participants experienced an autonomous driving journey in a driving simulator with one of the three agents. Participants encountered eight driving events in each condition, designed to show that the automated driving system could appropriately handle them (Table 1). Participants were given verbal feedback from the agent before or right after each event. The two informative agents gave information to drivers in a simple manner; in contrast, the conversational robot-agent conveyed the same information in a way that replicates a usual conversation with a human collaborator (e.g., "we are entering…", "I am sorry…") (Tables 2 and 3). The driving scenarios were designed around straight and curved roads including some traffic, traffic signals, intersections, and road users (Figure 1). To minimize learning effects, we used three different driving scenarios built on the same city map with the same events; however, the route and the order of the events differed across conditions (each approximately 6.5 minutes). The order of the conditions was also randomized to prevent learning effects.
Dependent measures To collect drivers' perception of the in-vehicle agents, subjective ratings of the three agents were collected using the Godspeed questionnaire (five factors with 24 items, 5-point Likert scale) [1], the RoSAS questionnaire (three factors with 18 items, 7-point Likert scale) [4], and a social presence scale (five items, 10-point Likert scale) [13]. In addition, the preference ranking among the three agents was measured.
Participants Twelve participants (4 female) between 20 and 42 years old (mean = 30.25, SD = 5.80) with a valid driving license participated in this study. Their mean driving experience was 6.92 years, and they drove an average of 5.83 times per week.
Apparatus and stimuli The driving simulator used was a motion-based driving simulator (Nervtech™, Ljubljana, Slovenia), as shown in Figure 2. It was equipped with three 48'' Samsung displays, a steering wheel, an adjustable seat, gas and brake pedals, and a surround sound dome. A humanoid robot, Nao, was used in the two robot-agent conditions (V6 standard edition, height: 22.6 in., width: 10.8 in.). Amazon Polly text-to-speech software was used to generate the agents' voices (https://aws.amazon.com/polly, voice: Salli, gender: female, accent: US English).
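As an illustration only, the following sketch shows how such prompts could be synthesized with Amazon Polly through the boto3 SDK; the AWS region, output file names, and the subset of script lines are assumptions for demonstration, not details reported in the paper.

```python
# A minimal sketch of generating the agents' speech prompts with Amazon Polly
# via the boto3 SDK. The AWS region, output file names, and the subset of
# script lines below are illustrative assumptions, not details from the paper.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

informative_scripts = [
    "Exit ahead",
    "Road construction ahead",
    "A car is swerving",
]

for i, text in enumerate(informative_scripts, start=1):
    response = polly.synthesize_speech(
        Text=text,
        VoiceId="Salli",      # Polly's US English female voice
        OutputFormat="mp3",
    )
    # Save each synthesized prompt so the simulator can play it back
    with open(f"informative_{i}.mp3", "wb") as f:
        f.write(response["AudioStream"].read())
```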
Procedure All the details about the experiment were explained after welcoming the participants. Then, the consent form approved by the university's Institutional Review Board (IRB) was signed and demographic information was collected. Participants were asked to assume that they were in a Level 5 autonomous driving car; they were told that they could perform non-driving-related tasks if they wanted, such as browsing the Internet on their smartphone. After each condition was completed, participants responded to the questionnaires. When they finished all three conditions, their preference ranking and the reasons for it were collected. It took approximately 45 minutes to complete all procedures.
Data analysis We could only include a small sample size because this study is the starting point of the project.
Table 1: Driving events in the scenario
1. Exit from the road and enter a new road
2. Road construction
3. A swerving car
4. Tunnel
5. Jaywalking
6. Waiting for traffic signal
7. Turning left / right
8. All-way stop intersection

Table 2: Informative agent's script
1. Exit ahead
2. Road construction ahead
3. A car is swerving
4. Tunnel ahead
5. Jaywalker ahead
6. Red traffic light
7. Turning left / right ahead
8. This is a four-way stop intersection

Table 3: Conversational agent's script
1. We are entering a new road
2. We are slowing down because of road construction
3. I am sorry for the sudden slowdown. A car swerved into traffic
4. We are entering a tunnel
5. Are you okay? A man suddenly popped out onto the road
6. We are waiting for the signal to turn green
7. We are turning left / right
8. We've reached a four-way stop intersection. We are waiting for other cars to go first
Accordingly, a non-parametric Friedman test was performed to analyze the differences among the conditions for each measure. If a significant difference was found, Wilcoxon signed-rank tests were used for pairwise comparisons. A 0.05 significance level was applied. All statistical analyses were performed with IBM SPSS 25.0.
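For readers without SPSS, a rough sketch of the same procedure in Python with SciPy is shown below; the rating arrays are randomly generated placeholders, not the study's data.

```python
# A rough re-expression of the analysis with SciPy instead of IBM SPSS 25.0.
# The rating arrays are randomly generated placeholders (one value per
# participant and condition), not the study's data.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ratings = {
    "IVA": rng.integers(1, 8, size=12).astype(float),  # placeholder 7-point ratings
    "IRA": rng.integers(1, 8, size=12).astype(float),
    "CRA": rng.integers(1, 8, size=12).astype(float),
}

# Omnibus non-parametric test across the three within-subject conditions
stat, p = stats.friedmanchisquare(*ratings.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# Pairwise Wilcoxon signed-rank tests only if the omnibus test is significant
if p < 0.05:
    for a, b in combinations(ratings, 2):
        w, p_pair = stats.wilcoxon(ratings[a], ratings[b])
        print(f"{a} vs {b}: W = {w:.1f}, p = {p_pair:.3f}")
```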
Results
The results showed no significant differences among the three conditions on the Godspeed measures. However, drivers rated the CRA as significantly more competent and warmer (RoSAS) than the other two agents. In addition, the social presence of the CRA was significantly higher than in the other conditions. Table 4 presents a summary of the statistical analyses. Preference rankings for each condition were not significantly different from each other (Table 5).
Table 4: Subjective evaluation results

Items                   IVA    IRA    CRA    Sig.       Pairwise comparisons
Anthropomorphism        3.03   2.95   3.82   p = .166   -
Animacy                 2.76   2.68   3.53   p = .063   -
Likeability             3.60   3.42   3.97   p = .307   -
Perceived intelligence  3.82   3.87   3.98   p = .614   -
Perceived safety        3.39   3.14   3.61   p = .179   -
Competence              4.61   4.89   5.47   p < .01    A = B < C*
Warmth                  3.08   3.25   4.93   p < .01    A = B < C*
Discomfort              2.54   2.47   2.47   p = .928   -
Social presence         4.97   4.92   6.80   p < .01    A = B < C**
Note. A = IVA, B = IRA, C = CRA; *: p < 0.05, **: p < 0.01
Table 5: Preference rankings for each condition

Preference   IVA   IRA   CRA
1st            4     3     5
2nd            5     6     1
3rd            3     3     6
Note. Unit: number of participants
Discussion and Conclusion
We explored drivers' subjective evaluations of three agents in the context of autonomous driving. We were able to identify some possibilities for the use of the robot-agent in autonomous driving.
First, drivers felt more intimacy with the guidance of the CRA than with the informative agents. This close emotional bond is likely to play a key role in the acceptance of the system.
Second, despite this emotional closeness, drivers' attitudes toward the CRA were contradictory. They mostly chose the CRA as either the best or the worst option (Table 5). Drivers who preferred it cited reasons such as "friendly", "natural", and "emotional touch". However, those who did not pointed out that it "gives too much information" and is "distracting". Therefore, it is necessary to ensure that the agent is familiar to drivers without irritating them.
In future studies, we plan to extend this approach to more accurately understand drivers' interaction with an agent. In the present study, we only included subjective evaluations. Drivers' behaviors, such as glance behavior, can be used to observe their interaction with an agent. In addition, different characteristics of the voice-agent suggested by previous literature [8,14], for example, gender, urgency, voice emotion, and speaking style (command vs. suggestive/notification), should be tested in a robot-agent.
Figure 2: Experimental Settings
Figure 1: Screenshot of driving scenarios
References
1. Christoph Bartneck, Dana Kulić, Elizabeth Croft, and
Susana Zoghbi. 2009. Measurement instruments for the
anthropomorphism, animacy, likeability, perceived
intelligence, and perceived safety of robots. International
Journal of Social Robotics 1, 1: 71–81.
https://doi.org/10.1007/s12369-008-0001-3
2. Xiaojun Bi, Yang Li, and Shumin Zhai. 2013. FFitts law:
Modeling Finger Touch with Fitts’ Law. Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems - CHI ’13: 1363.
https://doi.org/10.1145/2470654.2466180
3. Francesco Biondi, Ignacio Alvarez, and Kyeong Ah Jeong.
2019. Human–Vehicle Cooperation in Automated Driving: A Multidisciplinary Review and Appraisal. International Journal of Human-Computer Interaction 35, 11: 932–946.
https://doi.org/10.1080/10447318.2018.1561792
4. Colleen M Carpinella, Michael A Perez, Alisa B Wyman,
and Steven J Stroessner. 2017. The Robotic Social
Attributes Scale (RoSAS): Development and Validation.
ACM/IEEE International Conference on HRI: 254–262.
https://doi.org/10.1145/2909824.3020208
5. Jong Kyu Choi and Yong Gu Ji. 2015. Investigating the
Importance of Trust on Adopting an Autonomous Vehicle.
International Journal of Human-Computer Interaction 31,
10: 692–702.
https://doi.org/10.1080/10447318.2015.1070549
6. Philipp Hock, Johannes Kraus, Marcel Walch, Nina Lang,
and Martin Baumann. 2016. Elaborating Feedback
Strategies for Maintaining Automation in Highly Automated
Driving. In 8th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications.
(AutomotiveUI ’16), 105–112.
https://doi.org/10.1145/3003715.3005414
7. Myounghoon Jeon, Bruce N. Walker, and Thomas M.
Gable. 2015. The effects of social interactions with in-
vehicle agents on a driver’s anger level, driving
performance, situation awareness, and perceived
workload. Applied Ergonomics 50: 185–199.
https://doi.org/10.1016/j.apergo.2015.03.015
8. Myounghoon Jeon, Bruce N. Walker, and Thomas M.
Gable. 2015. The effects of social interactions with in-
vehicle agents on a driver’s anger level, driving
performance, situation awareness, and perceived
workload. Applied Ergonomics 50: 185–199.
https://doi.org/10.1016/j.apergo.2015.03.015
9. Yeon Kyoung Joo and Roselyn J. Lee-Won. 2016. An
Agent-Based Intervention to Assist Drivers Under
Stereotype Threat: Effects of In-Vehicle Agents’
Attributional Error Feedback. Cyberpsychology, Behavior,
and Social Networking 19, 10: 615–620.
https://doi.org/10.1089/cyber.2016.0153
10. Nihan Karatas, Soshi Yoshikawa, Shintaro Tamura, Sho
Otaki, Ryuji Funayama, and Michio Okada. 2017. Sociable
driving agents to maintain driver’s attention in autonomous
driving. In RO-MAN 2017 - 26th IEEE International
Symposium on Robot and Human Interactive
Communication, 143–149.
https://doi.org/10.1109/ROMAN.2017.8172293
11. Johannes Maria Kraus, F Nothdurft, P Hock, D Scholz, W
Minker, and M Baumann. 2016. Human after all: Effects of
mere presence and social interaction of a humanoid robot
as a co-driver in automated driving. In 8th International
Conference on Automotive User Interfaces and Interactive
Vehicular Applications, AutomotiveUI 2016, 129–134.
https://doi.org/10.1145/3004323.3004338
12. Johannes Maria Kraus, Jessica Sturn, Julian Elias Reiser,
and Martin Baumann. 2015. Anthropomorphic agents,
transparent automation and driver personality: towards an
integrative multi-level model of determinants for effective
driver-vehicle cooperation in highly automated vehicles.
Adjunct Proceedings of the 7th International Conference
on Automotive User Interfaces and Interactive Vehicular
Applications - AutomotiveUI ’15: 8–13.
https://doi.org/10.1145/2809730.2809738
13. Kwan Min Lee, Younbo Jung, Jaywoo Kim, and Sang
Ryong Kim. 2006. Are physically embodied social agents
better than disembodied social agents?: The effects of
physical embodiment, tactile interaction, and people’s
loneliness in human-robot interaction. International Journal
of Human-Computer Studies 64, 10: 962–973.
https://doi.org/10.1016/j.ijhcs.2006.05.002
14. Clifford Nass, Ing-Marie Jonsson, Helen Harris, Ben
Reaves, Jack Endo, Scott Brave, and Leila Takayama.
2005. Improving automotive safety by pairing driver
emotion and car voice emotion. In CHI ’05 Extended Abstracts on Human Factors in Computing Systems, 1973–1976.
https://doi.org/10.1145/1056808.1057070
15. Satoshi Okamoto and Shin Sano. 2017. Anthropomorphic
AI Agent Mediated Multimodal Interactions in Vehicles. In
9th International ACM Conference on Automotive User
Interfaces and Interactive Vehicular Applications
(AutomotiveUI ’17), 110–114.
https://doi.org/10.1145/3131726.3131736
16. Jens Zihsler, Philipp Hock, Marcel Walch, Kirill Dzuba,
Denis Schwager, Patrick Szauer, and Enrico Rukzio. 2016.
Carvatar: Increasing Trust in Highly-Automated Driving
Through Social Cues. In Proceedings of the 8th
International Conference on Automotive User Interfaces
and Interactive Vehicular Applications Adjunct -
AutomotiveUI ’16, 9–14.
https://doi.org/10.1145/3004323.3004354