A Research Agenda for Mixed Reality in Automated Vehicles
Andreas Riegler
University of Applied Sciences Upper Austria, Hagenberg, Austria
Johannes Kepler University Linz, Linz, Austria
andreas.riegler@fh-hagenberg.at

Andreas Riener
Technische Hochschule Ingolstadt, Ingolstadt, Germany
andreas.riener@thi.de

Clemens Holzmann
University of Applied Sciences Upper Austria, Hagenberg, Austria
clemens.holzmann@fh-ooe.at
ABSTRACT
With the increasing maturity and availability of mixed reality (MR), the number of its purposes and applications in vehicles rises. Mixed reality may help to increase road safety, support more immersive (non-)driving related activities, and finally enhance the driving and passenger experience. MR may also be the enabling technology to increase trust and acceptance in automated vehicles and therefore support the transition towards automated driving. Further, automated driving extends the use cases of virtual reality and other immersive technologies. However, there are still a number of challenges with the use of mixed reality in vehicles, and several human factors issues remain to be solved. This paper presents a research agenda for using mixed reality technology for automotive user interfaces (UIs) by identifying opportunities and challenges.
CCS CONCEPTS
• Human-centered computing → Mixed / augmented reality; Virtual reality; Interaction techniques; User studies; Scenario-based design; Interface design prototyping.
KEYWORDS
research agenda, mixed reality, automotive user interfaces, automated driving
ACM Reference Format:
Andreas Riegler, Andreas Riener, and Clemens Holzmann. 2020. A Research
Agenda for Mixed Reality in Automated Vehicles. In 19th International
Conference on Mobile and Ubiquitous Multimedia (MUM 2020), November
22–25, 2020, Essen, Germany. ACM, New York, NY, USA, 13 pages. https:
//doi.org/10.1145/3428361.3428390
1 INTRODUCTION
Since 2014, the number of scientific publications on mixed reality in automated driving has increased rapidly (see Figure 1). We searched for augmented, virtual and mixed reality in automated driving on ScienceDirect, ACM Digital Library and IEEE Xplore
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
MUM 2020, November 22–25, 2020, Essen, Germany
© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-8870-2/20/11...$15.00
https://doi.org/10.1145/3428361.3428390
and found more than 270 papers. Specifically, the AutomotiveUI community has published approx. 100 research papers (including full and short papers, works in progress, workshops and demos) utilizing mixed reality in vehicles, with an increasing trend (2014: 15 publications, 2019: 47 publications).
Figure 1: Scientific publications on mixed reality in automated driving, 2014–2019.
The reality-virtuality continuum [91] describes different levels of reality, from the real world to a fully immersive virtual environment, the so-called virtual reality (VR), which solely consists of 3D digital objects [139]. Mixed reality (MR) encompasses all variations between the real world and the virtual environment. As such, augmented reality (AR) is a technology in which virtual objects are integrated into the real world in real time [6]. The emergence of the Microsoft HoloLens, Magic Leap and similar AR technology allows researching in-vehicle use of AR in a more immersive way [69], thereby enabling researchers to explore human-machine interaction concepts more realistically using these AR devices in both lab and field studies. Advances in vehicle automation, such as the latest Audi A8 equipped with SAE Level 3 (L3) assistance, provide further use cases for AR applications to support work- and entertainment-related activities [113, 151] or increase trust in automated driving [49, 153]. Additionally, VR head-mounted displays (HMDs) such as the HTC Vive and Oculus Quest, in conjunction with interaction tracking technologies like Leap Motion for hand tracking and Ultraleap Stratos for haptics, provide immersive experiences in
Figure 2: Results of the workshop on MR in intelligent vehicles. (a) Goals and opportunities. (b) Accessibility challenges. (c) Usability testing challenges. (d) Evaluation challenges.
Table 1: MR applications for SAE level 3 and higher automated driving.

AR:
- Infotainment/Gamification [20, 76, 125, 138, 140, 148]
- Navigation [7, 12, 121, 141]
- Trust [50, 146, 152, 153]
- Feedback/Situation awareness [64, 70, 71, 123]
- Mobile office [59, 120, 134, 145]
- Hazard indication [45, 143]
- Depth perception [82, 84]
- Task switching [35]
- Passenger experiences [46]

VR:
- Pedestrian interaction [10, 23, 25, 32, 44, 54, 79, 83, 90, 99]
- Driving simulation [4, 9, 41, 42, 52, 77, 112, 124, 144]
- Feedback/Situation awareness [3, 53, 78, 81, 150]
- Tele-operated driving [9, 14, 40, 98]
- Passenger experiences [11, 65, 87]
- Navigation [66, 142]
- Trust [92, 129, 146]
- Motion sickness [31, 57]
- Driving training [34, 137]
fully simulated digital environments. These advances in VR technology can, for example, be used to simulate AR content [110]. Recent research [56, 146] shows promise in this VR approach.
It has been more than five years since Gabbard et al.'s landmark research paper on driver challenges and opportunities for AR automotive applications [36]. In 2014, technologies for AR in vehicles were not yet mature enough for practical use. Challenges that needed to be solved included computation and perception issues, such as accurate capturing and interpretation of road geometry, precise vehicle positioning, compensation for vibrations, driver monitoring, and the implementation of sophisticated algorithms to create precise augmentation content in the driver's field of view, among others. Additionally, physical, cognitive, and experiential issues (e.g., focus change in AR systems causing eye strain; dissonance in the equilibrium sense causing simulator sickness; change blindness; cognitive overload; driver and passenger experience) needed to be tackled as well [36]. We studied all AutoUI publications since 2014 and found that an increasing number of them utilize mixed reality technology for their user studies. Mixed reality technology in vehicles has been researched for several years now and has been shown to increase trust and safety in automated driving as well as foster driving comfort [7, 13, 49, 67, 68, 130, 153].
Table 1 gives an overview of selected augmented and virtual reality (AR, VR) applications. VR is commonly used for prototyping, simulating pedestrian interactions and tele-operated driving in highly specialized scenarios which would be considered unsafe in the real world. Additionally, passenger experiences are often modeled in VR. On the other hand, AR is researched for infotainment and gamification concepts in automated driving, as well as for displaying navigation hints and hazard indications. More recently, mobile work and well-being have become researched topics in the AR domain.
The 2019 International Conference on Automotive User Interfaces and Interactive Vehicular Applications [1] was accompanied by a workshop on mixed reality in intelligent vehicles [108] with the purpose of finding a holistic approach to MR use in driving. In this paper, we present the results of this workshop with a focus on the practical usage of mixed reality as well as future developments of this technology in intelligent vehicles, categorized into opportunities and challenges.
Figure 2 shows our brainstorming and discussion results. The main focus of the workshop was to identify challenges, subcategorized into accessibility (Figure 2b), usability testing (Figure 2c) and evaluation criteria (Figure 2d), as well as goals and opportunities (Figure 2a), which are presented in detail in the following.
2 GOALS AND OPPORTUNITIES
Utilizing mixed reality technology can benefit drivers, passengers and pedestrians. One benefit of MR is the ability to personalize content presentation (e.g., using a head-mounted display or a large 3D AR windshield display). In this section, we describe goals and opportunities that MR technology delivers in combination with automated vehicles.
2.1 Perception
An opportunity of MR technology in intelligent vehicles is to overlay information on the windshield, sparing the driver from looking down to dashboard displays and helping them stay focused on the road [134]. However, the eyes must accommodate between the outside environment and the inside AR display. In this regard, the distance at which information is projected must also be researched further [82, 84]. In partial automation, AR head-up or windshield displays assist human drivers by highlighting navigation cues and hazards [45, 143]. For example, adjusting the translucency of parts of the windshield display led to decreased braking times [143], and highlighting the location of hazards on the windshield display (WSD) led to fewer shifts of visual focus and attention away from the road [45]. Additionally, it is possible to personalize the user interface using virtual/digital 3D objects rather than hardware buttons and knobs. Initial research conducted by Haeuslschmid et al. [55] and Riegler et al. [113] shows that drivers of automated vehicles prefer personalized user interfaces tailored to their contextual situation (e.g., work-related or entertainment-related activities).
2.2 Multimodal Interaction and Arbitration
Employing MR technology in vehicles additionally opens up new interaction modalities, i.e., ways in which drivers and passengers can control the car and perform non-driving related activities. Such interaction types include gestures [116], speech [103], gaze [95, 107], haptic controls [47], etc. For example, Lagoo et al. [72] show that using gestures to interact with a HUD, as opposed to a head-down display (HDD), reduces driver distraction, ultimately leading to fewer collisions. Moreover, there is an opportunity to wear HMDs, such as the Microsoft HoloLens, as opposed to built-in HUD displays [8]. Specifically, Benz et al.
[8] found that projection-based displays induced less simulator sickness in a real-vehicle driving simulator. However, with future technological advancements of HMDs, especially considering reduced weight, head-tracking and stereoscopic vision, HMDs might become a viable means of conveying information compared to HUDs.
2.3 Establish Guidelines and
Recommendations
In traditional UI and HCI design, Shneiderman's Golden Rules [131] and Norman's Principles of Interaction Design [100] are well established. Consequently, when it comes to designing MR applications, guidelines and recommendations play an important role and may differ from the existing, traditional ones (e.g., for designing desktop and mobile interfaces). From tackling motion sickness to enhancing user experience and vehicle safety, such standardized "rules" must be developed with high priority. Interaction design guidelines could, for example, cover the level of safety of MR interactions, the use of central vs. peripheral information systems, and a design space for MR applications [149]. The provision of AR UI widgets and toolkits can further help to create a uniform design (e.g., fonts, symbols, strokes) across different OEMs.
2.4 Support Non-Driving Related Task (NDRT)
Experiences
For highly automated vehicles, non-driving related tasks (NDRTs) will replace the primary driving task. The transition from today's information displays to the entertainment and experience displays of the future opens new possibilities for automotive UI designers. For example, using VR technology, the outside surroundings can be replaced with dynamically created digital content that adapts to the vehicle's driving behavior (direction, speed, etc.) [87]. It is foreseeable that games and other entertainment experiences will be established in future vehicles. Gamification approaches can be used to establish trust in automated driving [50, 138, 148]. For example, Steinberger et al. [138] propose a combination of video game design theory and road safety psychology to engage drivers to drive more safely while maintaining fun by displaying 3D AR objects on the windshield. Additionally, passengers in the back of the vehicle can be made aware of car activities using MR-enabled side windows or displays. Co-operative AR user interfaces enable passengers in the vehicle and in external vehicles to communicate with each other [79, 83]. Furthermore, pedestrian-vehicle communication can be achieved with MR technology using MR-enabled windows and displays. The vehicle could augment its intention to pedestrians. For example, Loecken et al. [83] investigated how automated vehicles (AVs) should interact with pedestrians and found that displaying information on the street and using unambiguous signals (e.g., green lights) has a positive impact on pedestrian crossing safety and user experience. Additionally, MR can act as an enabler for mobile office tasks as well [120]. Schartmueller et al. [120] compared auditory and head-up displays for work-related tasks and found that the latter enable sequential multi-tasking as well as reduce workload and improve productivity.
3 CHALLENGES
The goal of our research roadmap is to address the challenges in HCI
research with MR in automated driving. Therefore, the main focus
of our workshop was to identify challenges regarding user study
design and methods for evaluating MR applications and scenarios.
In the following, we present in detail the challenges of applying
MR technology in automated vehicles according to our workshop
participants.
3.1 Challenge 1: Accessibility
One of the biggest challenges for provisioning mixed reality in automated vehicles is to make it accessible to its users. Drivers must trust MR and see the benefit of using it, which requires context awareness and the ability to explain the driving situation and, if needed, provide take-over assistance.
3.1.1 Ensure Situation Awareness (SA). Mixed reality must be implemented in a way that provides drivers with a sufficient level of situation awareness [28]. To this end, the driver's status must be monitored: "Is the driver ready?" "Has the driver checked all objects nearby?" The intelligent vehicle must take appropriate actions should the driver "fall out of the loop" [115]. Riener et al. [115] state the need to identify how automotive HCI researchers can best monitor and measure driver states, how they can provide appropriate feedback depending on those states, and how they can estimate the driver's response to that feedback. Riener et al. [115] foresee a relaxation of visual display requirements in higher levels of vehicle automation (SAE levels 3 and 4), as opposed to the eyes-off-the-road or eyes-off-focus requirements of manual driving.
3.1.2 Interface Display. On the one hand, MR technology can be used to highlight dangerous objects from the outside environment on the interface display, such as a head-up or windshield display [45, 143]. Haeuslschmid et al. [45] propose utilizing AR windshield displays to visualize warnings at the position of the hazard and found that drivers exhibit calmer gaze behavior (i.e., fewer shifts of visual focus and attention away from the road to detect an equivalent number of hazards) compared to a no-display condition. On the other hand, a certain level of diminished reality can be achieved by using MR to adapt the interfaces to the current state of the driver and environment. Sawabe et al. [117, 118] aim to reduce motion sickness in automated driving by using a vection illusion to induce the user to make a preliminary movement against the real acceleration. For this, the intelligent vehicle must be aware of the driving context. Furthermore, using head-mounted displays within vehicles provides an opportunity to show private information to each occupant without violating their privacy.
3.1.3 Explainable UI. Regarding information transfer from the vehicle to the driver/passengers, MR can be used to convey the intent of the vehicle: "What does the vehicle want to do?" This way, system knowledge and actions are made clear to the occupants of the vehicle. An explainable UI can be utilized for automated driving systems to show whether the artificial intelligence (AI) is able to take over driving control and the reasons for doing so [64]. MR projects what the car is "thinking" and gives the driver/passenger an opportunity to decide the next action [71, 123]. Other
than for driving information, MR can be employed for work-related or entertainment purposes, such as immersive communication or video games [104]. Additionally, MR has the potential to provide pedestrians with information, such as route guidance [7].
3.1.4 Trust and Acceptance. MR can further be applied to leverage and increase trust in automated driving. We have to consider trust while keeping the driver in the loop: "When should the information flow be reduced?" Therefore, we must further investigate how trust can be managed with MR. For example, trust can be improved by displaying the behaviors of the vehicle, i.e., making its actions transparent [49, 152]. Wintersberger et al. [152] stress that as more and more fully automated vehicles enter our traffic systems, it will be very important to implement means for fostering user acceptance and trust. Wintersberger et al. [152] utilized AR aids in fully automated driving, aiming to foster trust by increasing system transparency and communicating upcoming maneuvers, and found that augmenting traffic objects relevant for a driving scenario can increase user trust, and that users felt anxious without AR indicators. Similarly, Haeuslschmid et al. [49] use AR to visualize the car's interpretation of the current situation and its corresponding actions by means of a chauffeur avatar and a miniature representation of the road environment and found that such visualizations can increase trust in automated driving.
3.1.5 Risk Awareness. Especially for SAE levels 3 and 4, MR technology can be applied to make the driver risk-aware by pointing out the vehicle's level of confidence [126]. Schroeter et al. [126] present an approach to increase situational awareness by means of AR through carefully designed applications focused on amplifying voluntary attention. Their example application, Pokémon DRIVE, utilizes game design elements such as challenge, rewards, and narratives to direct driver attention to road hazards. The driver then needs to perform an interaction to acknowledge the AR hazard (e.g., a swipe action on the steering wheel). However, modeling risk awareness in terms of user interface design is challenging: "What objects from the outside environment should be visualized/highlighted by MR? All vehicles and pedestrians, or just moving vehicles and pedestrians intending to cross the street?"
3.1.6 Motion Sickness. One of the most important challenges when implementing MR in automated vehicles is to avoid motion sickness [30]. As Diels et al. [30] state, envisioned scenarios for self-driving cars will lead to an increased risk of motion sickness, which will negatively affect safety and user acceptance. They further remark that self-driving cars cannot simply be thought of as living rooms on wheels. Since the primary driving task no longer lies with the driver, it is essential to combat sickness occurring during NDRTs. Additionally, when using VR in automated vehicles, the overlay of virtual content must comply with the movement of the vehicle. Low latency between the real movement of the vehicle and its digital representation must therefore be ensured [42]. For automated vehicles, the use of MR provides the benefit of reducing motion sickness by showing navigation cues to the occupants of the vehicle, thereby allowing them to prepare for sharp turns or overtaking maneuvers. For example, McGill et al. [88] investigated the use of VR HMDs in automated driving scenarios to combat motion sickness. They found that there is no single best presentation with respect to balancing sickness and immersion; rather, according to user preferences, different solutions are required for differently susceptible users to provide usable in-vehicle VR.
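The latency requirement above can be illustrated with a minimal sketch: if the sensing-and-rendering pipeline lags behind the vehicle's real rotation by a known amount, the virtual camera pose can be extrapolated over that delay so the rendered scene stays in step with the felt motion. The function name and all values below are illustrative assumptions, not taken from the systems cited above.

```python
# Minimal sketch of keeping virtual content aligned with real vehicle
# motion: extrapolate the vehicle's yaw over the known pipeline latency
# so the virtual camera does not lag behind the felt rotation.

def predict_yaw(yaw_deg: float, yaw_rate_deg_s: float, latency_s: float) -> float:
    """Constant-rate extrapolation of vehicle yaw over the pipeline latency."""
    return yaw_deg + yaw_rate_deg_s * latency_s

# Example: the IMU reports 10 deg of yaw while turning at 20 deg/s, and
# the sensing + rendering pipeline lags by 50 ms.
camera_yaw = predict_yaw(10.0, 20.0, 0.050)  # about 11 deg
```

Such simple extrapolation only masks small, roughly constant delays; varying latency or noisy rate signals would call for a proper filter (e.g., a Kalman-style predictor).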
3.1.7 Inattentional Blindness. The use of MR technology in the vehicle should not cause inattentional blindness, i.e., the driver failing to perceive visual information that is relevant, detectable and within the useful field of view [62]. Especially in SAE level 3 driving scenarios, driver attention is critical and difficult to maintain. Therefore, designers of MR applications should strive to reduce unnecessary distractions for drivers. The ability for the driver to personalize their display(s) could influence inattentional blindness [55, 109, 114]. Haeuslschmid et al. [55] let drivers put together their own personalized layout (e.g., a music player) in manual driving and found that personalized layouts resulted in less safe driving (worse response rates and times). In SAE level 3 driving, drivers seem to be aware of the limitations of vehicle automation when it comes to potential take-overs, and this circumstance is reflected in personalized layouts: work-related information is placed in more peripheral areas of the windshield display in SAE level 3 driving, while it is placed more centrally in SAE level 5 vehicle automation [114].
3.1.8 Training. In addition, MR can facilitate driver training outside of real-world environments. For higher levels of vehicle automation (SAE level 3 and up), even a new driving license may be needed. Utilizing MR driving simulators has the benefit of reducing accident risks for beginners and simulating traffic scenarios that would be hard to find or reproduce in the real world. VR driving simulators have the ability to visualize various settings, from mountainous terrain to densely populated cities [9, 77, 112]. Several open source VR driving simulators with a focus on HCI research were recently released to facilitate these research efforts [41, 110].
3.1.9 Take-Over. MR can also be applied to take-over maneuvers [80, 111]. Lindemann et al. [80] examine the effects of using an AR interface with world-relative visualizations to assist drivers before a take-over and found that the AR interface, compared to a traditional head-down display, resulted in higher lateral performance and reduced workload in situations where steering is required directly after a take-over, as well as better comfort of use and helpfulness. In case of partial vehicle automation, MR can also be employed to visualize take-over requests. In this regard, all recognized and relevant objects in the outside environment (e.g., cars, road, pedestrians) can be highlighted to help the driver focus on the take-over. Schall et al. [119] visualize AR cues to increase driving safety among elderly drivers who are at higher crash risk because of cognitive impairments. Using these AR cues led to improved detection of hazardous target objects of low visibility while not interfering with the detection of nonhazardous secondary objects. Further exploration of take-overs using MR, especially while drivers perform NDRTs, would therefore be advisable.
3.1.10 Communication with the Outside World. Moreover, MR applications can convey information to people outside the vehicle and support interaction with pedestrians [24, 48]. Examples are playful interactions (while pedestrians are waiting at a red light [122]) or helpful information (e.g., helping tourists navigate to their destination; displaying a warning that danger is ahead [29, 135]). MR technology enables
both private and shared spaces and thereby facilitates social connections. In addition to vehicle-to-pedestrian communication, pedestrians can communicate with each other via the MR technology of an intelligent vehicle. For example, the large windows/displays of vehicles enable pedestrians to share and visualize more information than by using smartphones. Vehicle-to-environment communication enables further use cases, such as notifying other road users (cars, cyclists, etc.) of critical events (e.g., accidents, traffic jams).
3.1.11 Work, Entertainment and Well-Being. MR for work and entertainment purposes can be advantageous for highly automated driving (SAE level 3 and higher) [114, 151]. The outside environment could be completely overlaid with a virtual scene in real time. For example, instead of driving through a city, the passengers of the vehicle would see a visualization of a beautiful beach. For shared vehicles, MR technology enables information displays tailored to the personalization needs of each "user" of the vehicle. Instead of having a "one size fits all" solution, MR provides customizable displays according to the needs of the driver/passengers [55]. Haeuslschmid et al. [55] investigated driver needs for personalizable content on a windshield display in order to shift the urge of drivers to access content on smartphones while driving. Their findings show that drivers do prefer personalization, although such layouts may not be safe, which must be investigated further. Additionally, different visualization techniques using AR windshield displays might enable drivers to work more productively in a vehicular environment when the digital content blends into the outside environment [111]. An exploratory study by Riegler et al. [114] compared text comprehension and take-over performance on screen-fixed HUDs vs. world-relative displays and found that world-relative visualizations achieved better task performance and faster take-over times.
3.1.12 Hardware and Technological Limitations. Another challenge is the resolution of AR head-up displays, which are currently small and offer limited space for information presentation. Microsoft recently released the HoloLens 2 HMD, which has vastly improved performance and a larger field of view than its predecessor, making it more attractive for AR explorations in automated vehicle HCI research [147]. Moreover, the entire vehicle specification must be considered, such as available (graphics) processors, bandwidth and other devices. A holistic approach towards hardware-related solutions is therefore needed.
3.1.13 Laws and Regulations. Further challenges to the employment of MR technology include laws and regulations [132, 133], which limit the potential use cases for this technology, as well as the connection between researchers, hard- and software suppliers, and OEMs. Smith [132, 133] presents steps that governments can take to encourage the development, deployment and use of automated road vehicles, as future mobility concepts emerge and the research, development, demonstration, and deployment of these automated driving technologies accelerate.
3.2 Challenge 2: Usability Testing
Usability testing for MR scenarios can differ from traditional user studies, as many users are not yet familiar with MR devices, such as head-mounted displays or CAVEs, and how to interact with them. In this section, we present methods and measures our workshop participants came up with to adapt usability testing to mixed reality applications.
3.2.1 Methods. Methods for testing MR scenarios which have been suggested or applied so far include:
- Rapid prototyping [5, 61, 75, 112, 136]
- Wizard of Oz [74, 94, 96]
- Testbeds [19, 58, 85, 86, 105, 106]
- Benchmark scenarios [89]
MR concepts and use cases should be designed for each level of driving automation. The question arises whether the same or different scenarios should be used when interacting with MR content. For example, do we have to define new interaction modalities for MR as opposed to traditional dashboard displays? Additionally, user testing methods should be usable across different scenarios and domains. VR setups further simplify and speed up the prototyping process. Using VR technology, such content (VR applications) can easily be made universally available across research institutions, making research more reproducible.
3.2.2 Data Sets. Data sets of (published) user studies should be made available as open source in order to support their reproducibility [21]. Going further, research institutions working together across studies can provide and access these data sets to more rapidly test methods and develop new ones. Additionally, open source provision of simulations and scenarios would be beneficial (e.g., [41, 112]). The consensus was that a coordinated approach can greatly speed up the HCI research process on MR scenarios (e.g., making MR passenger experiences openly available, which could simply be a Unity scene).
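To make the sharing idea concrete, a coordinated approach could agree on a uniform trial-log format that any institution can re-analyze. The sketch below shows one hypothetical minimal schema; all field names are invented for illustration and are not a proposed community standard.

```python
# Hypothetical minimal record format for openly shared MR user-study logs.
import csv
import io
from dataclasses import asdict, dataclass, fields

@dataclass
class TrialRecord:
    participant: str        # anonymized participant ID
    sae_level: int          # automation level during the trial
    condition: str          # e.g. an AR windshield display vs. a baseline HDD
    ssq_total: float        # simulator sickness score after the trial
    tor_reaction_s: float   # take-over reaction time in seconds

def to_csv(records):
    """Serialize trial records to CSV so other institutions can re-analyze them."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(TrialRecord)])
    writer.writeheader()
    for r in records:
        writer.writerow(asdict(r))
    return buf.getvalue()

print(to_csv([TrialRecord("P01", 3, "AR-WSD", 18.7, 2.4)]))
```

A flat, self-describing format like this keeps the data usable without the original simulation software, complementing the open provision of the scenarios themselves.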
3.2.3 Simulations. Commonly, user studies on automated driving are conducted either using high-fidelity driving simulators with motion platforms or even prototypical automated vehicles on test grounds, or using low-fidelity setups with 2D monitors. The question arises whether currently available MR technology and fidelity can replace on-road evaluations, and what modalities are needed for good MR testing environments [9, 27, 77, 112, 124]. A number of (open-source) driving simulation testbeds have been developed with different goals, such as a 360° video-based driving simulator by Gerber and Schroeter [41, 124] or one with a focus on windshield display and multimodal interaction research by Riegler et al. [110, 112]. Comparatively, access to MR equipment is cost-efficient and can therefore be facilitated for HCI researchers more easily than expensive driving simulators. Further, simulations using MR technology must be evaluated as to whether they are also valid in real-world situations. For example, how does VR/simulated AR translate to "real" AR (e.g., depth, refocusing, parallax effects, etc.)? Goedicke et al. [42] developed a virtual reality simulator that can be used in real vehicles, thereby achieving higher levels of immersion. The main challenge for simulations is to find the level of fidelity necessary for MR content to be considered valid.
A Research Agenda for Mixed Reality in Automated Vehicles MUM 2020, November 22–25, 2020, Essen, Germany

3.2.4 Measures. User testing measures should follow a mixed-modality design, applying method triangulation [102], such that multiple measures are considered at a time (e.g., self-developed or standardized questionnaires, activity logging, and semi-structured interviews). Pettersson et al. [102] investigated 280 papers, found that method triangulation was applied in more than two-thirds of the user studies, and concluded that, for stronger results in UX studies, collected data should be better integrated and structured. The consensus was that the following measures should be included in MR user tests:
- Motion or simulator sickness, e.g., using the Simulator Sickness Questionnaire (SSQ) [30, 63]
- Physiological measures, e.g., heart-rate variability and galvanic skin response [18]
- Workload, e.g., self-rating scales such as the NASA Task Load Index (TLX) [18]
- Level of immersion or presence, e.g., using the Igroup Presence Questionnaire (IPQ) [15, 127]
- Oculometric measures, such as glance or gaze behavior [17, 95]
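Several of these instruments have simple, well-documented scoring schemes that can be automated in an analysis pipeline. As an illustration, the SSQ is commonly scored by summing 0–3 symptom ratings within three overlapping clusters and applying fixed weights; the sketch below uses the cluster mapping and weights commonly attributed to Kennedy et al., while the function name and input format are our own choices:

```python
# Sketch of SSQ scoring. Cluster membership and weights follow the
# commonly cited Kennedy et al. scheme; input is a dict mapping SSQ
# symptom names to ratings 0-3 (omitted symptoms count as 0).

NAUSEA = ["general discomfort", "increased salivation", "sweating", "nausea",
          "difficulty concentrating", "stomach awareness", "burping"]
OCULOMOTOR = ["general discomfort", "fatigue", "headache", "eyestrain",
              "difficulty focusing", "difficulty concentrating",
              "blurred vision"]
DISORIENTATION = ["difficulty focusing", "nausea", "fullness of head",
                  "blurred vision", "dizzy (eyes open)",
                  "dizzy (eyes closed)", "vertigo"]

def ssq_scores(ratings: dict) -> dict:
    # Raw cluster scores are plain sums of the 0-3 symptom ratings.
    n_raw = sum(ratings.get(s, 0) for s in NAUSEA)
    o_raw = sum(ratings.get(s, 0) for s in OCULOMOTOR)
    d_raw = sum(ratings.get(s, 0) for s in DISORIENTATION)
    # Fixed weights convert raw sums into the reported subscale scores.
    return {
        "nausea": n_raw * 9.54,
        "oculomotor": o_raw * 7.58,
        "disorientation": d_raw * 13.92,
        "total": (n_raw + o_raw + d_raw) * 3.74,
    }

# Example: a participant reporting mild nausea and eyestrain.
# nausea 9.54, oculomotor 7.58, disorientation 13.92, total ~= 11.22
print(ssq_scores({"nausea": 1, "eyestrain": 1}))
```

Note that a symptom such as "nausea" contributes to more than one cluster, which is why the three subscales are not independent.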
3.2.5 User Experience and Preference. User testing for MR applications should include user experience (UX) data and user preferences; however, preference is not necessarily equal to better or safer design. Therefore, safety aspects should also be evaluated in user studies. For example, depending on the level of vehicle automation [26], take-over request (TOR) performance could be evaluated [89]. Concrete TOR performance measures would be the reaction time, maximum longitudinal and lateral accelerations, and time to collision [43, 89]. Furthermore, MR technology should not be confined to in-vehicle driver/driving experiences, but also extended to passengers (e.g., [46]) and pedestrians (e.g., [44, 46, 83]). For example, Häkkilä et al. [46] propose a passenger concept for AR social car windows that focuses on the surroundings, entertainment, and social aspects of the journey. As for pedestrian user experiences, Gruenefeld et al. [44] investigate different interaction modalities between vulnerable road users and automated vehicles, such as vehicles displaying their intentions or reacting to pedestrians' gestures. To this end, they created a VR testbed to evaluate such experiences.
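The TOR performance measures named above can be derived directly from a logged vehicle trace. The following minimal sketch assumes a simple record of timestamped samples; the `Sample` fields and the log format are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float          # time since TOR onset (s)
    ax: float         # longitudinal acceleration (m/s^2)
    ay: float         # lateral acceleration (m/s^2)
    gap: float        # distance to lead vehicle (m)
    rel_speed: float  # closing speed toward lead vehicle (m/s, > 0 = closing)
    hands_on: bool    # driver input detected

def tor_metrics(trace: list) -> dict:
    # Reaction time: first sample after TOR onset with driver input.
    reaction = next((s.t for s in trace if s.hands_on), None)
    # Peak absolute accelerations over the take-over window.
    max_long = max(abs(s.ax) for s in trace)
    max_lat = max(abs(s.ay) for s in trace)
    # Time to collision: gap / closing speed, only while actually closing.
    ttcs = [s.gap / s.rel_speed for s in trace if s.rel_speed > 0]
    return {
        "reaction_time_s": reaction,
        "max_long_acc": max_long,
        "max_lat_acc": max_lat,
        "min_ttc_s": min(ttcs) if ttcs else None,
    }

# Tiny example trace: driver takes over 1.2 s after the TOR and brakes.
trace = [
    Sample(0.0, 0.0, 0.0, 50.0, 10.0, False),
    Sample(1.2, -2.0, 0.5, 38.0, 10.0, True),
    Sample(2.0, -4.0, 1.0, 32.0, 6.0, True),
]
print(tor_metrics(trace))
```

In practice the trace would come from the simulator or vehicle bus at a fixed sampling rate, and the metrics would be aggregated per participant and condition.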
3.3 Challenge 3: Evaluation Criteria
Evaluation criteria for MR scenarios differ from those of traditional user studies, as extra care and caution are needed when using and interacting with MR devices. Inexperienced MR users must be introduced to this technology, and simulator sickness can occur if the technology is not implemented correctly. In this section, we present evaluation criteria for mixed-reality-centered user studies.
3.3.1 Safety and Human State. While MR offers the possibility to increase safety, we must consider challenges like information density, cognitive overload, and visual distraction. Keeping the driver in the loop [128] must therefore be a high priority for MR applications in vehicles. Seppelt et al. [128] highlight the importance of continuous feedback on the state and behavior of automation, as well as the feedback design needed to direct attention to informing rather than alerting drivers. Further, task performance as well as situation awareness (SA) measures should be defined for MR activities, i.e., driver response to hazards and glance behavior [28, 35]. De Winter et al. [28] investigated workload and SA during highly automated driving and found that drivers are likely to pick up tasks that are unrelated to driving. High vehicle automation can lead to improved SA compared to manual driving if drivers are encouraged to detect objects in the environment. In this context, MR can be exploited to increase the driver's situation awareness and take-over performance (e.g., [60, 95, 101]). For example, Langlois et al. [73] compared AR and classical HUDs displaying navigational information and found that AR both improves driving comfort and helps better anticipate lane-change maneuvers when performing take-overs. Riegler et al. [111] found that world-relative visualization of (work- or entertainment-related) text content can improve task and take-over performance compared to traditional screen-fixed HUDs.
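Glance-based SA measures such as those above can be computed from gaze logs in which each sample is labeled with an area of interest (AOI). A minimal sketch, assuming a fixed sampling rate and per-sample AOI labels (the label names and function are illustrative):

```python
from itertools import groupby

def glance_stats(aoi_samples: list, dt: float = 1 / 60) -> dict:
    """Aggregate per-sample AOI labels (e.g., 'road', 'hud', 'mirror')
    into glances; dt is the sampling interval in seconds."""
    # Consecutive samples on the same AOI form one glance.
    glances = [(aoi, sum(1 for _ in grp) * dt)
               for aoi, grp in groupby(aoi_samples)]
    off_road = [dur for aoi, dur in glances if aoi != "road"]
    total = len(aoi_samples) * dt
    return {
        "glance_count_off_road": len(off_road),
        "mean_off_road_glance_s":
            sum(off_road) / len(off_road) if off_road else 0.0,
        "percent_eyes_off_road":
            100 * sum(off_road) / total if total else 0.0,
    }

# Example at 60 Hz: 2 s road, 1 s HUD, 1 s road, 0.5 s HUD.
samples = ["road"] * 120 + ["hud"] * 60 + ["road"] * 60 + ["hud"] * 30
print(glance_stats(samples))
```

Real eye trackers additionally require fixation filtering and handling of tracking loss, which this sketch omits for brevity.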
3.3.2 User-centered Criteria. Regarding user-centered criteria, trust in and acceptance of MR interactions, usability, and user experience were regarded as important. Further criteria include previous experience with this technology and, in this regard, the "wow" effect of MR. Additionally, MR enables adaptive user interfaces based on different user profiles and preferences. User tech-savviness plays an important role in MR research, as study participants should not be immediately confronted with this technology, but first introduced to it and given the chance to experience it before conducting user studies [22]. "Warm-up" strategies to familiarize users with the MR environment are therefore recommended [22]. Moreover, even when conducting user studies in real vehicles, utilizing VR environments prior to the real environment has been shown to yield better performance [22].
3.3.3 Simulator and Motion Sickness. When conducting user studies on MR in intelligent vehicles, simulator and motion sickness must be considered. Motion sickness may occur when stationary users experience self-motion or when lags between head movements and the presentation of the visual display take place [51]. Prior to conducting a user study, a pre-screening should be performed to test whether a study participant has existing issues, such as disorientation or nausea [147]. To assess simulator sickness among study participants, the simulator sickness questionnaire (SSQ) can be used [63]. To this end, the duration and immersion of user studies must be carefully planned, as these factors may lead to nausea, fatigue, headache, dizziness, etc. [97]. Murata [97] found that longer immersion in a VR environment induced postural instability and symptoms of motion sickness. Consequently, adaptive UIs could be used to limit motion sickness [33]. For example, Draper et al. [33] examined simulator sickness in virtual environments and found that the vestibular system is designed to detect and react to the position and motion of the head in space. When creating VR simulations, it is therefore necessary to coordinate motor function, ensure correct eye movements, and maintain posture in the virtual environment in order to reduce health and safety issues [33]. Should motion or simulator sickness occur, the cause of these effects (e.g., hardware issues or the actual concepts) must be identified. Therefore, the level of immersion and presence should be appropriately high: closeness of the scenarios to real-life situations makes them representative and universally applicable [16]. As Bowman and McMahan [16] state, full immersion is not always necessary; the goal of immersive virtual environments is to let the user experience a computer-generated world as if it were real, thereby producing a sense of presence.
3.3.4 Hardware. Additionally, not just for evaluating motion or simulator sickness, but also for the level of realism, the physical car state (e.g., acceleration, sounds), immersion, and presence should be as consistent and generalizable to reality as possible [16]. Bowman and McMahan [16] propose a multidimensional approach to immersion and presence in VR in which VR designers should strive for a balance of spatial understanding, information clutter, and peripheral awareness. This notion should be emphasized especially for VR studies on AR or MR usage, as hardware devices and setups (e.g., head-mounted displays, CAVEs) have limitations (e.g., resolution, field of view, refresh rate, display technology). Multisensory tangibles can be used to support direct interaction with the real world by enabling the use of real, physical objects and tools [93]. In VR driving simulators, for example, this could be achieved using a hardware steering wheel and pedals in combination with a virtual representation of the user's hands [110]. Furthermore, MR concepts should be coherent and holistic rather than multiple single concepts put together, even though such an approach makes it harder to isolate effects or evaluate the interference of factors (e.g., motion sickness, user experience, performance).
4 RESEARCH ROADMAP
Researchers working in the automotive UI domain may further explore mixed reality technology in the context of automated driving. However, there are still a number of open issues to clarify prior to a broad application of AR technology in vehicles, e.g., questions addressing the use of head-mounted displays or the utilization of windshield displays. Also, topics like context awareness, information relevancy, and view management are research topics of interest. In addition, further interaction modalities in combination with AR content should be researched for the creation of natural vehicle user interfaces (e.g., finger/hand gestures, speech commands, gaze input). Vehicle-to-vehicle and pedestrian-to-vehicle interaction have gained a lot of interest recently and should be further investigated in the light of MR (e.g., communication strategies in the exterior). The technical progress also extends the use cases of mixed reality applications, with implications for new human factors research fields, e.g., passenger, entertainment, and work experiences.

Gartner's hype cycle [37] is a graphical representation of a common pattern that occurs with emerging technologies, showing their current maturity status. The hype cycle can estimate the growth or depletion of a given technology. Back in 2014, both augmented and virtual reality were in the phase of "Trough of Disillusionment", meaning that these technologies had started to be rigorously tested with experiments and implementations [38]. In 2019, both AR and VR had disappeared from the Gartner hype cycle for emerging technologies [39], indicating that, for Gartner, mixed reality was no longer an emerging and experimental technology, but had already matured, i.e., become usable and useful. However, autonomous driving at SAE level 4 has reached the "Trough of Disillusionment". For AutomotiveUI HCI researchers, this implies that MR will (at least partially) enter vehicles in the foreseeable future, and, with its goals and opportunities in mind, the mentioned challenges must be addressed swiftly. For example, Daimler has recently announced and showcased an AR HUD which will display world-relative navigational cues as well as vehicle-related information to the driver as part of their MBUX (Mercedes-Benz User Experience) suite [2].
Our workshop and subsequent investigations have several implications for future mixed reality applications for automated driving. Based on the presented opportunities and challenges, we define the following overarching action points for HCI researchers and AR/VR designers in the automotive domain:
- View management for MR applications (SAE L3–L5). MR technology paired with automated driving opens the prospect of transitioning away from traditional hardware-based knobs and buttons to application-specific UI elements. AR head-up and windshield displays may be an enabler for minimalistic vehicle interiors and app-centered user experiences.
- Interaction modalities for MR-based NDRTs (SAE L3–L5), i.e., a holistic rather than a standalone approach must be explored for interacting with MR applications. A holistic approach would include gaze, gestural, haptic, speech/auditory, olfactory, and further interaction and feedback types. Using such an integrated approach in combination with proper view management has the potential to improve the driver's situation awareness and reduce motion sickness.
- Ensuring situation awareness while performing NDRTs (SAE L3 and L4). It is necessary to consistently monitor the driver state (e.g., using physiological measures) to allow for fast and successful take-overs. The driver needs to be aware of potential hazards. In this regard, context awareness must be investigated further to ensure both safe and pleasant transitions from automated to manual driving.
- Passenger experiences. The shift from manual to automated driving transforms active drivers into passive passengers. The use of AR glasses and VR HMDs should be explored further for combating motion sickness and creating practical in-car work- and entertainment-related experiences. Gamification concepts should be researched further to explore their potential to increase trust in, acceptance of, and eventually the popularity of automated vehicles.
- Pedestrian interactions. External vehicle interfaces must be researched further to allow safe interaction with vulnerable road users, such as pedestrians and cyclists.
- Simulation and testing. Due to the unlimited number of potential MR applications for in-vehicle and external usage, safety and usability testing criteria as well as simulations must be defined. Data sets should be made available to HCI researchers to ensure continuous development and improvement of these criteria.
5 CONCLUSION
Mixed reality applications for in-vehicle driver and passenger experiences, as well as external pedestrian experiences, are near a turning point. Usable MR display technology for the entertainment industry is already available, and recent prototypes of windshield-based optical see-through displays showcased by automotive manufacturers and suppliers, in combination with higher levels of vehicle automation, may facilitate commercially available systems within the next few years. The most successful automotive MR applications will have to consider a wide range of perceptual and cognitive issues within the (semi-automated) driving context. We present some of these challenges based on our expert workshop, and outline a research agenda. However, much more research is needed to ensure safe and practical MR applications for future mobility.
ACKNOWLEDGMENTS
This work was supported by the University of Applied Sciences
PhD program and research subsidies granted by the government of
Upper Austria.
REFERENCES
[1]
2019. AutomotiveUI ’19: Proceedings of the 11th International Conference on Auto-
motive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings
(Utrecht, Netherlands). Association for Computing Machinery, New York, NY,
USA.
[2]
Daimler AG. 2020. Meet the S-Class DIGITAL: ”My MBUX”. https://bit.ly/
32fu9RU. Accessed: 2020-08-20.
[3]
Mayank Agrawal and Kavita Vemuri. 2019. Analyzing High Decibel Honking
Eect on Driving Behavior Using VR and Bio-Sensors. In Proceedings of the 11th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications: Adjunct Proceedings (Utrecht, Netherlands) (AutomotiveUI ’19).
Association for Computing Machinery, New York, NY, USA, 81–85. https:
//doi.org/10.1145/3349263.3351516
[4]
Ignacio Alvarez, Laura Rumbel, and Robert Adams. 2015. Skyline: A Rapid
Prototyping Driving Simulator for User Experience. In Proceedings of the 7th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications (Nottingham, United Kingdom) (AutomotiveUI ’15). Association for
Computing Machinery, New York, NY, USA, 101–108. https://doi.org/10.1145/
2799250.2799290
[5]
Sang-Gyun An, Yongkwan Kim, Joon Hyub Lee, and Seok-Hyung Bae. 2017.
Collaborative Experience Prototyping of Automotive Interior in VR with 3D
Sketching and Haptic Helpers. In Proceedings of the 9th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications (Oldenburg,
Germany) (AutomotiveUI ’17). Association for Computing Machinery, New York,
NY, USA, 183–192. https://doi.org/10.1145/3122986.3123002
[6]
Ronald T Azuma. 1997. A survey of augmented reality. Presence: Teleoperators
& Virtual Environments 6, 4 (1997), 355–385.
[7]
Karlin Bark, Cuong Tran, Kikuo Fujimura, and Victor Ng-Thow-Hing. 2014.
Personal Navi: Benets of an augmented reality navigational aid using a see-
thru 3D volumetric HUD. In Proceedings of the 6th International Conference on
Automotive User Interfaces and Interactive Vehicular Applications. ACM, 1–8.
[8]
Tobias M. Benz, Bernhard Riedl, and Lewis L. Chuang. 2019. Projection Dis-
plays Induce Less Simulator Sickness than Head-Mounted Displays in a Real
Vehicle Driving Simulator. In Proceedings of the 11th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications (Utrecht,
Netherlands) (AutomotiveUI ’19). Association for Computing Machinery, New
York, NY, USA, 379–387. https://doi.org/10.1145/3342197.3344515
[9]
Konrad Bielecki, Marten Bloch, Robin Schmidt, Daniel López Hernández, Marcel
Baltzer, and Frank Flemisch. 2019. Tangible Virtual Reality in a Multi-User Envi-
ronment. In Proceedings of the 11th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications: Adjunct Proceedings (Utrecht,
Netherlands) (AutomotiveUI ’19). Association for Computing Machinery, New
York, NY, USA, 76–80. https://doi.org/10.1145/3349263.3351499
[10]
Marc-Philipp Böckle, Anna Pernestål Brenden, Maria Klingegård, Azra Habi-
bovic, and Martijn Bout. 2017. SAV2P: Exploring the Impact of an Interface
for Shared Automated Vehicles on Pedestrians’ Experience. In Proceedings
of the 9th International Conference on Automotive User Interfaces and Inter-
active Vehicular Applications Adjunct (Oldenburg, Germany) (AutomotiveUI
’17). Association for Computing Machinery, New York, NY, USA, 136–140.
https://doi.org/10.1145/3131726.3131765
[11]
Laura Boffi, Philipp Wintersberger, Paola Cesaretti, Giuseppe Mincolelli, and
Andreas Riener. 2019. The First Co-Drive Experience Prototype. In Proceedings
of the 11th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications: Adjunct Proceedings (Utrecht, Netherlands) (AutomotiveUI
’19). Association for Computing Machinery, New York, NY, USA, 254–259. https:
//doi.org/10.1145/3349263.3351318
[12]
Adam Bolton, Gary Burnett, and David R Large. 2015. An Investigation of
Augmented Reality Presentations of Landmark-Based Navigation Using a Head-
up Display. In Proceedings of the 7th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications (Nottingham, United Kingdom)
(AutomotiveUI ’15). Association for Computing Machinery, New York, NY, USA,
56–63. https://doi.org/10.1145/2799250.2799253
[13]
Adam Bolton, Gary Burnett, and David R Large. 2015. An investigation of
augmented reality presentations of landmark-based navigation using a head-up
display. In Proceedings of the 7th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications. ACM, 56–63.
[14]
Martijn Bout, Anna Pernestål Brenden, Maria Klingegård, Azra Habibovic, and
Marc-Philipp Böckle. 2017. A Head-Mounted Display to Support Teleoperations
of Shared Automated Vehicles. In Proceedings of the 9th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (Old-
enburg, Germany) (AutomotiveUI ’17). Association for Computing Machinery,
New York, NY, USA, 62–66. https://doi.org/10.1145/3131726.3131758
[15]
D. A. Bowman and R. P. McMahan. 2007. Virtual Reality: How Much Immersion
Is Enough? Computer 40, 7 (2007), 36–43.
[16]
Doug A Bowman and Ryan P McMahan. 2007. Virtual reality: how much
immersion is enough? Computer 40, 7 (2007), 36–43.
[17]
Roland Bremond, Jean Michel Auberlet, Viola Cavallo, Lara Désiré, Vérane
Faure, Sophie Lemonnier, Régis Lobjois, and Jean Philippe Tarel. 2014. Where
we look when we drive: A multidisciplinary approach. In TRA - Transport
Research Arena. Paris, France, 10p. https://hal.archives-ouvertes.fr/hal-01044746
[18]
Karel A. Brookhuis and Dick de Waard. 2010. Monitoring drivers' mental
workload in driving simulators using physiological measures. Accident Analysis
and Prevention 42, 3 (2010), 898 – 903. https://doi.org/10.1016/j.aap.2009.06.001
Assessing Safety with Driving Simulators.
[19]
Nora Broy, Florian Alt, Stefan Schneegass, and Bastian Pfleging. 2014. 3D Displays in Cars: Exploring the User Performance for a Stereoscopic Instrument
Cluster. In Proceedings of the 6th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications (Seattle, WA, USA) (Automo-
tiveUI ’14). Association for Computing Machinery, New York, NY, USA, 1–9.
https://doi.org/10.1145/2667317.2667319
[20]
Nora Broy, Mengbing Guo, Stefan Schneegass, Bastian Pfleging, and Florian
Alt. 2015. Introducing Novel Technologies in the Car: Conducting a Real-World
Study to Test 3D Dashboards. In Proceedings of the 7th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications (Nottingham,
United Kingdom) (AutomotiveUI ’15). Association for Computing Machinery,
New York, NY, USA, 179–186. https://doi.org/10.1145/2799250.2799280
[21] Stuart Buck. 2015. Solving reproducibility.
[22]
Dan Calatayud, Sonal Arora, Rajesh Aggarwal, Irina Kruglikova, Svend Schulze,
Peter Funch-Jensen, and Teodor Grantcharov. 2010. Warm-up in a virtual reality
environment improves performance in the operating room. Annals of surgery
251, 6 (2010), 1181–1185.
[23]
Chia-Ming Chang, Koki Toda, Daisuke Sakamoto, and Takeo Igarashi. 2017. Eyes
on a Car: An Interface Design for Communication between an Autonomous
Car and a Pedestrian. In Proceedings of the 9th International Conference on
Automotive User Interfaces and Interactive Vehicular Applications (Oldenburg,
Germany) (AutomotiveUI ’17). Association for Computing Machinery, New York,
NY, USA, 65–73. https://doi.org/10.1145/3122986.3122989
[24]
Ashley Colley, Jonna Häkkilä, Bastian Pfleging, and Florian Alt. 2017. A Design
Space for External Displays on Cars. In Proceedings of the 9th International
Conference on Automotive User Interfaces and Interactive Vehicular Applications
Adjunct (Oldenburg, Germany) (AutomotiveUI ’17). Association for Computing
Machinery, New York, NY, USA, 146–151. https://doi.org/10.1145/3131726.
3131760
[25]
Mark Colley, Marcel Walch, and Enrico Rukzio. 2019. For a Better (Simulated)
World: Considerations for VR in External Communication Research. In Pro-
ceedings of the 11th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications: Adjunct Proceedings (Utrecht, Netherlands)
(AutomotiveUI ’19). Association for Computing Machinery, New York, NY, USA,
442–449. https://doi.org/10.1145/3349263.3351523
[26]
SAE On-Road Automated Vehicle Standards Committee. 2018. Taxonomy and
denitions for terms related to on-road motor vehicle automated driving sys-
tems.
[27]
Jan Conrad, Dieter Wallach, Arthur Barz, Daniel Kerpen, Tobias Puderer, and
Andreas Weisenburg. 2019. Concept Simulator K3F: A Flexible Framework for
Driving Simulations. In Proceedings of the 11th International Conference on Auto-
motive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings
(Utrecht, Netherlands) (AutomotiveUI ’19). Association for Computing Machin-
ery, New York, NY, USA, 498–501. https://doi.org/10.1145/3349263.3349596
[28]
Joost C. F. de Winter, Riender Happee, Marieke H. Martens, and Neville A.
Stanton. 2014. Effects of adaptive cruise control and highly automated driving
on workload and situation awareness: A review of the empirical evidence.
Transportation Research Part F: Traffic Psychology and Behaviour 27 (2014), 196 –
217. https://doi.org/10.1016/j.trf.2014.06.016 Vehicle Automation and Driver
Behaviour.
[29]
Shuchisnigdha Deb, Lesley J Strawderman, and Daniel W Carruth. 2018. In-
vestigating pedestrian suggestions for external features on fully autonomous
vehicles: A virtual reality experiment. Transportation Research Part F: Traffic Psychology and Behaviour 59 (2018), 135–149.
[30]
Cyriel Diels and Jelte E. Bos. 2016. Self-driving carsickness. Applied Ergonomics
53 (2016), 374 – 382. https://doi.org/10.1016/j.apergo.2015.09.009 Transport in
the 21st Century: The Application of Human Factors to Future User Needs.
[31]
Cyriel Diels, Jelte E. Bos, Katharina Hottelart, and Patrice Reilhac. 2016. Motion
Sickness in Automated Vehicles: The Elephant in the Room. Springer International
Publishing, Cham, 121–129. https://doi.org/10.1007/978-3- 319-40503-2_10
[32]
Igor Doric, Anna-Katharina Frison, Philipp Wintersberger, Andreas Riener,
Sebastian Wittmann, Matheus Zimmermann, and Thomas Brandmeier. 2016. A
Novel Approach for Researching Crossing Behavior and Risk Acceptance: The
Pedestrian Simulator. In Adjunct Proceedings of the 8th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications (Ann Arbor,
MI, USA) (AutomotiveUI ’16 Adjunct). Association for Computing Machinery,
New York, NY, USA, 39–44. https://doi.org/10.1145/3004323.3004324
[33]
Mark H. Draper. 1998. The Adaptive Effects of Virtual Interfaces: Vestibulo-Ocular
Reflex and Simulator Sickness. Technical Report. Air Force Institute of Technology,
Wright-Patterson AFB, OH.
[34]
Mahdi Ebnali, Cyrus Kian, Majid Ebnali-Heidari, and Adel Mazloumi. 2020. User
Experience in Immersive VR-Based Serious Game: An Application in Highly
Automated Driving Training. In Advances in Human Factors of Transportation,
Neville Stanton (Ed.). Springer International Publishing, Cham, 133–144.
[35]
Nadia Fereydooni, Orit Shaer, and Andrew L. Kun. 2019. Switching between
Augmented Reality and a Manual-Visual Task: A Preliminary Study. In Pro-
ceedings of the 11th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications: Adjunct Proceedings (Utrecht, Netherlands)
(AutomotiveUI ’19). Association for Computing Machinery, New York, NY, USA,
99–103. https://doi.org/10.1145/3349263.3351502
[36]
Joseph L Gabbard, Gregory M Fitch, and Hyungil Kim. 2014. Behind the Glass:
Driver challenges and opportunities for AR automotive applications. Proc. IEEE
102, 2 (2014), 124–136.
[37]
Gartner, Inc. [n.d.]. Interpreting Technology Hype. https://www.gartner.com/en/research/methodologies/gartner-hype-cycle. Accessed: 2020-05-07.
[38]
Gartner, Inc. 2014. Gartner Hype Cycle for Emerging Technologies. https://www.gartner.com/en/documents/2809728/hype-cycle-for-emerging-technologies-2014. Accessed: 2020-05-07.
[39]
Gartner, Inc. 2019. Gartner Hype Cycle for Emerging Technologies. https://www.gartner.com/en/documents/3956015/hype-cycle-for-emerging-technologies-2019. Accessed: 2020-05-07.
[40]
Jean-Michael Georg, Johannes Feiler, Frank Diermeyer, and Markus Lienkamp.
2018. Teleoperated Driving, a Key Technology for Automated Driving? Com-
parison of Actual Test Drives with a Head Mounted Display and Conventional
Monitors. In 2018 21st International Conference on Intelligent Transportation
Systems (ITSC). IEEE, 3403–3408.
[41]
Michael A. Gerber, Ronald Schroeter, and Julia Vehns. 2019. A Video-Based
Automated Driving Simulator for Automotive UI Prototyping, UX and Behaviour
Research. In Proceedings of the 11th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications (Utrecht, Netherlands)
(AutomotiveUI ’19). Association for Computing Machinery, New York, NY, USA,
14–23. https://doi.org/10.1145/3342197.3344533
[42]
David Goedicke, Jamy Li, Vanessa Evers, and Wendy Ju. 2018. Vr-oom: Virtual
reality on-road driving simulation. In Proceedings of the 2018 CHI Conference on
Human Factors in Computing Systems. 1–11.
[43]
Christian Gold, Moritz Körber, David Lechner, and Klaus Bengler. 2016. Taking
Over Control From Highly Automated Vehicles in Complex Traffic Situations:
The Role of Traffic Density. Human Factors 58, 4 (2016), 642–652. https:
//doi.org/10.1177/0018720816634226 PMID: 26984515.
[44]
Uwe Gruenefeld, Sebastian Weiß, Andreas Löcken, Isabella Virgilio, Andrew L.
Kun, and Susanne Boll. 2019. VRoad: Gesture-Based Interaction between Pedes-
trians and Automated Vehicles in Virtual Reality. In Proceedings of the 11th
International Conference on Automotive User Interfaces and Interactive Vehic-
ular Applications: Adjunct Proceedings (Utrecht, Netherlands) (AutomotiveUI
’19). Association for Computing Machinery, New York, NY, USA, 399–404.
https://doi.org/10.1145/3349263.3351511
[45]
Renate Haeuslschmid, Laura Schnurr, Julie Wagner, and Andreas Butz. 2015.
Contact-Analog Warnings on Windshield Displays Promote Monitoring the
Road Scene. In Proceedings of the 7th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications (Nottingham, United Kingdom)
(AutomotiveUI ’15). Association for Computing Machinery, New York, NY, USA,
64–71. https://doi.org/10.1145/2799250.2799274
[46]
Jonna Häkkilä, Ashley Colley, and Juho Rantakari. 2014. Exploring Mixed
Reality Window Concept for Car Passengers. In Adjunct Proceedings of the 6th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications (Seattle, WA, USA) (AutomotiveUI ’14). Association for Computing
Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/2667239.2667288
[47]
Kyle Harrington, David R. Large, Gary Burnett, and Orestis Georgiou. 2018.
Exploring the Use of Mid-Air Ultrasonic Feedback to Enhance Automotive User
Interfaces. In Proceedings of the 10th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications (Toronto, ON, Canada)
(AutomotiveUI ’18). Association for Computing Machinery, New York, NY, USA,
11–20. https://doi.org/10.1145/3239060.3239089
[48]
M. Hartmann, M. Viehweger, W. Desmet, M. Stolz, and D. Watzenig. 2017.
“Pedestrian in the loop”: An approach using virtual reality. In 2017 XXVI Interna-
tional Conference on Information, Communication and Automation Technologies
(ICAT). 1–8.
[49]
Renate Häuslschmid, Max von Buelow, Bastian Pfleging, and Andreas Butz. 2017.
Supporting trust in autonomous driving. In Proceedings of the 22nd international
conference on intelligent user interfaces. ACM, 319–329.
[50]
Renate Häuslschmid, Max von Bülow, Bastian Pfleging, and Andreas Butz.
2017. Supporting Trust in Autonomous Driving. In Proceedings of the 22nd
International Conference on Intelligent User Interfaces (Limassol, Cyprus) (IUI
’17). Association for Computing Machinery, New York, NY, USA, 319–329. https:
//doi.org/10.1145/3025171.3025198
[51]
Lawrence J. Hettinger and Gary E. Riccio. 1992. Visually Induced Motion Sick-
ness in Virtual Environments. Presence: Teleoperators and Virtual Environments
1, 3 (1992), 306–310. https://doi.org/10.1162/pres.1992.1.3.306
[52]
Philipp Hock, Johannes Kraus, Franziska Babel, Marcel Walch, Enrico Rukzio,
and Martin Baumann. 2018. How to Design Valid Simulator Studies for Investi-
gating User Experience in Automated Driving: Review and Hands-On Consid-
erations. In Proceedings of the 10th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications. 105–117.
[53]
Philipp Hock, Johannes Kraus, Marcel Walch, Nina Lang, and Martin Baumann.
2016. Elaborating Feedback Strategies for Maintaining Automation in Highly
Automated Driving. In Proceedings of the 8th International Conference on Au-
tomotive User Interfaces and Interactive Vehicular Applications (Ann Arbor, MI,
USA) (Automotive’UI 16). Association for Computing Machinery, New York, NY,
USA, 105–112. https://doi.org/10.1145/3003715.3005414
[54]
Kai Holländer, Philipp Wintersberger, and Andreas Butz. 2019. Overtrust
in External Cues of Automated Vehicles: An Experimental Investigation. In
Proceedings of the 11th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications (Utrecht, Netherlands) (AutomotiveUI
’19). Association for Computing Machinery, New York, NY, USA, 211–221.
https://doi.org/10.1145/3342197.3344528
[55]
Renate Häuslschmid, Donghao Ren, Florian Alt, Andreas Butz, and Tobias
Höllerer. 2019. Personalizing Content Presentation on Large 3D Head-Up
Displays. PRESENCE: Virtual and Augmented Reality 27, 1 (2019), 80–106. https:
//doi.org/10.1162/pres_a_00315
[56]
Quinate Chioma Ihemedu-Steinke, Rainer Erbach, Prashanth Halady, Gerrit
Meixner, and Michael Weber. 2017. Virtual reality driving simulator based on
head-mounted displays. In Automotive user interfaces. Springer, 401–428.
[57]
Quinate Chioma Ihemedu-Steinke, Prashanth Halady, Gerrit Meixner, and
Michael Weber. 2018. VR Evaluation of Motion Sickness Solution in Auto-
mated Driving. In Virtual, Augmented and Mixed Reality: Interaction, Navigation,
Visualization, Embodiment, and Simulation, Jessie Y.C. Chen and Gino Fragomeni
(Eds.). Springer International Publishing, Cham, 112–125.
[58]
Hillary Page Ive, Wendy Ju, and Kirstin Kohler. 2014. Quantitative Measures of
User Experience in Autonomous Driving Simulators. In Adjunct Proceedings of
the 6th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Seattle, WA, USA) (AutomotiveUI ’14). Association for
Computing Machinery, New York, NY, USA, 1–3. https://doi.org/10.1145/
2667239.2667312
[59]
Christian P. Janssen, Andrew L. Kun, Stephen Brewster, Linda Ng Boyle, Duncan P. Brumby, and Lewis L. Chuang. 2019. Exploring the Concept of the (Future)
Mobile Office. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings
(Utrecht, Netherlands) (AutomotiveUI ’19). Association for Computing Machin-
ery, New York, NY, USA, 465–467. https://doi.org/10.1145/3349263.3349600
[60]
Jinki Jung, Hyeopwoo Lee, Jeehye Choi, Abhilasha Nanda, Uwe Gruenefeld, Tim
Stratmann, and Wilko Heuten. 2018. Ensuring safety in augmented reality from
trade-o between immersion and situation awareness. In 2018 IEEE International
Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 70–79.
[61]
Andreas Kasprzok and J. David Smith. 2014. Dash Designer: A Multi-Modal De-
sign Tool for Vehicle Interactivity. In Adjunct Proceedings of the 6th International
Conference on Automotive User Interfaces and Interactive Vehicular Applications
(Seattle, WA, USA) (AutomotiveUI ’14). Association for Computing Machinery,
New York, NY, USA, 1–6. https://doi.org/10.1145/2667239.2667278
[62]
Kellie D Kennedy and James P Bliss. 2013. Inattentional blindness in a simulated
driving task. In proceedings of the human factors and ergonomics society annual
meeting, Vol. 57. SAGE Publications Sage CA: Los Angeles, CA, 1899–1903.
[63]
Robert S. Kennedy, Norman E. Lane, Kevin S. Berbaum, and Michael G. Lilienthal.
1993. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying
Simulator Sickness. The International Journal of Aviation Psychology 3, 3 (1993),
203–220. https://doi.org/10.1207/s15327108ijap0303_3
[64]
Kathrin Knutzen, Florian Weidner, and Wolfgang Broll. 2019. Talk to Me!
Exploring Stereoscopic 3D Anthropomorphic Virtual Assistants in Automated
Vehicles. In Proceedings of the 11th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications: Adjunct Proceedings (Utrecht,
Netherlands) (AutomotiveUI ’19). Association for Computing Machinery, New
York, NY, USA, 363–368. https://doi.org/10.1145/3349263.3351503
A Research Agenda for Mixed Reality in Automated Vehicles MUM 2020, November 22–25, 2020, Essen, Germany
[65]
A. Koilias, C. Mousas, B. Rekabdar, and C. Anagnostopoulos. 2019. Passen-
ger Anxiety when Seated in a Virtual Reality Self-Driving Car. In 2019 IEEE
Conference on Virtual Reality and 3D User Interfaces (VR). 1024–1025.
[66]
Sven Krome, David Goedicke, Thomas J. Matarazzo, Zimeng Zhu, Zhenwei
Zhang, J. D. Zamrescu-Pereira, and Wendy Ju. 2019. How People Experience
Autonomous Intersections: Taking a First-Person Perspective. In Proceedings of
the 11th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Utrecht, Netherlands) (AutomotiveUI ’19). Association
for Computing Machinery, New York, NY, USA, 275–283. https://doi.org/10.
1145/3342197.3344520
[67]
Andrew L. Kun. 2018. Human-Machine Interaction for Vehicles: Review and
Outlook. Foundations and Trends® in Human–Computer Interaction 11, 4 (2018),
201–293.
[68]
Andrew L Kun, Susanne Boll, and Albrecht Schmidt. 2016. Shifting gears: User
interfaces in the age of autonomous driving. IEEE Pervasive Computing 15, 1
(2016), 32–38.
[69]
Andrew L Kun, Hidde van der Meulen, and Christian P Janssen. 2017. Calling
while driving: An initial experiment with HoloLens. (2017).
[70]
Alexander Kunze, Stephen J. Summerskill, Russell Marshall, and Ashleigh J. Filt-
ness. 2017. Enhancing Driving Safety and User Experience Through Unobtrusive
and Function-Specic Feedback. In Proceedings of the 9th International Confer-
ence on Automotive User Interfaces and Interactive Vehicular Applications Adjunct
(Oldenburg, Germany) (AutomotiveUI ’17). Association for Computing Machin-
ery, New York, NY, USA, 183–189. https://doi.org/10.1145/3131726.3131762
[71]
Alexander Kunze, Stephen J. Summerskill, Russell Marshall, and Ashleigh J.
Filtness. 2018. Augmented Reality Displays for Communicating Uncertainty
Information in Automated Driving. In Proceedings of the 10th International
Conference on Automotive User Interfaces and Interactive Vehicular Applications
(Toronto, ON, Canada) (AutomotiveUI ’18). Association for Computing Machin-
ery, New York, NY, USA, 164–175. https://doi.org/10.1145/3239060.3239074
[72]
Ramesh Lagoo, Vassilis Charissis, and David K Harrison. 2019. Mitigating
Driver’s Distraction: Automotive Head-Up Display and Gesture Recognition
System. IEEE Consumer Electronics Magazine 8, 5 (2019), 79–85.
[73]
Sabine Langlois and Boussaad Soualmi. 2016. Augmented reality versus classical
HUD to take over from automated driving: An aid to smooth reactions and to
anticipate maneuvers. In 2016 IEEE 19th International Conference on Intelligent
Transportation Systems (ITSC). IEEE, 1571–1578.
[74]
David R. Large, Gary Burnett, Ben Anyasodo, and Lee Skrypchuk. 2016. As-
sessing Cognitive Demand during Natural Language Interactions with a Digital
Driving Assistant. In Proceedings of the 8th International Conference on Automo-
tive User Interfaces and Interactive Vehicular Applications (Ann Arbor, MI, USA)
(Automotive’UI 16). Association for Computing Machinery, New York, NY, USA,
67–74. https://doi.org/10.1145/3003715.3005408
[75]
Felix Lauber, Claudius Böttcher, and Andreas Butz. 2014. PapAR: Paper Proto-
typing for Augmented Reality. In Adjunct Proceedings of the 6th International
Conference on Automotive User Interfaces and Interactive Vehicular Applications
(Seattle, WA, USA) (AutomotiveUI ’14). Association for Computing Machinery,
New York, NY, USA, 1–6. https://doi.org/10.1145/2667239.2667271
[76]
Felix Lauber, Claudius Böttcher, and Andreas Butz. 2014. You’ve Got the Look:
Visualizing Infotainment Shortcuts in Head-Mounted Displays. In Proceedings
of the 6th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Seattle, WA, USA) (AutomotiveUI ’14). Association for
Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/
2667317.2667408
[77]
Hieu Lê, Tuan Long Pham, and Gerrit Meixner. 2017. A Concept For A Vir-
tual Reality Driving Simulation In Combination With A Real Car. In Pro-
ceedings of the 9th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications Adjunct (Oldenburg, Germany) (Automo-
tiveUI ’17). Association for Computing Machinery, New York, NY, USA, 77–82.
https://doi.org/10.1145/3131726.3131742
[78]
Yee Mun Lee, Ruth Madigan, Jorge Garcia, Andrew Tomlinson, Albert Sol-
ernou, Richard Romano, Gustav Markkula, Natasha Merat, and Jim Uttley.
2019. Understanding the Messages Conveyed by Automated Vehicles. In Pro-
ceedings of the 11th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications (Utrecht, Netherlands) (AutomotiveUI
’19). Association for Computing Machinery, New York, NY, USA, 134–143.
https://doi.org/10.1145/3342197.3344546
[79]
Yeti Li, Murat Dikmen, Thana G. Hussein, Yahui Wang, and Catherine Burns.
2018. To Cross or Not to Cross: Urgency-Based External Warning Displays on
Autonomous Vehicles to Improve Pedestrian Crossing Safety. In Proceedings of
the 10th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Toronto, ON, Canada) (AutomotiveUI ’18). Association
for Computing Machinery, New York, NY, USA, 188–197. https://doi.org/10.
1145/3239060.3239082
[80]
Patrick Lindemann, Niklas Müller, and Gerhard Rigoll. 2019. Exploring the
Use of Augmented Reality Interfaces for Driver Assistance in Short-Notice
Takeovers. In 2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 804–809.
[81]
Patrick Lindemann and Gerhard Rigoll. 2017. Examining the Impact of See-
Through Cockpits on Driving Performance in a Mixed Reality Prototype. In
Proceedings of the 9th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications Adjunct (Oldenburg, Germany) (AutomotiveUI
’17). Association for Computing Machinery, New York, NY, USA, 83–87. https:
//doi.org/10.1145/3131726.3131754
[82]
Lee Lisle, Kyle Tanous, Hyungil Kim, Joseph L. Gabbard, and Doug A. Bowman.
2018. Eect of Volumetric Displays on Depth Perception in Augmented Reality.
In Proceedings of the 10th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications (Toronto, ON, Canada) (AutomotiveUI ’18).
Association for Computing Machinery, New York, NY, USA, 155–163. https:
//doi.org/10.1145/3239060.3239083
[83]
Andreas Löcken, Carmen Golling, and Andreas Riener. 2019. How Should Automated Vehicles Interact with Pedestrians? A Comparative Analysis of Interaction
Concepts in Virtual Reality. In Proceedings of the 11th International Conference
on Automotive User Interfaces and Interactive Vehicular Applications (Utrecht,
Netherlands) (AutomotiveUI ’19). Association for Computing Machinery, New
York, NY, USA, 262–274. https://doi.org/10.1145/3342197.3344544
[84]
Mike Long, Gary Burnett, Robert Hardy, and Harriet Allen. 2015. Depth Dis-
crimination between Augmented Reality and Real-World Targets for Vehicle
Head-up Displays. In Proceedings of the 7th International Conference on Automo-
tive User Interfaces and Interactive Vehicular Applications (Nottingham, United
Kingdom) (AutomotiveUI ’15). Association for Computing Machinery, New York,
NY, USA, 72–79. https://doi.org/10.1145/2799250.2799292
[85]
Pietro Lungaro, Konrad Tollmar, and Thomas Beelen. 2017. Human-to-AI
Interfaces for Enabling Future Onboard Experiences. In Proceedings of the
9th International Conference on Automotive User Interfaces and Interactive Ve-
hicular Applications Adjunct (Oldenburg, Germany) (AutomotiveUI ’17). As-
sociation for Computing Machinery, New York, NY, USA, 94–98. https:
//doi.org/10.1145/3131726.3131737
[86]
Pietro Lungaro, Konrad Tollmar, Firdose Saeik, Conrado Mateu Gisbert, and
Gaël Dubus. 2018. DriverSense: A Hyper-Realistic Testbed for the Design
and Evaluation of Novel User Interfaces in Self-Driving Vehicles. In Adjunct
Proceedings of the 10th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications (Toronto, ON, Canada) (AutomotiveUI ’18).
Association for Computing Machinery, New York, NY, USA, 127–131. https:
//doi.org/10.1145/3239092.3265955
[87]
Mark McGill and Stephen Brewster. 2019. Virtual Reality Passenger Experiences.
In Proceedings of the 11th International Conference on Automotive User Interfaces
and Interactive VehicularApplications: Adjunct Proceedings (Utrecht, Netherlands)
(AutomotiveUI ’19). Association for Computing Machinery, New York, NY, USA,
434–441. https://doi.org/10.1145/3349263.3351330
[88]
Mark McGill, Alexander Ng, and Stephen Brewster. 2017. I am the passenger:
how visual motion cues can influence sickness for in-car VR. In Proceedings of
the 2017 CHI Conference on Human Factors in Computing Systems. 5655–5668.
[89]
Vivien Melcher, Stefan Rauh, Frederik Diederichs, Harald Widlroither, and
Wilhelm Bauer. 2015. Take-Over Requests for Automated Driving. Procedia
Manufacturing 3 (2015), 2867–2873. https://doi.org/10.1016/j.promfg.2015.07.788
6th International Conference on Applied Human Factors and Ergonomics
(AHFE 2015) and the Affiliated Conferences, AHFE 2015.
[90]
Coleman Merenda, Hyungil Kim, Joseph L. Gabbard, Samantha Leong, David R.
Large, and Gary Burnett. 2017. Did You See Me? Assessing Perceptual vs. Real
Driving Gains Across Multi-Modal Pedestrian Alert Systems. In Proceedings of
the 9th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Oldenburg, Germany) (AutomotiveUI ’17). Association
for Computing Machinery, New York, NY, USA, 40–49. https://doi.org/10.1145/
3122986.3123013
[91]
Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. 1995. Aug-
mented reality: A class of displays on the reality-virtuality continuum. In Tele-
manipulator and telepresence technologies, Vol. 2351. International Society for
Optics and Photonics, 282–292.
[92]
David Miller, Mishel Johns, Brian Mok, Nikhil Gowda, David Sirkin, Key Lee,
and Wendy Ju. 2016. Behavioral measurement of trust in automation: the trust
fall. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
Vol. 60. SAGE Publications Sage CA: Los Angeles, CA, 1849–1853.
[93]
Pranav Mistry, Tsuyoshi Kuroki, and Chaochi Chang. 2008. TaPuMa: Tangible
Public Map for Information Acquirement through the Things We Carry. In
Proceedings of the 1st International Conference on Ambient Media and Systems
(Quebec, Canada) (Ambi-Sys ’08). ICST (Institute for Computer Sciences, Social-
Informatics and Telecommunications Engineering), Brussels, BEL, Article 12,
5 pages.
[94]
Brian Mok and Wendy Ju. 2014. The Push vs Pull of Information between
Autonomous Cars and Human Drivers. In Adjunct Proceedings of the 6th In-
ternational Conference on Automotive User Interfaces and Interactive Vehicular
Applications (Seattle, WA, USA) (AutomotiveUI ’14). Association for Computing
Machinery, New York, NY, USA, 1–5. https://doi.org/10.1145/2667239.2667287
[95]
Mohammad Mehdi Moniri and Christian Müller. 2014. EyeVIUS: Intelligent Ve-
hicles in Intelligent Urban Spaces. In Adjunct Proceedings of the 6th International
Conference on Automotive User Interfaces and Interactive Vehicular Applications
(Seattle, WA, USA) (AutomotiveUI ’14). Association for Computing Machinery,
New York, NY, USA, 1–6. https://doi.org/10.1145/2667239.2667265
[96]
Dylan Moore, Rebecca Currano, David Sirkin, Azra Habibovic, Victor Malmsten
Lundgren, Debargha (Dave) Dey, and Kai Holländer. 2019. Wizards of WoZ:
Using Controlled and Field Studies to Evaluate AV-Pedestrian Interactions. In
Proceedings of the 11th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications: Adjunct Proceedings (Utrecht, Netherlands)
(AutomotiveUI ’19). Association for Computing Machinery, New York, NY, USA,
45–49. https://doi.org/10.1145/3349263.3350756
[97]
Atsuo Murata. 2004. Effects of Duration of Immersion in a Virtual Reality
Environment on Postural Stability. International Journal of Human–Computer
Interaction 17, 4 (2004), 463–477. https://doi.org/10.1207/s15327590ijhc1704_2
[98]
Stefan Neumeier, Philipp Wintersberger, Anna-Katharina Frison, Armin Becher,
Christian Facchi, and Andreas Riener. 2019. Teleoperation: The Holy Grail to
Solve Problems of Automated Driving? Sure, but Latency Matters. In Proceedings
of the 11th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Utrecht, Netherlands) (AutomotiveUI ’19). Association
for Computing Machinery, New York, NY, USA, 186–197. https://doi.org/10.
1145/3342197.3344534
[99]
Trung Thanh Nguyen, Kai Holländer, Marius Hoggenmueller, Callum Parker,
and Martin Tomitsch. 2019. Designing for Projection-Based Communication be-
tween Autonomous Vehicles and Pedestrians. In Proceedings of the 11th Interna-
tional Conference on Automotive User Interfaces and Interactive Vehicular Applica-
tions (Utrecht, Netherlands) (AutomotiveUI ’19). Association for Computing Ma-
chinery, New York, NY, USA, 284–294. https://doi.org/10.1145/3342197.3344543
[100]
Don Norman. 2013. The design of everyday things: Revised and expanded edition.
Basic books.
[101]
Byoung-Jun Park, Changrak Yoon, Jeong-Woo Lee, and Kyong-Ho Kim. 2015.
Augmented reality based on driving situation awareness in vehicle. In 2015 17th
International Conference on Advanced Communication Technology (ICACT). IEEE,
593–595.
[102]
Ingrid Pettersson, Florian Lachner, Anna-Katharina Frison, Andreas Riener, and
Andreas Butz. 2018. A Bermuda Triangle? A Review of Method Application
and Triangulation in User Experience Evaluation. In Proceedings of the 2018
CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada)
(CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–16.
https://doi.org/10.1145/3173574.3174035
[103]
Bastian Peging, Michael Kienast, Albrecht Schmidt, Tanja Döring, et al
.
2011.
Speet: A multimodal interaction style combining speech and touch interaction
in automotive environments. In Adjunct Proceedings of the 3rd International
Conference on Automotive User Interfaces and Interactive Vehicular Applications,
AutomotiveUI, Vol. 11.
[104]
Bastian Peging, Maurice Rang, and Nora Broy. 2016. Investigating User
Needs for Non-Driving-Related Activities during Automated Driving. In Proceed-
ings of the 15th International Conference on Mobile and Ubiquitous Multimedia
(Rovaniemi, Finland) (MUM ’16). Association for Computing Machinery, New
York, NY, USA, 91–99. https://doi.org/10.1145/3012709.3012735
[105]
Matthew J. Pitts, Lee Skrypchuk, Alex Attridge, and Mark A. Williams. 2014.
Comparing the User Experience of Touchscreen Technologies in an Automotive
Application. In Proceedings of the 6th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications (Seattle, WA, USA) (Auto-
motiveUI ’14). Association for Computing Machinery, New York, NY, USA, 1–8.
https://doi.org/10.1145/2667317.2667418
[106]
Diana Reich and Rainer Stark. 2014. “To See or Not to See?” How Do
Eye Movements Change Within Immersive Driving Environments. In Ad-
junct Proceedings of the 6th International Conference on Automotive User In-
terfaces and Interactive Vehicular Applications (Seattle, WA, USA) (Automo-
tiveUI ’14). Association for Computing Machinery, New York, NY, USA, 1–6.
https://doi.org/10.1145/2667239.2667268
[107]
Andreas Riegler, Bilal Aksoy, Andreas Riener, and Clemens Holzmann. 2020.
Gaze-based Interaction with Windshield Displays for Automated Driving: Im-
pact of Dwell Time and Feedback Design on Task Performance and Subjective
Workload. In 12th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications. 151–160.
[108]
Andreas Riegler, Andrew L Kun, Stephen Brewster, Andreas Riener, Joe Gabbard,
and Carolin Wienrich. 2019. MRV 2019: 3rd workshop on mixed reality for
intelligent vehicles. In Proceedings of the 11th International Conference on Auto-
motive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings.
38–44.
[109]
Andreas Riegler, Andreas Riener, and Clemens Holzmann. 2019. Adaptive Dark
Mode: Investigating Text and Transparency of Windshield Display Content
for Automated Driving. In Mensch und Computer 2019 (Hamburg, Germany). 8.
https://doi.org/10.18420/muc2019-ws-612
[110]
Andreas Riegler, Andreas Riener, and Clemens Holzmann. 2019. AutoWSD:
Virtual Reality Automated Driving Simulator for Rapid HCI Prototyping. In
Mensch und Computer 2019 (Hamburg, Germany). ACM, New York, NY, USA, 5.
https://doi.org/10.1145/3340764.3345366
[111]
Andreas Riegler, Andreas Riener, and Clemens Holzmann. 2019. Towards Dy-
namic Positioning of Text Content on a Windshield Display for Automated
Driving. In 25th ACM Symposium on Virtual Reality Software and Technology
(Parramatta, NSW, Australia) (VRST ’19). Association for Computing Machinery,
New York, NY, USA, Article 94, 2 pages. https://doi.org/10.1145/3359996.3364757
[112]
Andreas Riegler, Andreas Riener, and Clemens Holzmann. 2019. Virtual Reality
Driving Simulator for User Studies on Automated Driving. In Proceedings of
the 11th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications: Adjunct Proceedings (Utrecht, Netherlands) (AutomotiveUI
’19). Association for Computing Machinery, New York, NY, USA, 502–507. https:
//doi.org/10.1145/3349263.3349595
[113]
Andreas Riegler, Philipp Wintersberger, Andreas Riener, and Clemens Holzmann.
2018. Investigating User Preferences for Windshield Displays in Automated
Vehicles. In Proceedings of the 7th ACM International Symposium on Pervasive
Displays (Munich, Germany) (PerDis ’18). ACM, New York, NY, USA, Article 8,
7 pages. https://doi.org/10.1145/3205873.3205885
[114]
Andreas Riegler, Philipp Wintersberger, Andreas Riener, and Clemens Holzmann.
2019. Augmented Reality Windshield Displays and Their Potential to Enhance
User Experience in Automated Driving. i-com 18, 2 (2019), 127–149. https:
//doi.org/10.1515/icom-2018-0033
[115]
Andreas Riener, Myounghoon Jeon, Ignacio Alvarez, and Anna K. Frison. 2017.
Driver in the Loop: Best Practices in Automotive Sensing and Feedback Mechanisms.
Springer International Publishing, Cham, 295–323. https://doi.org/10.1007/978-
3-319-49448-7_11
[116]
Florian Roider and Tom Gross. 2018. I See Your Point: Integrating Gaze to
Enhance Pointing Gesture Accuracy While Driving. In Proceedings of the 10th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications (Toronto, ON, Canada) (AutomotiveUI ’18). Association for Comput-
ing Machinery, New York, NY, USA, 351–358. https://doi.org/10.1145/3239060.
3239084
[117]
Taishi Sawabe, Masayuki Kanbara, and Norihiro Hagita. 2016. Diminished
reality for acceleration—Motion sickness reduction with vection for autonomous
driving. In 2016 IEEE International Symposium on Mixed and Augmented Reality
(ISMAR-Adjunct). IEEE, 297–299.
[118]
Taishi Sawabe, Masayuki Kanbara, and Norihiro Hagita. 2017. Diminished
reality for acceleration stimulus: Motion sickness reduction with vection for
autonomous driving. In 2017 IEEE Virtual Reality (VR). IEEE, 277–278.
[119]
Mark C Schall Jr, Michelle L Rusch, John D Lee, Jeffrey D Dawson, Geb Thomas,
Nazan Aksan, and Matthew Rizzo. 2013. Augmented reality cues and elderly
driver hazard perception. Human factors 55, 3 (2013), 643–658.
[120]
Clemens Schartmüller, Klemens Weigl, Philipp Wintersberger, Andreas Riener,
and Marco Steinhauser. 2019. Text Comprehension: Heads-Up vs. Auditory
Displays: Implications for a Productive Work Environment in SAE Level 3 Auto-
mated Vehicles. In Proceedings of the 11th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications (Utrecht, Netherlands) (Au-
tomotiveUI ’19). Association for Computing Machinery, New York, NY, USA,
342–354. https://doi.org/10.1145/3342197.3344547
[121]
Matthias Schneider, Anna Bruder, Marc Necker, Tim Schluesener, Niels Henze,
and Christian Wol. 2019. A Field Study to Collect Expert Knowledge for
the Development of AR HUD Navigation Concepts. In Proceedings of the 11th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications: Adjunct Proceedings (Utrecht, Netherlands) (AutomotiveUI ’19).
Association for Computing Machinery, New York, NY, USA, 358–362. https:
//doi.org/10.1145/3349263.3351339
[122]
Sonja Schneider, Madeleine Ratter, and Klaus Bengler. 2019. Pedestrian behavior
in virtual reality: effects of gamification and distraction. In Proceedings of the
Road Safety and Simulation Conference.
[123]
Nadja Schömig, Katharina Wiedemann, Frederik Naujoks, Alexandra Neukum,
Bettina Leuchtenberg, and Thomas Vöhringer-Kuhnt. 2018. An Augmented
Reality Display for Conditionally Automated Driving. In Adjunct Proceedings of
the 10th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Toronto, ON, Canada) (AutomotiveUI ’18). Association
for Computing Machinery, New York, NY, USA, 137–141. https://doi.org/10.
1145/3239092.3265956
[124]
Ronald Schroeter and Michael A. Gerber. 2018. A Low-Cost VR-Based Automated
Driving Simulator for Rapid Automotive UI Prototyping. In Adjunct Proceedings
of the 10th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Toronto, ON, Canada) (AutomotiveUI ’18). Association
for Computing Machinery, New York, NY, USA, 248–251. https://doi.org/10.
1145/3239092.3267418
[125]
Ronald Schroeter, Jim Oxtoby, and Daniel Johnson. 2014. AR and Gamification
Concepts to Reduce Driver Boredom and Risk Taking Behaviours. In Proceedings
of the 6th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Seattle, WA, USA) (AutomotiveUI ’14). Association for
Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/
2667317.2667415
[126]
Ronald Schroeter and Fabius Steinberger. 2016. Pokémon DRIVE: Towards
Increased Situational Awareness in Semi-Automated Driving. In Proceedings
of the 28th Australian Conference on Computer-Human Interaction (Launceston,
Tasmania, Australia) (OzCHI ’16). Association for Computing Machinery, New
York, NY, USA, 25–29. https://doi.org/10.1145/3010915.3010973
[127]
Valentin Schwind, Pascal Knierim, Nico Haas, and Niels Henze. 2019. Using pres-
ence questionnaires in virtual reality. In Proceedings of the 2019 CHI Conference
on Human Factors in Computing Systems. 1–12.
[128]
Bobbie D. Seppelt and John D. Lee. 2019. Keeping the driver in the loop: Dynamic
feedback to support appropriate use of imperfect vehicle control automation.
International Journal of Human-Computer Studies 125 (2019), 66 – 80. https:
//doi.org/10.1016/j.ijhcs.2018.12.009
[129]
Shervin Shahrdar, Corey Park, and Mehrdad Nojoumian. 2019. Human Trust
Measurement Using an Immersive Virtual Reality Autonomous Vehicle Simula-
tor. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
(Honolulu, HI, USA) (AIES ’19). Association for Computing Machinery, New
York, NY, USA, 515–520. https://doi.org/10.1145/3306618.3314264
[130]
S Tarek Shahriar and Andrew L Kun. 2018. Camera-View Augmented Reality:
Overlaying Navigation Instructions on a Real-Time View of the Road. In Pro-
ceedings of the 10th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications. ACM, 146–154.
[131]
Ben Shneiderman and Catherine Plaisant. 2010. Designing the user interface:
strategies for eective human-computer interaction. Pearson Education India.
[132]
Bryant Walker Smith. 2017. Automated driving and product liability. Mich. St.
L. Rev. (2017), 1.
[133]
Bryant Walker Smith. 2017. How governments can promote automated driving.
NML Rev. 47 (2017), 99.
[134]
Missie Smith, Jillian Streeter, Gary Burnett, and Joseph L. Gabbard. 2015. Visual
Search Tasks: The Eects of Head-up Displays on Driving and Task Perfor-
mance. In Proceedings of the 7th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications (Nottingham, United Kingdom)
(AutomotiveUI ’15). Association for Computing Machinery, New York, NY, USA,
80–87. https://doi.org/10.1145/2799250.2799291
[135]
Ye Eun Song, Christian Lehsing, Tanja Fuest, and Klaus Bengler. 2018. External
HMIs and their effect on the interaction between pedestrians and automated
vehicles. In International Conference on Intelligent Human Systems Integration.
Springer, 13–18.
[136]
Alessandro Soro, Andry Rakotonirainy, Ronald Schroeter, and Sabine Woll-
städter. 2014. Using Augmented Video to Test In-Car User Experiences of
Context Analog HUDs. In Adjunct Proceedings of the 6th International Confer-
ence on Automotive User Interfaces and Interactive Vehicular Applications (Seattle,
WA, USA) (AutomotiveUI ’14). Association for Computing Machinery, New York,
NY, USA, 1–6. https://doi.org/10.1145/2667239.2667302
[137]
D. Sportillo, A. Paljic, and L. Ojeda. 2019. On-Road Evaluation of Autonomous
Driving Training. In 2019 14th ACM/IEEE International Conference on Human-
Robot Interaction (HRI). 182–190.
[138]
Fabius Steinberger, Ronald Schroeter, Verena Lindner, Zachary Fitz-Walter,
Joshua Hall, and Daniel Johnson. 2015. Zombies on the Road: A Holistic Design
Approach to Balancing Gamification and Safe Driving. In Proceedings of the 7th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications (Nottingham, United Kingdom) (AutomotiveUI ’15). Association for
Computing Machinery, New York, NY, USA, 320–327. https://doi.org/10.1145/
2799250.2799260
[139] Jonathan Steuer. 1992. Defining virtual reality: Dimensions determining telepresence. Journal of communication 42, 4 (1992), 73–93.
[140]
Bethan Hannah Topliss, Sanna M. Pampel, Gary Burnett, and Joseph L. Gabbard.
2019. Evaluating Head-Up Displays across Windshield Locations. In Proceedings
of the 11th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Utrecht, Netherlands) (AutomotiveUI ’19). Association
for Computing Machinery, New York, NY, USA, 244–253. https://doi.org/10.
1145/3342197.3344524
[141]
Bethan Hannah Topliss, Sanna M Pampel, Gary Burnett, Lee Skrypchuk, and
Chrisminder Hare. 2018. Establishing the Role of a Virtual Lead Vehicle as a
Novel Augmented Reality Navigational Aid. In Proceedings of the 10th Interna-
tional Conference on Automotive User Interfaces and Interactive Vehicular Applica-
tions (Toronto, ON, Canada) (AutomotiveUI ’18). Association for Computing Ma-
chinery, New York, NY, USA, 137–145. https://doi.org/10.1145/3239060.3239069
[142]
Akira Utsumi, Tsukasa Mikuni, and Isamu Nagasawa. 2019. Eect of On-Road
Virtual Visual References on Vehicle Control Stability of Wide/Narrow FOV
Drivers. In Proceedings of the 11th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications: Adjunct Proceedings (Utrecht,
Netherlands) (AutomotiveUI ’19). Association for Computing Machinery, New
York, NY, USA, 297–301. https://doi.org/10.1145/3349263.3351331
[143]
Emma van Amersfoorth, Lotte Roefs, Quinta Bonekamp, Laurent Schuermans,
and Bastian Peging. 2019. Increasing Driver Awareness through Translucency
on Windshield Displays. In Proceedings of the 11th International Conference on
Automotive User Interfaces and Interactive Vehicular Applications: AdjunctProce ed-
ings (Utrecht, Netherlands) (AutomotiveUI ’19). Association for Computing Ma-
chinery, New York, NY, USA, 156–160. https://doi.org/10.1145/3349263.3351911
[144]
Matej Vengust, Boštjan Kaluža, Kristina Stojmenova, and Jaka Sodnik. 2017.
NERVteh Compact Motion Based Driving Simulator. In Proceedings of the 9th
International Conference on Automotive User Interfaces and Interactive Vehicular
Applications Adjunct (Oldenburg, Germany) (AutomotiveUI ’17). Association for
Computing Machinery, New York, NY, USA, 242–243. https://doi.org/10.1145/
3131726.3132047
[145]
Gabriela Villalobos-Zúñiga, Tuomo Kujala, and Antti Oulasvirta. 2016. T9+HUD:
Physical Keypad and HUD Can Improve Driving Performance While Typing
and Driving. In Proceedings of the 8th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications (Ann Arbor, MI, USA)
(Automotive’UI 16). Association for Computing Machinery, New York, NY, USA,
177–184. https://doi.org/10.1145/3003715.3005453
[146]
Tamara von Sawitzky, Philipp Wintersberger, Andreas Riener, and Joseph L.
Gabbard. 2019. Increasing Trust in Fully Automated Driving: Route Indication
on an Augmented Reality Head-up Display. In Proceedings of the 8th ACM
International Symposium on Pervasive Displays (Palermo, Italy) (PerDis ’19). ACM,
New York, NY, USA, Article 6, 7 pages. https://doi.org/10.1145/3321335.3324947
[147]
Alla Vovk, Fridolin Wild, Will Guest, and Timo Kuula. 2018. Simulator Sickness
in Augmented Reality Training Using the Microsoft HoloLens. In Proceedings of
the 2018 CHI Conference on Human Factors in Computing Systems. Association for
Computing Machinery, New York, NY, USA, 1–9.
[148]
Chao Wang, Jacques Terken, and Jun Hu. 2014. “Liking” Other Drivers’ Behavior
While Driving. In Adjunct Proceedings of the 6th International Conference on
Automotive User Interfaces and Interactive Vehicular Applications (Seattle, WA,
USA) (AutomotiveUI ’14). Association for Computing Machinery, New York, NY,
USA, 1–6. https://doi.org/10.1145/2667239.2667424
[149]
Gesa Wiegand, Christian Mai, Kai Holländer, and Heinrich Hussmann. 2019.
InCarAR: A Design Space Towards 3D Augmented Reality Applications in Ve-
hicles. In Proceedings of the 11th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications (Utrecht, Netherlands) (Automo-
tiveUI ’19). Association for Computing Machinery, New York, NY, USA, 1–13.
https://doi.org/10.1145/3342197.3344539
[150]
Gesa Wiegand, Christian Mai, Yuanting Liu, and Heinrich Hußmann. 2018.
Early Take-Over Preparation in Stereoscopic 3D. In Adjunct Proceedings of
the 10th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications (Toronto, ON, Canada) (AutomotiveUI ’18). Association
for Computing Machinery, New York, NY, USA, 142–146. https://doi.org/10.
1145/3239092.3265957
[151]
Carolin Wienrich and Kristina Schindler. 2019. Challenges and Requirements
of immersive media in autonomous car: Exploring the feasibility of virtual
entertainment applications. i-com 15, 1 (2019), 32–38.
[152]
Philipp Wintersberger, Anna-Katharina Frison, Andreas Riener, and Tamara von
Sawitzky. 2019. Fostering User Acceptance and Trust in Fully Automated
Vehicles: Evaluating the Potential of Augmented Reality. PRESENCE: Virtual and
Augmented Reality 27, 1 (2019), 46–62. https://doi.org/10.1162/pres_a_00320
[153]
Philipp Wintersberger, Tamara von Sawitzky, Anna-Katharina Frison, and An-
dreas Riener. 2017. Traffic Augmentation as a Means to Increase Trust in
Automated Driving Systems. In Proceedings of the 12th Biannual Conference on
Italian SIGCHI Chapter. ACM, New York, NY, USA, Article 17.