Assisting Drivers with Ambient Take-Over Requests in Highly Automated Driving

Shadan Sadeghian Borojeni (1), Lewis Chuang (2), Wilko Heuten (1), Susanne Boll (3)

(1) Interactive Systems Group, OFFIS - Institute for IT, Oldenburg, Germany; wilko.heuten@offis.de
(2) Max Planck Institute for Biological Cybernetics, Tuebingen, Germany; lewis.chuang@tuebingen.mpg.de
(3) Media Informatics and Multimedia Systems, University of Oldenburg, Oldenburg, Germany; firstname.lastname@uol.de
ABSTRACT
Take-over situations in highly automated driving occur when drivers have to take over vehicle control due to automation shortcomings. Due to the high visual processing demand of the driving task and the time limitation of a take-over maneuver, appropriate user interface designs for take-over requests (TORs) are needed. In this paper, we propose applying ambient TORs, which address the peripheral vision of a driver. Conducting an experiment in a driving simulator, we tested a) ambient displays as TORs, b) whether contextual information could be conveyed through ambient TORs, and c) whether the presentation pattern (static, moving) of the contextual TORs has an effect on take-over behavior. Results showed that conveying contextual information through ambient displays led to shorter reaction times and longer times to collision without increasing the workload. The presentation pattern, however, did not have an effect on take-over performance.
ACM Classification Keywords
H.5.m. Information Interfaces and Presentation (e.g. HCI):
Miscellaneous
Author Keywords
Ambient light display; attention; automated driving; driver;
hand over; take-over request.
INTRODUCTION
With the rise of advanced driver assistance systems (ADAS), highly automated driving has become more probable. In the past years, the automotive industry has introduced automated vehicles as the next disruptive innovation in the market. However, according to NHTSA [1], the transition from manual to automated driving happens gradually and in different levels. They predict that the next generation of vehicles will be at level 3 of automation, "Limited Self-Driving Automation". This means that the driver does not need to monitor the roadway
constantly, but must still be available to take over control in occasional cases of hazard, with sufficiently comfortable transition times. These situations are known as take-over situations [20].

Figure 1: Exploring peripheral visual cues as take-over requests
Highly automated driving can increase drivers' safety by reducing human failures, and can increase comfort by relieving drivers of the driving task [5]. This can change the role of drivers and encourage them to engage in more secondary tasks such as reading, surfing the internet, etc. [7]. Consequently, the in-vehicle environment becomes a multi-tasked context where drivers switch between driving and their secondary tasks.
In take-over situations, drivers have to switch from their secondary task to the driving task. During this switch, they have to shift their attention from one task to the other, which requires perceiving the state of the driving environment, making decisions, and acting accordingly. Being engaged with secondary tasks can make drivers vulnerable to delays or errors when getting back to the driving task, because they have had no chance to attend to it and are not in the loop [16]. This can lead to hazardous situations if the driver has to take over vehicle control due to automation shortcomings. Therefore, appropriate user interface designs for take-over requests (TORs) are required to ensure a smooth transition from the secondary to the driving task.
Researchers have run experiments on the design, effectiveness, and timing of take-over requests [20, 6, 18, 15]. However, to our knowledge, existing work has investigated the effectiveness of TORs in isolation, without considering the multi-tasked context of driving and the large amount of visual information that drivers have to process. A futuristic in-vehicle environment that relies heavily on automation raises a vital question: how can we support a driver's ability to seamlessly switch from engaging with a non-vehicle-handling task to monitoring and/or resuming the diverse complex maneuvers that constitute effective vehicle handling?
One aspect of this question is the scheduling of visual information processing. In a dynamic and multi-tasked environment like driving, it is vital to define methodologies for effective communication of information through visual computing, i.e., the process of taking complex data and presenting it in a visual format that is easy to understand.
In this work, we follow an approach to support take-over situations using ambient visual displays. Our work yields three main research contributions: first, we show that, with an audio cue priming users to the urgency of the take-over situation, locating the visual cue in the periphery (namely, a peripheral light display) can reduce mental workload and assist safe maneuvers; second, we show that our light display can convey contextual information to assist steering in take-over situations; and third, we investigate whether different light patterns for presenting this contextual information have an effect on driving behavior.
The rest of this paper is structured as follows: first, we provide the background and a survey of relevant related work, followed by our approach for designing peripheral light display TORs. Then, we present the details of our experiment design, followed by our results and discussion. Finally, we conclude and provide an outlook on future work.
RELATED WORK
Designing TORs has been a topic of research in recent years. Gold et al. [6] ran an experiment on the timing of TOR presentation. They showed that shorter take-over times result in faster but worse reactions. Koo et al. explored verbal message contents for situations in which the automation takes over control from the driver. Their results showed that "how messages" (messages describing actions of the automation) were less preferred and led to worse driving performance in comparison to "why messages" (messages describing reasons for the automation's actions) [9]. Naujoks et al. ran studies on designing TORs in auditory and visual modalities. They used a symbol showing hands on a steering wheel in the instrument cluster display as the visual message. They found that audio-visual messages result in shorter reaction times and better lateral control of the vehicle than visual-only messages [18]. Baldwin and Lewis studied the use of different words to convey different levels of urgency to drivers; they found that the word "danger" gained higher ratings of urgency than the words "warning" and "caution" [2]. Building on this work, Politis et al. ran an experiment on multimodal language-based warnings for take-over situations [20]. They investigated all combinations of audio, visual, and tactile warnings, and measured perceived urgency, annoyance, and the effectiveness of the warnings. Their results indicated that an increased number of modalities enhances perceived urgency and annoyance, that higher-urgency warnings result in shorter transition times, and that unimodal visual warnings result in poor driving behavior.
All the studies above used visual cues presented in the form of icons/symbols on the instrument cluster or head-up displays. These messages require focal visual attention, which is used for fine detail and pattern recognition. This is the same visual channel required for performing the visual tracking task while driving. Considering the time limitation in take-over situations, according to Wickens' multiple resource theory, this can create a bottleneck in the time sharing of visual processing tasks [24]. Therefore, in this work, we propose using peripheral light displays for conveying take-over requests to drivers. According to Leibowitz et al. [12], peripheral displays address peripheral vision, which draws on separate resources from focal vision and thus supports efficient time sharing.
Peripheral displays convey information without demanding much attention or cognitive effort and do not interfere with users' primary task [14, 23]. Due to these characteristics, and the high visual workload in the driving context, researchers have been studying the application of peripheral displays in cars. Laquai et al. [11] used ambient light displays to indicate future deceleration. Langlois [10] used peripheral light reflected on the windshield to match problems signaled by advanced driver assistance systems; this display indicated information such as distance alerts and lateral deviation. Löcken et al. [13] applied ambient light to a lane change assistant indicating the distance to other cars and the appropriate time to overtake. Meschtscherjakov et al. [17] used ambient light displays to assist drivers in maintaining a predefined speed. These works show that peripheral light displays can be used as a means to convey information to drivers in a non-obtrusive fashion.
DESIGNING AMBIENT TAKE-OVER REQUEST
As mentioned earlier, TORs perform their role in two stages: first, they shift the drivers' attention away from the secondary task, and second, they provide them with information to perform appropriate actions. Based on the NHTSA definition of level 3 automation, we define take-over situations as situations where the driver has to take back control of the vehicle due to automation shortcomings, within sufficiently comfortable transition times [1]. Therefore, in the first stage, TORs need to be salient enough to attract drivers' attention. According to Politis et al. [21] and Baldwin and Lewis [2], multimodal cues convey a higher level of urgency than unimodal ones. This perception of urgency, however, does not always improve performance. Work by Baldwin and May shows that collision avoidance was best with warnings whose perceived urgency was neither too high nor too low [3].
Based on these results, we designed our TOR as a bimodal display, adding one more modality to our ambient visual TOR to communicate the urgency of the situation without stressing the drivers. We therefore chose to present an auditory cue simultaneously with the light cue; auditory warnings are recognized faster and are rated as less annoying [21]. In line with [18], we used a sine wave tone with a frequency of 1000 Hz and a duration of 250 ms.
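For reference, such a cue is simple to synthesize offline. The following is a minimal sketch that writes a 1000 Hz, 250 ms sine tone to a WAV file; the sample rate, amplitude, and file name are our assumptions, not details from the paper:

```python
import wave
import numpy as np

SAMPLE_RATE = 44100   # assumed sample rate (Hz)
FREQ_HZ = 1000        # tone frequency used in the study
DURATION_S = 0.25     # 250 ms, as in the study
AMPLITUDE = 0.5       # assumed loudness, scaled to the 16-bit range below

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
samples = (AMPLITUDE * np.sin(2 * np.pi * FREQ_HZ * t) * 32767).astype(np.int16)

with wave.open("tor_cue.wav", "wb") as f:
    f.setnchannels(1)              # mono
    f.setsampwidth(2)              # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(samples.tobytes())
```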
The light cues were presented via an illuminated LED strip located behind the steering wheel. Our first goal in this work was to investigate whether ambient light displays can be effective when used to prime a take-over situation. Therefore, we decided to light up the whole LED strip simultaneously with the auditory cue as an indication of the situation.
In a fully automated scenario, when drivers have sufficient trust in automation, it can be foreseen that they will engage more in secondary tasks. Therefore, a take-over maneuver not only requires disengagement from the secondary task, but also demands bringing drivers back into the loop. The second goal of this work was to examine whether presenting contextual information in TORs affects drivers' performance. We aimed to investigate whether contextual information can be encoded and conveyed through light cues, assisting drivers in getting back into the loop and being aware of the situation. Thus, we designed light cues which not only inform drivers about an upcoming take-over situation, but also hint at the steering direction when there is an obstacle on the way.
Our third objective, built on top of the second one, was to investigate whether the presentation pattern of the light cues has an effect on drivers' performance. We designed two different patterns for shifting the visual attention of the driver by hinting at the direction of the obstacle: 1) static light and 2) moving light. According to Neville and Lawson [19], moving stimuli have a significantly larger effect on shifting peripheral attention than stationary stimuli. Thus, our assumption was that information presented by moving light is perceived better than by static light.
METHOD
Equipment
Take-over requests were examined in a driving simulator to ensure repeatable outcomes for comparison and to minimize the risk to drivers engaged in a distracting task on the road. A fixed-base, right-hand-traffic driving simulator with a field of vision of 150° was used. The simulation was created with SILAB (https://wivw.de/en/silab). Auditory cues were played simultaneously with the light cues from speakers built into the driving simulator, located behind the driver on both sides.
As mentioned before, peripheral light displays were chosen to
present visual cues. In this study, an Adafruit NeoPixel Digital
RGB LED strip with a resolution of 144 LEDs per meter was
used. To reduce the intensity of the light display, the LED strip
was placed in a matte white acrylic LED profile. The frame
was located on the dashboard of the driving simulator behind
the steering wheel, 65 degrees from fixation on the tablet PC presenting the 1-back task, which lay on the drivers' laps.
To detect the eye gaze of the participants during the experiment, they were asked to wear Dikablis Glasses by Ergoneers (http://www.ergoneers.com). The eye tracker was calibrated before each trial to ensure consistent tracking of eye-gaze behavior; two physical markers on the front panel and two virtual ones on the simulator displays were used. The calibration procedure took between 30 seconds and one minute per trial. We used the standard eye-tracker software for calibration, video recording, and analysis of participants' eye gaze.
Participants
Twenty-one students and research assistants (10 female, 11 male; M = 26.33 years, SD = 4.02) took part in our experiment. They had between 1 and 19 years of driving experience. Participation in this study was voluntary, and participants were rewarded with €15 for the one-and-a-half-hour session.
Experiment Design and Measures
In our experiment, we chose a repeated-measures within-subject design to ensure that all participants underwent all light cue conditions. We had one independent variable: the light condition. In all conditions we measured the following dependent variables:

- Reaction time (RT): the time between the presentation of the TOR and the first steering action,
- Time to collision (TTC) to obstacle: the time remaining to collision with the road block at the moment of the lane-change maneuver,
- Workload: perceived workload ratings using the NASA Raw Task Load Index (RTLX) [4], comprising six scales and an overall task load index, and
- Eye-gaze behavior: the number and duration of glances at the light display when the TORs were presented.

We also collected qualitative feedback from the participants about the light displays and their influence on the take-over task.
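To make the two driving measures concrete, the following is a minimal sketch of how RT and TTC can be computed from logged simulator signals. The signal layout and the 2-degree steering threshold are our assumptions; the paper does not specify its extraction procedure:

```python
def reaction_time(tor_time_s, steering_log, threshold_deg=2.0):
    """RT: time from TOR onset to the first steering action.

    steering_log is a list of (timestamp_s, steering_angle_deg) samples;
    the 2-degree threshold defining a 'steering action' is an assumption.
    """
    for t, angle in steering_log:
        if t >= tor_time_s and abs(angle) > threshold_deg:
            return t - tor_time_s
    return None  # no steering response was recorded

def time_to_collision(distance_to_block_m, speed_mps):
    """TTC at the lane-change maneuver: the time the vehicle would need
    to reach the road block if it kept its current speed."""
    return distance_to_block_m / speed_mps
```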
Procedure
Driving Task
The driving scenario took place in a highly automated car driving in the center lane of a 3-lane highway. The participants were asked to activate the automation at the beginning of each trial by pulling the cruise control lever. As soon as the automation was activated, we asked the participants to start a 1-back task on the tablet PC located on their laps, holding their hands on both sides of it. They were encouraged to perform their best at the 1-back task and to keep their eyes on it. At 30-40 second intervals, a light and an audio cue were presented as a TOR, informing them of a road block (a truck with road construction signs and alerts parked on the road) on either the left or the right lane. The TORs were presented at 5 seconds TTC to the road block, in line with Gold et al. [6]. Due to this obstacle, the other cars on the road were changing lanes, leaving the drivers to either move to the lane opposite the block or brake. We asked our participants to change lanes in all trials and to use the brakes only in cases where they could not perform the lane-change maneuver. As soon as the drivers moved the steering wheel,
the control of the vehicle was given back to them. After they had passed the road obstacle, they had to switch back to the center lane, activate the automation, and continue the 1-back task (Figure 1).

Figure 2: Spatial 1-back task: if the position of the lit-up square is the same as the time before, the button "position" should be pressed.
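For a sense of scale: the paper does not report the simulated driving speed, but at a typical highway speed of, say, 120 km/h (33.3 m/s), a 5-second TTC corresponds to the TOR being issued roughly 167 m ahead of the road block.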
Secondary Task
The secondary task was a spatial 1-back task presented in an Android application (https://play.google.com/store/apps/details?id=cz.wie.p.nback). Participants performed this task on a tablet PC which lay on their laps. The task consisted of a 3×3 grid. At one-second intervals, one of the cells lit up for a duration of one second. When the lit-up cell was at the same position as the previous one, participants had to press a button labeled "position". This task was chosen because it requires focused attention and increases working memory load. To keep participants engaged in this task, we encouraged them to perform their best by showing them their records (Figure 2).
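The task logic can be summarized in a short sketch. The following is a minimal console stand-in for the Android app, assuming the grid size, one-second pacing, and match rule described above; the scoring and the respond callback are hypothetical:

```python
import random
import time

GRID_CELLS = 9    # 3x3 grid, cells indexed 0..8
INTERVAL_S = 1.0  # a new cell lights up every second

def run_1back(n_trials=20, respond=lambda: False):
    """Run one block of the spatial 1-back task.

    respond stands in for the "position" button: it returns True if
    the button was pressed during the current trial. The function
    returns the number of correct trials (hits plus correct rejections).
    """
    previous = None
    correct = 0
    for _ in range(n_trials):
        current = random.randrange(GRID_CELLS)  # cell that lights up
        is_match = current == previous          # same cell as one trial back?
        if respond() == is_match:
            correct += 1
        previous = current
        time.sleep(INTERVAL_S)                  # cell stays lit for one second
    return correct
```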
Conditions
Three test conditions were defined, corresponding to our TOR designs:

- Baseline: the whole light display was turned on in red (RGB(255,0,0), brightness 10, where 0 is the dimmest and 255 the maximum), in line with [20], informing the driver of the upcoming take-over situation (Figure 3a).

- Static light: the left or the right half of the light display was turned on in red, informing the driver of the upcoming take-over situation and the direction to steer to (Figure 3b).

- Moving light: the left or the right half of the light display was turned on in red with a moving pattern, informing the driver of the upcoming take-over situation and the direction to steer to (Figure 3c). For the moving pattern, a window of 10 of the 72 LEDs (half of the 144-LED strip) was lit and shifted along the display with a delay of 12 ms per step, iterating through the display; a sketch of this timing is given below.
Each of these conditions consisted of 30 trials (take-over situations). To avoid learning biases, the order of conditions was counterbalanced between participants.
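To illustrate the timing of the moving pattern, the following is a minimal simulation of the lit LED indices, assuming the 72-LED half display, 10-LED window, and 12 ms step delay described above; the show callback stands in for the actual LED driver, which the paper does not describe:

```python
import time

HALF_DISPLAY = 72     # LEDs in the half of the strip that is lit
WINDOW = 10           # LEDs lit at any moment
STEP_DELAY_S = 0.012  # 12 ms between steps

def lit_indices(step, left=True):
    """Return the LED indices lit at a given step of the moving pattern."""
    window = [(step + i) % HALF_DISPLAY for i in range(WINDOW)]
    # Mirror the indices when the cue points to the right instead
    return window if left else [HALF_DISPLAY - 1 - i for i in window]

def run_moving_cue(duration_s=5.0, left=True, show=print):
    """Sweep the 10-LED window across the half display until time is up.

    One sweep takes 72 * 12 ms, roughly 0.86 s, so about five sweeps
    fit into the 5-second TTC budget at which the TOR is issued.
    """
    for step in range(int(duration_s / STEP_DELAY_S)):
        show(lit_indices(step % HALF_DISPLAY, left))  # stand-in for the LED driver
        time.sleep(STEP_DELAY_S)
```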
Pilot Study
Prior to our experiment, we ran a pilot study with 5 participants to examine the experiment setup. Our observations and the participants' qualitative feedback showed that using the light direction in conditions 2 and 3 to hint at the location of the obstacle was not intuitive: all participants steered towards the direction that the light was cuing, despite the fact that the light was presented in red, which is an indication of danger or hazard. Therefore, we redefined conditions 2 and 3 as described above: the left or the right half of the light display was turned on in red (with a moving pattern for the moving condition), informing the drivers of the upcoming take-over situation and the direction to steer to (Figure 3). Based on the qualitative results collected from this study, the location and brightness of the light display and the position of the tablet PC were also optimized.
Figure 3: Light conditions (all images direct to the left side): (a) baseline: all LEDs turn on; (b) static light: half of the LEDs turn on to hint the steering direction; (c) moving light: a number of LEDs iteratively turn on to hint the steering direction.
Training Time
Each participant had between 3 and 5 minutes of training time to become familiar with both the 1-back task on the tablet PC and the driving and take-over tasks. They performed both tasks in parallel (exactly as defined in the experiment scenario) for 2 to 5 trials, until they felt comfortable with the tasks.
Hypotheses
As mentioned earlier, we had three main objectives in this work: first, to investigate whether ambient light displays can be used as TORs; second, whether conveying contextual information in ambient light TORs can improve take-over maneuvers; and third, whether the light pattern used for conveying contextual information has an effect on the take-over. Because the static and moving light conditions convey contextual information, we assumed that they simplify decision making in take-over situations and would thus result in shorter reaction times and less risky maneuvers than the baseline condition. As mentioned in the design section, moving stimuli have been shown to be perceived better than stationary ones. Therefore, we hypothesized that reactions in the moving light condition would be faster and less risky than in the static light condition. Similarly, for perceived overall task workload, we assumed lower ratings for the static and moving conditions due to their assistive functionality, and lower ratings for the moving condition in comparison with the static one, because movement is easier to perceive. The following hypotheses were tested in our experiment:

H1: the reaction time in the static and moving light conditions will be shorter than in the baseline condition
H2: the reaction time in the moving light condition will be shorter than in the static condition
H3: the time to collision to the obstacle in the static and moving light conditions will be longer than in the baseline condition
H4: the time to collision to the obstacle in the moving light condition will be longer than in the static condition
H5: the overall NASA RTLX rating in the baseline condition will be higher than in the static and moving light conditions
H6: the overall NASA RTLX rating in the static condition will be higher than in the moving condition
RESULTS
Reaction Time and TTC
In all trials we measured the reaction time, defined as the time between the presentation of the TOR and the first steering action taken by participants. Figure 4 (left) shows the reaction times in milliseconds in the three light conditions. We also measured the time to collision to the obstacle (TTC), defined as the time remaining to collision with the road block at the moment of the lane-change maneuver. This measure was used to observe the driving behavior of participants in the different light conditions and how risky their maneuvers were. Figure 4 (right) shows the TTC means in seconds in all light conditions.

There was a main effect of cue type on TTC (F(2,38) = 7.70, p = 0.002, ω² = 0.246) and on RTs (F(2,38) = 7.46, p = 0.002, ω² = 0.240). Post-hoc Tukey HSD tests on both measures revealed that both the static and moving cue conditions were significantly different from the baseline cue condition, but not from each other. We performed a two-sided Bayesian hypothesis test (with a default Cauchy prior width of 0.707) to quantify the hypothesis that there is no performance difference between the static and moving cue conditions. The Bayes factor (BF01) was 4.30 for TTC and 3.58 for RTs, meaning that the null hypothesis is 4.30 (respectively 3.58) times more likely to be true than the alternative. This represents 'substantial' evidence that the two types of cues do not differ with regard to their performance benefits [8, 22].
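This kind of analysis is straightforward to reproduce. The following is a minimal sketch using the pingouin package, whose default JZS prior matches the Cauchy width of 0.707 reported above; the arrays are hypothetical per-participant TTC means, not the study's data:

```python
import numpy as np
import pingouin as pg

# Hypothetical per-participant TTC means (s) in the two cue conditions
static_ttc = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3, 2.7, 3.1, 3.0])
moving_ttc = np.array([3.0, 2.9, 3.3, 3.1, 2.8, 3.2, 3.2, 2.8, 3.0, 3.1])

# Paired Bayesian t-test; pingouin reports BF10 (evidence for a difference)
res = pg.ttest(static_ttc, moving_ttc, paired=True)
bf01 = 1.0 / float(res["BF10"].iloc[0])  # evidence for the null, as in the paper
print(f"BF01 = {bf01:.2f}")
```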
Figure 4: Reaction time means (left) and TTC means (right) in the three light conditions
NASA Raw Task Load Index
After performing the tasks in each condition, we asked participants to fill in the NASA RTLX questionnaire and rate the perceived workload of the task. The overall RTLX results showed that the workload of performing the tasks in the moving light condition (M = 27.14, SD = 16.32) was rated lower than in the static light condition (M = 28.57, SD = 16.03), which in turn was rated lower than in the baseline condition (M = 33.21, SD = 12.19). Mauchly's test indicated that the assumption of sphericity was not violated, χ²(2) = 3.00, p = 0.223; Huynh-Feldt corrections (ε = 0.94) were nevertheless applied, and the ANOVA showed no significant difference, F(1.89, 37.95) = 2.16, p = 0.132. Figure 5 shows the means of all six RTLX scales in the three light conditions. The moving light was rated lowest on every scale except frustration and temporal demand (for performance, a lower value indicates better performance). In the conditions where the TOR light cued the drivers with the steering direction (static and moving), temporal demand is the scale with the highest value; in the baseline condition, it is the second highest after mental demand. This can indicate that cuing drivers with the steering direction reduces the mental workload of task switching and decision making.
Figure 5: NASA RTLX results for the three TOR light conditions. E: effort, F: frustration, MD: mental demand, P: performance, PD: physical demand, TD: temporal demand
EYETRACKING ANALYSIS
In all three light conditions, we measured the number of glances at the light display when the TOR was presented, as well as the duration of these glances. Figure 6 shows the means and standard deviations of the number and duration of glances in all three light conditions.

We observed no main effect of cue type on the duration of glances (F(2,38) = 2.235, p = 0.121, ω² = 0.057) or on the number of glances (F(2,38) = 3.088, p = 0.057, ω² = 0.092). We performed JZS Bayesian t-tests in order to understand how the static and moving cues compared to the baseline. In terms of the number of glances, the static cue (BF01 = 0.21) was more likely than the moving cue (BF01 = 0.49) to be different from the baseline. In addition, the mean duration of these glances was more likely to be different from the baseline for the static cue (BF01 = 0.8) than for the moving cue (BF01 = 1.66).
Using the labels provided by [22], we have ’substantial’ evi-
dence that static cues attract more glances than the baseline but
only ’anecdotal’ evidence for moving cues. Furthermore, we
have ’anecdotal’ evidence that moving cues result in glances
that have similar duration lengths as our baseline cues, and
’anecdotal’ evidence that static cues result in longer glances.
Figure 6: Means and standard deviations for the number of glances (right) and duration of glances (left)
Qualitative Feedback
After the participants had performed the tasks under the three conditions, we interviewed them about how they found the light display, whether they would want to use it as a TOR, and what problems they saw in using it.

All participants said that they found the light displays useful and, in general, not annoying or stressful. They mentioned that having light in the periphery together with the auditory cue attracted their attention faster to the handover task. 85% of the participants found the conditions with contextual cuing (static and moving) more helpful than the baseline light. They mentioned that "it saves time scanning the road and seeing what is wrong and what I have to do". Between the static and moving lights, the moving light was preferred (71%); "I felt the movement of the light was instructing me to steer to the direction", one participant mentioned. The participants who preferred the static light mentioned that it was less distracting and annoying than the moving light. One participant said that "the moving light stresses me because it reminds me of police or ambulance alerts".
DISCUSSION
Ambient Light as TOR
The main objective of this work was to investigate ambient light cues as take-over requests. Our motivation for using these displays was the high demand on focal visual resources in the driving context. Based on Wickens' multiple resource theory, we designed an ambient display which addresses the peripheral vision of drivers. We defined three light conditions, one of which (baseline) was simply meant to shift the attention of the driver from a secondary task to the take-over situation; the other two were designed to convey contextual information about the traffic situation in addition to the attention-shifting functionality.
In our experiment, we measured reaction time as a measure of the perception and understanding of TORs, and TTC as an indicator of driving behavior. Results showed that the conditions in which the displays conveyed contextual information about the driving environment (static, moving) led to shorter reaction times and safer driving. These results confirm our first and third hypotheses (H1, H3). Based on this, we conclude that the conveyed information can increase situation awareness and assist drivers in making decisions and performing appropriate actions. The overall perceived workload was also rated lower in these conditions than in the baseline, although the difference was not statistically significant. This implies that ambient displays could support drivers in shifting their attention from a secondary task while conveying information about the upcoming maneuver at take-over time, without increasing the workload. This is in line with our fifth hypothesis (H5).
We tested two different light patterns for conveying information to the driver. Drawing on related work, we designed a moving light cue in addition to the static one. Results indicated that, contrary to our assumptions in hypotheses H2 and H4, the light pattern of the presented cue did not have an effect on reaction times or TTC. The ratings of perceived workload, however, show that despite creating more frustration, the moving pattern was rated as less demanding. Nonetheless, the overall perceived workload did not differ significantly between these two conditions, which refutes our sixth hypothesis (H6).
Ambient Displays and Peripheral Vision
We used ambient displays to address the peripheral vision of drivers, conveying information about the upcoming take-over situation without overloading their focal resources.
Using the labels provided by [22], we have ’substantial’ evi-
dence that static cues attract more glances than the baseline but
only ’anecdotal’ evidence for moving cues. Furthermore, we
have ’anecdotal’ evidence that moving cues result in glances
that have similar duration lengths as our baseline cues, and
’anecdotal’ evidence that static cues result in longer glances.
One reason for this merely suggestive evidence could be a small effect size. Future research should focus on these plausible differences and be specifically designed to study how the strength of static or moving cues varies in terms of the glances they attract away from the outside world; the current study was not explicitly designed for this purpose. These results suggest that moving peripheral cues are well suited as TORs: they prime drivers with contextual information and result in fewer and shorter glances than the static condition. Although the baseline cue led to the smallest number and duration of glances, given the driving-behavior results (reaction times and TTC), it does not surpass the moving cue in terms of its effect on performance.
Conveying Information at Take-over Situations
In this work, we visualized the steering direction at take-over situations using ambient light, given the location of the road block. Our first idea was to shift drivers' attention to where the road block is (what to avoid). However, during our pilot study, we observed that users perceived the displayed direction as the direction to steer to (what to do), despite the display being red, which is an indicator of danger or avoidance. This led us to the conclusion that users expect the system to direct them to what to do in hazardous situations or decision-making bottlenecks. Therefore, we changed our setup to have the light direct users to where to steer to. The question of what information should be communicated to the user in take-over situations to support a smooth transition is, however, beyond the scope of this work and requires further research.
LIMITATIONS
As with early in-vehicle prototype evaluations, our study was run in a driving simulator under a specific road scenario. Real driving conditions, however, can be more dynamic and complex than what was tested in our work. For example, traffic, weather, and road conditions, or the type, duration, and level of engagement in the secondary task, are factors that can influence take-over behavior. However, the goal of this work was to provide an initial validation of whether peripheral light display cues can be used as take-over requests; further testing of our approach is required before deployment in a real test scenario.
CONCLUSION
Cues for TORs perform their role in two stages. First, attention must be disengaged from the non-driving task; this is typically achieved with salient sensory cues. Second, information can be provided to cue the appropriate action that the driver has to perform. In the current study, we employed three types of cues: a baseline cue that only promotes attentional disengagement from the non-driving task towards the visual world, and two cues (static vs. moving) that provide information about the action that should be taken. A static cue is expected to attract overt attention to itself, while a moving cue is expected to attract overt attention to the operational context. This is because the former attracts focal resources while the latter is processed with ambient resources [24]. We argue that although both types of cues yield equivalent performance improvements in terms of response time, ambient cues are more suitable: they cue focal attention appropriately to the operational context without capturing focal attention itself.
FUTURE WORK
This study sought to evaluate the effectiveness of ambient light cues as take-over requests. With the collected results, we showed that ambient cues which convey contextual information can result in shorter reaction times and safer maneuvers without increasing workload or requiring more or longer glances. However, we only tested a light display that indicated a single information item (the steering direction) at the take-over situation. Follow-up work should investigate whether more information about the driving context can be encoded in ambient light displays to support take-over maneuvers.
ACKNOWLEDGMENT
Lewis Chuang is financially supported by the German
Research Foundation (DFG) within the project C03 of
SFB/Transregio 161.
Special thanks to Lars Weber and Torben Wallbaum for their
assistance in this work.
REFERENCES
1. National Highway Traffic Safety Administration. 2013. Preliminary statement of policy concerning automated vehicles. Washington, DC (2013).
2. Carryl L Baldwin and Bridget A Lewis. 2014. Perceived
urgency mapping across modalities within a driving
context. Applied ergonomics 45, 5 (2014), 1270–1277.
3. Carryl L Baldwin and Jennifer F May. 2011. Loudness
interacts with semantics in auditory warnings to impact
rear-end collisions. Transportation research part F:
traffic psychology and behaviour 14, 1 (2011), 36–42.
4. James C Byers, AC Bittner, and SG Hill. 1989.
Traditional and raw task load index (TLX) correlations:
Are paired comparisons necessary. Advances in industrial
ergonomics and safety I (1989), 481–485.
5. Daniel Damböck, Thomas Weißgerber, Martin Kienle,
and Klaus Bengler. 2012. Evaluation of a contact analog
head-up display for highly automated driving. In 4th
International Conference on Applied Human Factors and
Ergonomics. San Francisco. USA.
6. Christian Gold, Daniel Damböck, Lutz Lorenz, and Klaus Bengler. 2013. "Take over!" How long does it take to get the driver back into the loop? In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 57. SAGE Publications, 1938–1942.
7. PA Hancock, Lisa Simmons, L Hashemi, H Howarth, and T Ranney. 1999. The effects of in-vehicle distraction on driver response during a crucial driving maneuver. Transportation Human Factors 1, 4 (1999), 295–309.
8. Robert E Kass and Adrian E Raftery. 1995. Bayes factors. Journal of the American Statistical Association 90, 430 (1995), 773–795.
9. Jeamin Koo, Jungsuk Kwac, Wendy Ju, Martin Steinert,
Larry Leifer, and Clifford Nass. 2015. Why did my car
just do that? Explaining semi-autonomous driving actions
to improve driver understanding, trust, and performance.
International Journal on Interactive Design and
Manufacturing (IJIDeM) 9, 4 (2015), 269–275.
10. Sabine Langlois. 2013. ADAS HMI Using Peripheral
Vision. In Proceedings of the 5th International
Conference on Automotive User Interfaces and
Interactive Vehicular Applications (AutomotiveUI ’13).
ACM, New York, NY, USA, 74–81.
11. Florian Laquai, Fabian Chowanetz, and Gerhard Rigoll.
2011. A large-scale LED array to support anticipatory
driving. In Systems, Man, and Cybernetics (SMC), 2011
IEEE International Conference on. IEEE, 2087–2092.
12. HW Leibowitz, RB Post, Th Brandt, and J Dichgans.
1982. Implications of recent developments in dynamic
spatial orientation and visual resolution for vehicle
guidance. In Tutorials on motion perception. Springer,
231–260.
13. Andreas Löcken, Heiko Müller, Wilko Heuten, and
Susanne Boll. 2015. An experiment on ambient light
patterns to support lane change decisions. In Intelligent
Vehicles Symposium (IV). IEEE, 505–510.
14. Tara Matthews, Anind K. Dey, Jennifer Mankoff, Scott
Carter, and Tye Rattenbury. 2004. A Toolkit for
Managing User Attention in Peripheral Displays. In
Proceedings of the 17th Annual ACM Symposium on User
Interface Software and Technology (UIST ’04). ACM,
New York, NY, USA, 247–256.
15. Natasha Merat, A Hamish Jamson, Frank CH Lai,
Michael Daly, and Oliver MJ Carsten. 2014. Transition to
manual: Driver behaviour when resuming control from a
highly automated vehicle. Transportation research part F:
traffic psychology and behaviour 27 (2014), 274–282.
16. Natasha Merat and John D Lee. 2012. Preface to the
special section on human factors and automation in
vehicles designing highly automated vehicles with the
driver in mind. Human Factors: The Journal of the
Human Factors and Ergonomics Society 54, 5 (2012),
681–686.
17. Alexander Meschtscherjakov, Christine Döttlinger,
Christina Rödel, and Manfred Tscheligi. 2015.
ChaseLight: Ambient LED Stripes to Control Driving
Speed. In Proceedings of the 7th International
Conference on Automotive User Interfaces and
Interactive Vehicular Applications (AutomotiveUI ’15).
ACM, New York, NY, USA, 212–219.
18. Frederik Naujoks, Christoph Mai, and Alexandra
Neukum. 2014. The effect of urgency of take-over
requests during highly automated driving under
distraction conditions. Advances in Human Aspects of
Transportation Part I (2014), 431.
19. Helen J. Neville and Donald Lawson. 1987. Attention to
central and peripheral visual space in a movement
detection task: an event-related potential and behavioral
study. I. Normal hearing adults. Brain Research 405, 2
(1987), 253–267.
20. Ioannis Politis, Stephen Brewster, and Frank Pollick.
2015. Language-based multimodal displays for the
handover of control in autonomous cars. In Proceedings
of the 7th International Conference on Automotive User
Interfaces and Interactive Vehicular Applications. ACM,
3–10.
21. Ioannis Politis, Stephen A Brewster, and Frank Pollick.
2014. Evaluating multimodal driver displays under
varying situational urgency. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems. ACM, 4067–4076.
22. Eric-Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, and Han LJ van der Maas. 2011. Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011). Journal of Personality and Social Psychology 100, 3 (2011), 426–432.
23. Mark Weiser and John Seely Brown. 1996. Designing
calm technology. PowerGrid Journal 1, 1 (1996), 75–85.
24. Christopher D Wickens. 2002. Multiple resources and
performance prediction. Theoretical issues in ergonomics
science 3, 2 (2002), 159–177.