I See You! Design Factors for Supporting Pedestrian-AV Interaction at Crosswalks

Avram Block
Motional
Boston, Massachusetts, USA
aviblock@msn.com
Aryaman Pandya
Motional
Boston, Massachusetts, USA
aryaman.pandya@motional.com
Seonghee Lee
Cornell
Ithaca, New York, USA
seonghee.lee@motional.com
Paul Schmitt
MassRobotics
Boston, Massachusetts, USA
pauls@massrobotics.org
ABSTRACT
With the advent of autonomous vehicles (AVs) on public roads, the frequency of interactions between these AVs and pedestrians will increase. One example of such an interaction is at unsignalized crosswalks, where pedestrians and vehicles must negotiate for the right of way. Studies show that these interactions often use social communication channels. This paper addresses how AVs can fill this communication gap, focusing on the impact of pedestrian self-identifiability. Using VR, we designed two novel awareness-conveying behaviors, and a control condition with no awareness behavior. We then conducted a within-subjects VR study with 19 participants in which they traversed a crosswalk in front of a driverless vehicle in each experimental condition and rated their experience across seven probes. Results indicated that an awareness-conveying behavior significantly increased pedestrians' sense of safety and that increases in self-identifiability further improved pedestrians' experience without resulting in a heightened sense of surveillance from the vehicle.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
HRI ’23 Companion, March 13–16, 2023, Stockholm, Sweden
©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9970-8/23/03. . . $15.00
https://doi.org/10.1145/3568294.3580107
CCS CONCEPTS
• Human-centered computing → Interaction design; Scenario-based design; Collaborative and social computing design and evaluation methods; • Applied computing → Transportation; • Computer systems organization → Robotics.
KEYWORDS
autonomous vehicles, pedestrian, HCI, eHMI design, social robotics
ACM Reference Format:
Avram Block, Aryaman Pandya, Seonghee Lee, and Paul Schmitt. 2023. I See
You! Design Factors for Supporting Pedestrian-AV Interaction at Crosswalks.
In Companion of the 2023 ACM/IEEE International Conference on Human-
Robot Interaction (HRI ’23 Companion), March 13–16, 2023, Stockholm, Sweden.
ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3568294.3580107
1 INTRODUCTION
Traditional interactions between pedestrians and humans driving cars are rife with social exchanges, communicated through gesture, eye contact, and body language [7], [12]. These interactions signal to both parties that the other is aware of them and their intentions, and will plan accordingly. They are integral to the smooth functioning of societies in which pedestrians and vehicles operate in close proximity. One prime example of a site for this sort of interaction is an unsignalized crosswalk, at which neither party receives explicit cues about who has the right of way. Instead, this question is answered by the collective judgement of those present.
As autonomous vehicles are launched onto public roads around the world, they are inevitably bound to find themselves in situations similar to those described above. However, the key communication channel that exists between pedestrian and human driver will be severed, and something must take its place.
Figure 1: Depiction of each of the three experimental conditions. From left to right: C1 Control Condition, C2 Static
Condition, C3 Tracking Condition
Significant research has been conducted on this topic, with many important and constructive findings. The dominant varieties of solutions include expressive movement dynamics of the AV [16], [3], audio cues [11], and, most commonly, external graphical displays (eHMIs) [11], [5], [8]. While movement dynamics and auditory cues appear to be useful methods for supporting these interactions, our work focuses on the use of external displays mounted on the AV.
Within the study of the use of external displays, multiple approaches have been suggested. The primary options identified within this space are text-based displays [6], [15], graphical/icon-based displays [1], [9], and expressive animation of 1-dimensional LED light strips [4], [13]. Some have even studied the use of augmented reality for this purpose [17]. Research suggests that graphical displays are the most effective of these three in terms of legibility and visual salience. Thus, our work continues in this direction and explores factors to consider in the design of the graphical display to be presented on an eHMI for supporting pedestrian-AV interaction at crosswalks.
2 METHODS
The primary purpose of this study was to determine the overall viability of our design solutions to the problem of AV-pedestrian communication at road crossings, given the constraints suggested by the related work described above. Due to the logistically complex nature of our designs, we decided to avoid implementing and studying them using a physical prototype on a real AV. Instead, we conducted the experiment in Virtual Reality. This decision imposed limitations on the research as well, in terms of the complexity of the environment and the immersiveness of the scenario. For these reasons, we relied on self-reported survey ratings as the primary output of the experiment. These survey questions were designed to characterize the participants' sense of safety and comfort throughout the scenario, with respect to the road crossing and the proximity to an AV.
2.1 Study Design
This study used a within-subjects design, in which each participant was presented with each of three experimental conditions exactly once, in counter-balanced random order. The three experimental conditions (shown in Figure 1, and sketched in code after this list) are as follows:
C1 - Control Condition: the eHMI is not used, and remains off for the duration of the scenario
C2 - Static Condition: the eHMI turns on when the participant comes within “trigger” range (2m), and displays a human-like figure which does not move at all throughout the scenario
C3 - Tracking Condition: the eHMI turns on when the participant comes within “trigger” range (2m), and displays a human-like figure which moves according to the pedestrian's relative position as they traverse the crosswalk
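To make the condition logic concrete, the following is a minimal sketch of how the display behaviors above could be driven by pedestrian position. It is illustrative only: the function and parameter names, the planar geometry, and the clamping to the display's width are our assumptions, not the study's actual Unreal Engine implementation.

```python
import math

TRIGGER_RANGE_M = 2.0  # "trigger" range used in conditions C2 and C3

def ehmi_figure_offset(condition, pedestrian_xy, vehicle_xy, display_width_m=1.5):
    """Return the lateral offset (meters) at which to draw the human-like
    figure on the eHMI, or None if the display should be off.

    Hypothetical helper: positions are 2D ground-plane coordinates, with x
    running laterally along the vehicle's front face. Not the paper's code.
    """
    dx = pedestrian_xy[0] - vehicle_xy[0]
    dy = pedestrian_xy[1] - vehicle_xy[1]
    distance = math.hypot(dx, dy)

    if condition == "C1":               # Control: eHMI never turns on
        return None
    if distance > TRIGGER_RANGE_M:      # Outside trigger range: display off
        return None
    if condition == "C2":               # Static: figure shown centered, never moves
        return 0.0
    if condition == "C3":               # Tracking: figure follows the pedestrian's
        half = display_width_m / 2.0    # relative lateral position, clamped to
        return max(-half, min(half, dx))  # the display's physical extent
    raise ValueError(f"unknown condition: {condition}")
```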
2.2 Participants
For this study, we recruited a total of 19 participants, a mix of internal (75%) and external (25%) recruits with respect to employment at an AV company. Our sample population was 20% female and 80% male, and represented an age range from 22 to 50. A vast majority of participants had never experienced Virtual Reality before, while some had used it occasionally, and very few had used a VR headset semi-regularly.
2.3 Scenario Setup
2.3.1 VR Tools. In order to assess the viability of our proposed
design, we decided to carry out this research in Virtual Reality,
which allowed for more accurate representation of the intended
designs than would have been possible with a prototype eHMI.
Additionally, the technological development required for C3, in
which the display follows the participant’s movement across the
crosswalk, is nearly negligible in VR, where the ground truth of
the participant’s location relative to the AV is always known. For
the scenarios in this experiment, we worked with a 3D graphics
consulting group to use Unreal Engine to design the environment,
and to optimize for the Meta Quest 2 as our VR headset equipment.
2.3.2 Environment. Using these tools, we designed the virtual experimental environment. Because this research focuses on minimizing pedestrian uncertainty in ambiguous roadway interactions with AVs, we chose an unsignalized four-way stop. The lack of a traffic light in such an intersection requires right-of-way negotiation between the agents present. In our experimental environment, participants began by standing a few feet back from the curb at one corner of the intersection, facing across one of its entrances. At this closest entrance to the intersection, a generic next-generation AV sat stationary. This vehicle, pictured above, contained multiple passengers, arranged such that the lack of a driver or driver's seat was visible upon inspection. It made varying use of its eHMI based on the experimental condition, but did not physically move at any point in the study. This environmental setup was intended to replicate the common pedestrian experience of arriving at a four-way stop after the vehicle whose path intersects that of the pedestrian, thereby creating the need for a negotiation of which party will proceed in front of the other.
2.4 Procedure
2.4.1 Preliminary Stage. The procedure for this study began with
the researcher explaining to the participant that the experiment
revolves around pedestrian experience while crossing in front of a
vehicle at an unsignalized roadway crossing. No explicit attempt
was made to draw participants’ attention to the eHMI on the vehicle,
or to the fact that the vehicle was driverless. Participants were then
asked preliminary rating questions, such as self-reported familiarity
with autonomous vehicles, risk-propensity when crossing the street,
and the importance of their digital privacy.
2.4.2 Practice Stage. The VR procedure for this study made use of the Meta Quest 2's motion tracking capabilities, and asked participants to physically walk 20 feet in a straight line while wearing the headset, in order to traverse the virtual crosswalk. Even for those from our sample who were more accustomed to Virtual Reality, this was a novel experience. Thus, the VR-based portion of the procedure began by presenting participants with a “practice” version of the environment, in which no vehicles were present. Participants were asked to walk back and forth across the virtual crosswalk until they felt comfortable with this interaction style between the virtual and physical worlds. The decision to use real-world walking for motion control in VR was intended to create a heightened sense of immersion, a higher level of realism with respect to pedestrian motion dynamics, and less motion sickness than is often experienced when moving via joystick [14].
This practice session allowed participants to focus primarily on
the virtual environment during the actual study, rather than being
distracted by concerns about the safety of their movements in the
real world.
2.4.3 Experimental Conditions. Once they felt comfortable in the
practice environment, participants were presented with each of the
experimental conditions. Regardless of the condition, participants
were asked to place the headset on, orient to their virtual surroundings at the intersection, and walk towards their target location
across the street. This required them to cross in front of the AV that
was stopped at the intersection. After completing one traversal,
they were asked to turn around and walk back to their starting
position. During this entire process, participants were also asked to
conduct a think-aloud exercise [19], in which they described everything they were thinking and witnessing while in the environment.
These think-aloud narrations were recorded and transcribed for
qualitative insights.
2.5 Measures
After experiencing each experimental condition, participants were
asked a series of rating and Likert-style questions, all using a 5-point
scale:
Likert:
L1: I felt safe crossing the street
L2: I felt comfortable crossing the street
L3: I felt safe around the vehicle in this scenario
L4: I felt comfortable around the vehicle in this scenario
L5: I understood the vehicle’s intentions
Rating:
R1: Please rate the vehicle’s intelligence on a scale from 1-5
R2: Please rate the vehicle’s creepiness on a scale from 1-5
After having experienced each condition and responded to each
query item, participants were invited to engage in a more informal,
qualitative discussion of all three conditions, through the use of
open-ended questions about each condition:
Q1: What did you like about this behavior?
Q2: What did you dislike about this behavior?
Q3: What would you change about this behavior?
2.6 Data Analysis
Likert and rating questions were collected across participants for each experimental condition. In order to determine significant trends in our survey response data, we conducted Wilcoxon tests between each pair of conditions for each question. In order to determine whether there was a significant difference between all three conditions, we used the Friedman test on each question. In both tests, we used a p-value of 0.05 as the threshold to assert significance.
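As a minimal sketch of this analysis, assuming per-participant scores are stored per condition in participant-aligned lists (the data layout, names, and example numbers below are ours, not the authors'), the two tests could be run with SciPy:

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

ALPHA = 0.05  # significance threshold used in the paper
CONDITIONS = ["C1", "C2", "C3"]

def analyze_probe(name, ratings):
    """ratings maps each condition to a list of scores, one per participant,
    in the same participant order (required for paired tests)."""
    # Omnibus test: any difference across all three conditions?
    _, p_all = friedmanchisquare(*(ratings[c] for c in CONDITIONS))
    print(f"{name} Friedman: p={p_all:.3f} {'(sig.)' if p_all < ALPHA else '(n.s.)'}")

    # Pairwise Wilcoxon signed-rank tests between each pair of conditions
    for a, b in combinations(CONDITIONS, 2):
        _, p = wilcoxon(ratings[a], ratings[b])
        print(f"{name} {a} vs {b}: p={p:.3f} {'(sig.)' if p < ALPHA else '(n.s.)'}")

# Usage with fabricated placeholder scores (not the study's data):
analyze_probe("L1", {"C1": [2, 3, 2, 3, 4], "C2": [4, 4, 3, 4, 5], "C3": [5, 4, 4, 5, 5]})
```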
3 RESULTS AND DISCUSSION
3.1 eHMI Benefit
Our results (displayed in Figure 2) on probes L1 and L2 (“I felt safe crossing the street” and “I felt comfortable crossing the street”) support the extensive body of pre-existing work suggesting that any use of an eHMI to communicate AV awareness at uncertain or unsignalized crossings significantly improves pedestrians' subjective experience of these negotiations [11], [10]. There are significant differences between C1 and both C2 and C3, although no significant differences between C2 and C3 on these probes.
3.2 Static vs. Tracking Display
3.2.1 Continuous Feedback. Beyond this result, we also find significant improvements from C2 to C3 with respect to pedestrians' overall impression of the AV itself. This is reflected across probe results addressing pedestrians' sense of safety and comfort around the vehicle (L3 and L4), as well as their impression of the vehicle's intelligence. In considering our newfound data in conjunction with previous work done in this area, we believe that this difference may be due in part to the ways that C3, in which the human figure follows the pedestrian's location, addresses some of the concerns that previous researchers and users have expressed regarding static eHMI displays such as C2. One of these concerns involves continued or repeated feedback over the course of the interaction between AV and pedestrian.
Figure 2: Survey Results Across Experimental Conditions and Probes. Dotted lines indicate mean values for each probe in each condition, and CX - CX labels beneath each probe indicate statistically significant differences in results.
Most eHMI solutions for supporting crossing decisions contain a static visual, which simply turns on when a pedestrian or AV arrives at the intersection, and remains on and constant until the end of the negotiation. This allows for a single instance of reactive communication from AV to pedestrian, but does not create a feedback loop which is responsive to the unfolding events of the road crossing. Previous research has shown the value of this feedback [5], [2], and comments from participants such as P15, who said of C3: “I like it because of the constant feedback”, support that this aspect of the tracking condition provided valued feedback to pedestrians.
3.2.2 Recipient Ambiguity. The other well-documented concern with the use of eHMIs for this purpose is ambiguity when confronted with multiple pedestrians [5], [9], [18]. Our experiment was constrained to a single pedestrian, due to limitations with the use of VR for multiple simultaneous participants. However, the design put forth in C3 provides a more straightforward basis from which to address the problem posed by multiple pedestrians, and simultaneously assuages fears associated with this problem. With a static, binary display, such as presented in C2, it is unclear how an AV would effectively communicate recognition of a group of pedestrians with different motion patterns and intentions. The state of the display, being either on or off, is intended to communicate with all those present at once. In fact, despite being alone in the VR scenarios, many participants immediately expressed concerns about C2's risk of causing confusion between multiple pedestrians, due to an inability to determine if the display was “meant” for them, or whether they had misinterpreted a signal intended for someone else in the scene. P9, when asked about his impression of C2, stated: “I don't know what it's looking at. If it was looking at someone else, I wouldn't know and might falsely trust it.”
On the other hand, qualitative results showed that C3 was a vast improvement over the static display of C2 in this vein. While performing the think-aloud task during C3, many participants confidently recognized that the figure on the eHMI was moving along with their movements in the crosswalk, and noted “that's me”, recognizing that the car was perceiving them. Further movement in the vicinity of the vehicle only served to confirm this suspicion, thereby removing any uncertainty about whether the display was, in fact, intended for them. In the course of this design research, we began to believe that this self-identifiability is a key component of an effective eHMI for pedestrian-AV interaction. The combination of the movement and humanlike form of the display in C3 led to high reports of self-identifiability, which coincided with elevated impressions of the AV's intelligence and a heightened sense of comfort around the vehicle. While the problem of limited real estate in which to display multiple figures remains relevant, we propose that the use of a larger, wraparound eHMI in conjunction with an appropriate “trigger” radius will minimize the impact of this issue.
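One way to read this proposal concretely: with a wraparound display, each detected pedestrian inside the trigger radius could be assigned a position on the display band by their bearing around the vehicle, so each sees a figure on the panel facing them. The sketch below is purely illustrative; the function, its parameters, and the geometry are our assumptions, not a design the study implemented.

```python
import math

TRIGGER_RADIUS_M = 2.0  # an "appropriate" trigger radius, per the proposal

def wraparound_figure_angles(pedestrians_xy, vehicle_xy, vehicle_heading_rad):
    """Map each nearby pedestrian to a bearing (radians, in [-pi, pi)) around
    the vehicle; a wraparound eHMI could draw that pedestrian's figure at the
    matching point on its 360-degree band. Hypothetical helper, not the
    paper's implementation."""
    angles = {}
    for pid, (px, py) in pedestrians_xy.items():
        dx, dy = px - vehicle_xy[0], py - vehicle_xy[1]
        if math.hypot(dx, dy) > TRIGGER_RADIUS_M:
            continue  # outside trigger radius: no figure for this pedestrian
        bearing = math.atan2(dy, dx) - vehicle_heading_rad
        angles[pid] = (bearing + math.pi) % (2 * math.pi) - math.pi
    return angles
```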
When pursuing a higher degree of self-identifiability in the display, we also wanted to ensure that pedestrians' sense of privacy was not infringed upon. Thus, we asked participants to rate their impression of the AV's “creepiness” on a scale from one to five. Our hypothesis was that the increased sense of safety brought on by C3 might be accompanied by an increase in perceived creepiness. Instead, we found no significant differences in perceived creepiness across C1, C2, and C3. While we cannot conclude from this finding that self-identifiability is not correlated with loss of a sense of privacy, we do take it as a suggestion that there is more room to increase self-identifiability (discussed in Future Work) before these risks are realized and become detrimental to these kinds of interactions.
4 FUTURE WORK AND CONCLUSION
In the research described above, we conducted an experiment on the design of an eHMI display for autonomous vehicles. The use case inspected here was to signal to pedestrians that the AV is aware of their presence at a crosswalk, and that they are safe to proceed across. Through our research, we determined that one particularly important factor in designing for this task effectively is ensuring a high degree of “self-identifiability”, that is, the ability for individual pedestrians to recognize themselves reflected on the eHMI display. In the most successful design put forth in this research, we achieve
a certain level of self-identifiability by using a human figure as the graphic, and by programming this figure to move along the eHMI in tandem with the intended pedestrian's movement across the crosswalk. We found that this design led to an increased sense of safety, comfort, and intelligence with respect to the autonomous vehicle.
Future extensions of this experimental design should increase the complexity of the scenario by introducing moving vehicles, in order to raise the stakes of the negotiation, and by increasing the number of pedestrians present, in order to assess the scalability of proposed solutions. In future iterations of the designs presented herein, we take inspiration from the feedback of some of our participants. There is clear potential to further increase the self-identifiability aspect of the designs by reflecting features such as a pedestrian's state (e.g., standing vs. walking), as well as more specific attributes of each pedestrian (e.g., gender, adult vs. child, ambulatory vs. wheelchair-using). These suggestions have the potential to address some of the concerns raised in our discussion, as they further decrease ambiguity for each individual pedestrian present in the vicinity of an AV.
Overall, we conclude that the use of eHMIs for assisting in pedestrian-AV social negotiation is a rich field of inquiry, with the potential to significantly improve the public experience of integrating AVs into pedestrian-dense societies.
ACKNOWLEDGMENTS
This work is supported by leaders in the AV industry. We'd also like to thank our friend Karen Zhang for getting this project off the ground, and Malte Jung for providing invaluable feedback.
REFERENCES
[1] Michael P. Clamann, Miles C. Aubert, and Mary L. Cummings. 2017. Evaluation of Vehicle-to-Pedestrian Communication Displays for Autonomous Vehicles.
[2] Mark Colley, Jan Henry Belz, and Enrico Rukzio. 2021. Investigating the Effects of Feedback Communication of Autonomous Vehicles. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Leeds, United Kingdom) (AutomotiveUI '21). Association for Computing Machinery, New York, NY, USA, 263–273. https://doi.org/10.1145/3409118.3475133
[3] Hatice Şahin, Kevin Daudrich, Heiko Müller, and Susanne CJ Boll. 2021. Signaling Yielding Intent with EHMIs: The Timing Determines an Efficient Crossing. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Leeds, United Kingdom) (AutomotiveUI '21 Adjunct). Association for Computing Machinery, New York, NY, USA, 5–9. https://doi.org/10.1145/3473682.3480253
[4] Koen de Clercq, Andre Dietrich, Juan Pablo Núñez Velasco, Joost de Winter, and Riender Happee. 2019. External Human-Machine Interfaces on Automated Vehicles: Effects on Pedestrian Crossing Decisions. Human Factors 61, 8 (2019), 1353–1370. https://doi.org/10.1177/0018720819836343 PMID: 30912985.
[5] Debargha Dey, Kai Holländer, Melanie Berger, Berry Eggen, Marieke Martens, Bastian Pfleging, and Jacques Terken. 2020. Distance-Dependent EHMIs for the Interaction Between Automated Vehicles and Pedestrians. In 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Virtual Event, DC, USA) (AutomotiveUI '20). Association for Computing Machinery, New York, NY, USA, 192–204. https://doi.org/10.1145/3409120.3410642
[6] Lex Fridman, Bruce Mehler, Lei Xia, Yangyang Yang, Laura Yvonne Facusse, and Bryan Reimer. 2017. To Walk or Not to Walk: Crowdsourced Assessment of External Vehicle-to-Pedestrian Displays. https://doi.org/10.48550/ARXIV.1707.02698
[7] Nicolas Guéguen, Sébastien Meineri, and Chloé Eyssartier. 2015. A pedestrian's stare and drivers' stopping behavior: A field experiment at the pedestrian crossing. Safety Science 75 (2015), 87–89. https://doi.org/10.1016/j.ssci.2015.01.018
[8] Kai Holländer, Ashley Colley, Christian Mai, Jonna Häkkilä, Florian Alt, and Bastian Pfleging. 2019. Investigating the Influence of External Car Displays on Pedestrians' Crossing Behavior in Virtual Reality. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (Taipei, Taiwan) (MobileHCI '19). Association for Computing Machinery, New York, NY, USA, Article 27, 11 pages. https://doi.org/10.1145/3338286.3340138
[9] Philip Joisten, Ziyu Liu, Nina Theobald, Andreas Webler, and Bettina Abendroth. 2021. Communication of Automated Vehicles and Pedestrian Groups: An Intercultural Study on Pedestrians' Street Crossing Decisions. In Proceedings of Mensch Und Computer 2021 (Ingolstadt, Germany) (MuC '21). Association for Computing Machinery, New York, NY, USA, 49–53. https://doi.org/10.1145/3473856.3474004
[10] Stefanie M. Faas, Johannes Kraus, Alexander Schoenhals, and Martin Baumann. 2021. Calibrating Pedestrians' Trust in Automated Vehicles: Does an Intent Display in an External HMI Support Trust Calibration and Safe Crossing Behavior?. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 157, 17 pages. https://doi.org/10.1145/3411764.3445738
[11] Karthik Mahadevan, Sowmya Somanath, and Ehud Sharlin. 2018. Can Interfaces Facilitate Communication in Autonomous Vehicle-Pedestrian Interaction?. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA) (HRI '18). Association for Computing Machinery, New York, NY, USA, 309–310. https://doi.org/10.1145/3173386.3176909
[12] Amir Rasouli, Iuliia Kotseruba, and John K. Tsotsos. 2017. Agreeing to Cross: How Drivers and Pedestrians Communicate. https://doi.org/10.48550/ARXIV.1702.03555
[13] Alexandria I. Rossi-Alvarez, Kevin Grove, Charlie Klauer, Melissa Miles, Andy Schaudt, and Zachary Doerzaph. 2022. Impact of Highly Automated Vehicle (L4/5 AV) External Communication on Other Road User Behavior. Tech Report. https://rosap.ntl.bts.gov/view/dot/64678
[14] Dimitrios Saredakis, Ancret Szpak, Brandon Birckhead, Hannah A. D. Keage, Albert Rizzo, and Tobias Loetscher. 2020. Factors Associated With Virtual Reality Sickness in Head-Mounted Displays: A Systematic Review and Meta-Analysis. Frontiers in Human Neuroscience 14 (2020). https://doi.org/10.3389/fnhum.2020.00096
[15] Anna Schieben, Marc Wilbrink, Carmen Kettwich, Ruth Madigan, Tyron Louw, and Natasha Merat. 2019. Designing the Interaction of Automated Vehicles with Other Traffic Participants: Design Considerations Based on Human Needs and Expectations. Cogn. Technol. Work 21, 1 (Feb 2019), 69–85. https://doi.org/10.1007/s10111-018-0521-z
[16] Paul Schmitt, Nicholas Britten, JiHyun Jeong, Amelia Coffey, Kevin Clark, Shweta Sunil Kothawade, Elena Corina Grigore, Adam Khaw, Christopher Konopka, Linh Pham, Kim Ryan, Christopher Schmitt, and Emilio Frazzoli. 2022. Can Cars Gesture? A Case for Expressive Behavior Within Autonomous Vehicle and Pedestrian Interactions. IEEE Robotics and Automation Letters 7, 2 (2022), 1416–1423. https://doi.org/10.1109/LRA.2021.3138161
[17] Wilbert Tabone, Yee Mun Lee, Natasha Merat, Riender Happee, and Joost de Winter. 2021. Towards Future Pedestrian-Vehicle Interactions: Introducing Theoretically-Supported AR Prototypes. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Leeds, United Kingdom) (AutomotiveUI '21). Association for Computing Machinery, New York, NY, USA, 209–218. https://doi.org/10.1145/3409118.3475149
[18] Marc Wilbrink, Manja Nuttelmann, and Michael Oehl. 2021. Scaling up Automated Vehicles' EHMI Communication Designs to Interactions with Multiple Pedestrians: Putting EHMIs to the Test. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Leeds, United Kingdom) (AutomotiveUI '21 Adjunct). Association for Computing Machinery, New York, NY, USA, 119–122. https://doi.org/10.1145/3473682.3480277
[19] Xuesong Zhang and Adalberto L. Simeone. 2022. Using the Think Aloud Protocol in an Immersive Virtual Reality Evaluation of a Virtual Twin. In Proceedings of the 2022 ACM Symposium on Spatial User Interaction (Online, CA, USA) (SUI '22). Association for Computing Machinery, New York, NY, USA, Article 13, 8 pages. https://doi.org/10.1145/3565970.3567706
Received 6 December 2022; accepted 11 January 2023