¡Vamos! Observations of Pedestrian
Interactions with Driverless Cars in Mexico
Rebecca Currano
Stanford University
Stanford, CA, USA
bcurrano@stanford.edu
So Yeon Park
Stanford University
Stanford, CA, USA
syjpark@stanford.edu
Lawrence Domingo
Stanford University
Stanford, CA, USA
ldomingo@stanford.edu
Jesus Garcia-Mancilla
ITAM
Mexico City, Mexico
research@jgmancilla.com
Pedro C. Santana-Mancilla
University of Colima
Colima, Mexico
psantana@ucol.mx
Victor M. Gonzalez
ITAM
Mexico City, Mexico
victor.gonzalez@itam.mx
Wendy Ju
Cornell Tech
New York City, NY, USA
wendyju@cornell.edu
ABSTRACT
How will pedestrians from different regions interact with an ap-
proaching autonomous vehicle? Understanding differences in
pedestrian culture and responses can help inform autonomous
cars how to behave appropriately in different regional contexts.
We conducted a field study comparing the behavioral response
of pedestrians between metropolitan Mexico City (N=113)
and Colima, a smaller coastal city (N=81). We hid a driver
in a car seat costume as a Wizard-of-Oz prototype to evoke
pedestrian interaction behavior at a crosswalk or street. Pedes-
trian interactions were coded for crossing decision, crossing
pathway, pacing, and observational behavior. Most distinctly,
pedestrians in Mexico City kept their pace and more often
crossed in front of the vehicle, while those in Colima stopped
in front of the car more often.
Author Keywords
Pedestrian interaction; Regional differences; Ghostdriver;
Mexico; Wizard of Oz.
CCS Concepts
Human-centered computing → Empirical studies in interaction design; Systems and tools for interaction design;
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than the
author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or
republish, to post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from permissions@acm.org.
AutomotiveUI '18, September 23-25, 2018, Toronto, Canada
© 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ISBN 978-1-4503-5946-7/18/09. . . $15.00
DOI: 10.1145/3239060.3241680
INTRODUCTION
In September 2016, Raffi Krikorian, who headed Uber’s re-
search center in Pittsburgh, told The Economist that if Uber
could master autonomous driving in Pittsburgh, Uber could
make it almost anywhere [7]. However, our experiences sug-
gest that people’s interactions with traffic vary widely, country
to country, even neighborhood to neighborhood. Contrary
to Krikorian’s assertion, we believe that the advent of au-
tonomous vehicles will necessitate a finer grained understand-
ing of differences in driving norms worldwide.
The importance of understanding local “road culture and
norms” cannot be overstated. There are early indications
that autonomous vehicles may cause minor accidents through
anomalous driving behavior, even if the autonomous vehicle
is following the letter of the law and not at fault. For example,
Google’s June 2015 accident report for its self-driving vehicle
fleet indicated that they suffered five minor accidents while
driving 200,000 miles, nearly ten times what the National
Highway Traffic Safety Administration reports as the national
average for “property only” fender benders [13]. While it is
important that fully autonomous cars always use the safest
known protocol for interacting with other entities, it is not
always clear, given the contingent nature of interaction and
differences in road culture, what the safest behavior is.
In this paper, we investigate the patterns of behavior in in-
teractions between pedestrians and driverless vehicles in two
locations in Mexico: Mexico City and Colima. We used the
Ghostdriver method developed by Rothenbücher et al. where a
hidden driver elicits interactions with pedestrians on the road,
to explore their responses to a seemingly driverless vehicle
[30]. We ran this study in a zone frequented by both cars and
pedestrians in two cities to look for differences in how pedes-
trians in a large versus a small city interact with autonomous
cars [25]. This study is the first to specifically look for variations between cities of different sizes in vehicle-pedestrian interactions with driverless vehicles.
Figure 1: A driver hidden in a car seat costume helps to explore how pedestrians will respond to future driverless cars.
RELATED WORK
Pedestrian Safety
Mortality rates due to vehicle accidents with pedestrians are
a problem worldwide. In the US, in 2012, 14% of all traffic
fatalities were pedestrian deaths [27]. A traffic study on pedes-
trian injuries in Mexico [14] estimated a crude mortality rate
of 7.14 people per 100,000 residents. Crude mortality rates in
this study were found to differ with region, but little is known
of the causes behind these differences. Pedestrian fatality rates
were similar in some highly urbanized and rural regions in
this study despite differences in traffic volume associated with
urban and rural areas.
Pedestrian vulnerability makes pedestrian safety paramount
for all roads and sidewalks. Guéguen found that establishing eye contact influenced drivers' willingness to stop for
pedestrians [11]. Driver-to-driver and pedestrian-to-driver ac-
knowledgement and communication, through hand gestures or
eye gazes, promotes road safety and safety rules compliance
[12, 19]. The U.S. Department of Transportation recommends
pedestrians “to make eye contact with drivers as drivers ap-
proach the pedestrian to ensure that the pedestrians are seen”
[1]. Autonomous vehicles, however, cannot perform hand
gestures or make eye contact with pedestrians, resulting in a
road interaction behavior script conflict.
Implicit Interaction at the Crosswalk
While pedestrian and vehicle interactions, even in partially
automated vehicles, are mediated through a human driver, the
transition to more fully automated driving systems means
that eventually cars will operate without a driver present.
Prior studies of implicit interaction patterns [17] of pedestrian-
vehicle interaction, such as [26, 29, 35] indicate that, in urban
interactions, movement in context is the central method of
communication for coordination among cars and pedestrians.
In [17], Ju asserts that the pattern observed among pedestrians
giving way to non-autonomous vehicles is the following: a)
pedestrian approaches intersection, b) car approaches intersec-
tion, c) person makes eye contact, d) driver makes eye contact,
e) driver indicates not giving way, f) pedestrian waits, g) driver
moves through crosswalk, h) pedestrian crosses. When the
driver gives way, the pattern is similar, but: e') driver indicates giving way, f') driver stops and waits, g') pedestrian crosses, h') driver moves through crosswalk. The absence of a driver in an autonomous car may cause a pattern divergence in steps c) and d), when the driver would communicate intent with a pedestrian.
It is also the case that eye contact is not always used, and is
sometimes not even possible, in interactions with manually
driven vehicles. Such is the case when glare prevents pedestri-
ans from seeing the driver through the windows, or in poorly
lit areas at night, when there is not enough light to see de-
tails of the driver’s or pedestrian’s face. In those cases, other
cues may be used to either communicate or evaluate whether
it is safe to cross in front of the vehicle. Dey et al. found
that eye contact and explicit communication are rare, and that
motion patterns of vehicles play a more significant role for
pedestrians for efficient traffic negotiations [6]. Risto et al.
found a wider variety of signals and patterns in their study of
how conventional vehicles interact with other road users. The
vehicle movement patterns of advancing, slowing early, and
stopping short were more commonly used by vehicles, while
pedestrians used forward motion, hand gesture, head position,
eye gaze, and body posture to signal their own actions [29].
More broadly, Risto, Ju, and others working in this space
all posit that these signals are contextual and cultural. Thus,
the need to investigate the cultural context of signaling with
seemingly autonomous cars and in a variety of environments is
clear. The study described in this paper is the first to investigate
this through the lens of location-related cultural differences.
Autonomous Car Behaviors
Although there is substantial controversy over when fully au-
tonomous vehicles will arrive [31], we can assume that com-
mercial self-driving vehicles will be on our roads within the
next few years [22]. WEpod in the Netherlands, and Uber
and Waymo in North America, already operate autonomous
shuttle buses and cars, respectively, on public roads, along
limited routes [4, 38]. Currently WEpods have a dedicated
operator in a control room to ensure safe operation, and Uber’s
and Waymo’s autonomous cars have vehicle operators in the
driver’s seat, available to intervene as needed.
The industry assumption seems to be that it will be possible
to use algorithmic decision-making models to decide vehicle
action in the face of observed traffic behavior [9, 23], but such
work does not account for the practice of signaling between
drivers and other road users. Some researchers have explored
ways in which vehicles can indicate that they are in automated
driving mode, or are about to yield [2, 20, 34], but their meth-
ods employ open-loop signals that seek to replace rather than
make use of existing contextual practice and patterns.
As cars become fully autonomous, and hence, driverless, es-
tablished practices like making eye contact or giving hand
signs will no longer be possible. Road users cannot observe
driver head movements, which could indicate that they have
been noticed. Without visible indicators acknowledging their
presence, pedestrians and cyclists may feel unsafe or unsure
of their safety while crossing in front of vehicles. How will
they respond in these situations? Will their behavior change
when crossing in front of a self-driving car compared to a man-
ually driven car? Färber [8] notes that these communications
are likely to be culturally specific; this suggests that examina-
tion of practices and communication patterns in a variety of
cultures and contexts is critical.
Field Experiments with Pedestrian-Driver Interaction
Several research fields, including social sciences, psychology,
and civil engineering, are concerned with questions about the
interaction and behavior of pedestrians and cyclists. These dis-
ciplines focus on different types of data. Social scientists and
psychologists look mainly at the interaction between human
drivers and pedestrians or cyclists, showing, for example, that
a driver’s gaze goes first to the face of a bicyclist and remains
there for longer periods than on other features like hand signs
[36]. Several studies confirm the positive effect of eye contact
on compliant behavior [11, 12, 19]. Transportation planning
and engineering are usually more concerned with the behavior
of pedestrians and cyclists within a given infrastructure [32].
Field research on pedestrian-vehicle interactions often utilizes
confederate pedestrians: researchers posing as pedestrians at
intersections and crosswalks. These researchers often have
the task of inciting pedestrian-automobile interactions. Guéguen, for example, studied the effects of staring, and found that confederate pedestrians who stared at car drivers elicited higher stopping rates [11]. Piff et al. had confederate pedestrians
enter crosswalks at strategic times when an automobile was
approaching to test relationships between vehicle class and
traffic law violation, and found that drivers of higher class
vehicles violated traffic laws more frequently than drivers of
lower class vehicles [28].
Confederate drivers are also used in field studies to gather data
on general pedestrian behavior. Llorca [24] had drivers record
pedestrian walking direction and showed that pedestrians ob-
served were three times more likely to walk in the direction
facing traffic rather than with traffic. Though less common,
this methodology is valuable for capturing pedestrian-vehicle
interactions at close proximity and from the perspective of the
car, as we did in the current study.
METHOD
Research Approach
This research was conducted as a generative study to investi-
gate regional differences in interactions between pedestrians
and cars that are perceived to be self-driving. Specifically, we
examine differences between interactions in a large metropoli-
tan city setting vs. a small city setting, both in Mexico.
We ran this study “in the field” in order to elicit responses of
pedestrians in the course of their daily activities, on public
roads, and at typical pedestrian crossing areas. Interactions
were captured through video recordings. The analyses in this
paper focus on observations of pedestrian behavior generated
through video analysis.
Study Design
This study was formulated as a breaching study intended to
illuminate norms in interactions between pedestrians and vehi-
cles at crosswalks and roads. Breaching experiments involve
the conscious exhibition of “unexpected” behavior/violation
of social norms, an observation of the types of social reactions
such behavioral violations engender, and an analysis of the
social structure that makes these reactions possible [10, 39].
The violation of the expectation of a driver behind the wheel of
the vehicle helps us to elicit direct feedback about what types
of communication and feedback are expected in driver-vehicle
interactions, and uncover implications of new aspects of a
design or technology [37].
Because there are regulatory and safety issues involved with us-
ing self-driving cars on public roads, we employ a Ghostdriver
protocol [30] to simulate a self-driving car for our experiment.
Our goal was to evoke the impression that the car is driving
autonomously and to deprive pedestrians and cyclists of any
chance to interact with a human in the car. This Wizard-of-Oz
protocol [3] allows us to anticipate how people will interact
with self-driving vehicles in advance of the technology being
ready for the road; indeed, it allows us to experiment with
interaction styles prior to implementation.
Video recordings from multiple perspectives (inside the car,
on top of the car, and across the street), captured participants’
actions, gestures, and expressions during the interactions.
Participants
In order to elicit naturalistic responses to our manipulation that
were appropriate to the regional context, we engaged partici-
pants who happened to be walking by the study location. At
both study locations, participants were members of the local
university community and others on the university campus–
primarily students, staff, faculty, and university employees, but
also visitors, local residents, and business employees from the
surrounding area. Participants were not compensated for par-
ticipating in the study. We recorded video of 113 pedestrians
in Mexico City (83 male, and 30 female) and 81 pedestrians
in Colima (46 male, and 35 female) who interacted with the
vehicle.
Apparatus and Materials
We simulated an on-road driverless autonomous vehicle by
outfitting a manually driven car with a driver in a car seat
costume. In addition, the vehicle was equipped with a faux
LiDAR on top, and decals on the front and sides, stating (in
Spanish) “Carro Autónomo, Universidad de Colima, ITAM,
Stanford University.” We also instrumented the vehicle with
HD GoPro cameras on the interior and exterior of the car to
capture interactions with pedestrians from different angles.
Two camcorders were set up on tripods across the street from
the interaction site. For the research vehicle, a VW Vento
Startline was used in Mexico City and a Volvo SUV XC60
was used in Colima. The equipment and setup were otherwise
the same between the two cities. Differences in car models are
addressed in the Limitations section of this paper.
The use of a car seat costume disguise for the driver (Figure
1) has been employed in previous research [30] to simulate
driverless vehicles. We modified and refined Rothenbücher’s
design to create the current costume specifically for the con-
straints and setting of this study. This was originally inspired
by an invisible driver prank published on YouTube [15]. In
this setup, we refer to the driver as “the wizard” since he con-
trolled the vehicle from “behind the curtain,” so to speak, as a
hidden driver.
Locations
We ran this study in November 2016 on the campuses of In-
stituto Tecnológico Autónomo de México (ITAM), in Mexico
City, and the University of Colima, in Colima, Mexico. Mex-
ico City is situated inland on a high plateau, and is the most
populous city in North America, with a population of 8,918,653 in 2015 [42]. Its greater metropolitan area is the largest in the Northern Hemisphere, with over 20 million residents in 2014 [40]. In contrast, Colima is a smaller city on the west coast of Mexico, with a population of 137,383 (Municipality of Colima: 150,673) in 2010 [41]. Mexico City is an
important urban financial center and the capital of the country,
while Colima is capital of the smallest state in Mexico, and a
popular beach destination, with less than 2% of the population of
Mexico City.
While there are undoubtedly numerous other differences, pop-
ulation and geographical situation are among the most notable
differences making these two cities good candidates for explor-
ing potential variation in pedestrian responses to automated
vehicles in different types of cities.
In both cities, there are designated areas for pedestrians to
cross traffic, and jaywalking is prohibited. Additionally, at
entrances to parking lots, it is up to pedestrians and vehicles
to coordinate, and it is the norm for pedestrians to be given
preference. Mexico City has noticeably higher traffic and
more aggressive driving behavior, and pedestrians tend to be
more cautious when crossing roads and while approaching
entrances/exits of parking lots.
In each location, we selected a parking lot on the university
campus with an entrance frequently crossed by pedestrian traf-
fic, and a nearby route allowing the driver to easily return after
leaving the parking lot. The study was conducted from about
11am-1pm and 3pm-5pm at both locations. Urban settings
were selected as most pedestrian accidents occur in urban ar-
eas, and a crosswalk was selected since road crossing is the
most frequent event in pedestrian accidents [1].
The first location (Figure 2a) was a parking lot at the main
entrance on the ITAM campus. The entrance of the parking
lot crosses a sidewalk frequented by pedestrians coming and
going between the University and a separate, adjacent parking
lot, or between the University and nearby restaurants. The
entrance to the parking lot used in the study was flanked by a
high wall on both sides, and attendants at the entrance helped
to direct vehicle traffic at times, as needed. Pedestrian traffic
is high especially at the start and end of the work day, and
during the lunch hour.
The second location (Figure 2b) was a parking lot at the nurs-
ing department on the University of Colima campus. The
entrance to the parking lot is flanked by a high fence on both
sides, which both drivers and pedestrians can see through, and
is controlled by a manually operated gate. The parking atten-
dant opens the gate whenever a vehicle approaches to enter or
exit the parking lot. Additionally, the return route in Colima
allowed us to capture interactions with pedestrians crossing
the street near the parking lot as well as those crossing the
sidewalk at the parking lot entrance.
Neither parking lot was controlled by traffic lights, and in
both cases, parking lot attendants directed vehicles, allowing
vehicles to enter or exit, but rarely mediated between vehicles
and pedestrians. No pedestrian decisions mediated by parking
attendants, other vehicles, or other agents were included in our
analysis.
A fixed vehicle route was chosen at each location such that
the vehicle could access the campus parking lots repeatedly
with ease to prompt pedestrian interactions. The majority of
the interactions, and consequently the interviews, occurred in
the areas marked by blue circles or ellipses (Figure 3).
Procedure
To prompt interactions, the wizard drove the car on a fixed
course in and out of a campus parking lot. Whenever the
car approached a pedestrian at a crossing area (either at the
parking lot entrance or on the street), the car would slow, stop,
and wait, allowing the pedestrian to decide whether to cross.
To coordinate the field experiment so that the vehicle was more likely to arrive at the intersection just as people did, the driver, coordinator, and study interviewers synchronized their actions using
walkie-talkies. This communication channel also allowed the
driver to request assistance, for example, to discourage curious
onlookers from approaching close enough to detect the hidden
driver.
We trained the wizard to drive in a manner suggestive of how
an automated system might drive: slowly and cautiously, in a
consistent driving style and steering primarily from the bottom
of the steering wheel to draw less attention to his hands. In
all other respects, the wizard drove as he normally would, and
had full control of the car (steering, braking, navigating) and
full use of his sensory capabilities (sight, hearing, etc.). The
wizard driver used the turn signal to indicate upcoming turn
actions to pedestrians and nearby vehicles.
In order to facilitate interaction timing, researchers on the
street radioed the wizard as pedestrians were approaching.
Even so, timing was a challenge, due to the presence of
other vehicles and officials directing traffic or controlling en-
trance/exit from the parking lot. So we opted to obtain more
interactions by increasing the frequency of entering and exiting the parking lot, in addition to timing arrivals.
(a) Mexico City (b) Colima
Figure 2: Images of the outfitted Ghostdriver vehicle at study intersections.
(a) Mexico City (b) Colima
Figure 3: Vehicle study routes at both locations. The blue regions indicate high traffic and interaction regions.
Data
Video recordings from multiple perspectives (inside the car,
on top of the car, and across the street), captured participants’
actions, gestures, and expressions during the interactions. In-
teractions captured in these videos were coded to understand how pedestrians and the vehicle interacted [16, 25].
Following each interaction, we also conducted short interviews
with participants to ask what they noticed and believed about
the car, how they decided whether to cross, and to gather basic
demographic information. Interviews were conducted only
with those over 18 years of age and who consented to partici-
pate in the study. We audio recorded interviews with consent
of the participants. As the qualitative data from the interviews
is substantial, additional analysis from those interviews will
be presented in a future research publication.
ANALYSIS
Classification
For the purpose of analysis, interactions were defined as situa-
tions where the car and pedestrian met at a crossing and the
pedestrian had to decide whether to cross in front of the car.
In all interactions, the car approached the crossing area, then
slowed and stopped to allow the pedestrian to cross.
If the car and pedestrian did not meet within a sufficiently
close vicinity to require a decision, we did not code it as an
interaction. For example, if the pedestrian crossed before the
vehicle was near the crossing area, or if the vehicle passed
the intersection before the pedestrian arrived near the crossing
area, it was not counted as an interaction. Likewise, no interac-
tion was counted if the vehicle was nearby, but the pedestrian’s
decision to cross was determined by another, closer vehicle,
or by another agent such as a parking attendant. Some by-
standers who encountered the vehicle at a close distance were
also interviewed even if they had not explicitly interacted with
the vehicle.
Groups walking together in our study were treated as a sin-
gle unit for the purpose of interviewing, since they tended to
speak to each other and often reacted as a group to the presence
and actions of the vehicle, before and during the interaction.
Vinkhuyzen et al. noted that groups of pedestrians often func-
tion as a whole unit and recommend that autonomous vehicle algorithms treat groups as a single cohort [35]. However, we
observed that some individuals in groups exhibited different
behaviors in response to the autonomous vehicle (e.g., one
crossed in front of the car and the other waited for the car
to pass before crossing). Therefore, we coded video of the
interactions for each individual separately.
Video Interaction Analysis
We coded the video data to characterize recognizable features
and interaction patterns [16, 26]. Three coders independently
coded the data, and the final codes for each of the interactions
were determined through majority consensus when two or
more coders were aligned. From the 142 total interactions,
only two interactions did not yield agreement from the three
coders. These cases were resolved by the three coders through
consensus coding under the guidance of the lead researcher.
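For illustration, the majority-consensus rule described above can be expressed as a small helper. This is a hypothetical sketch; the function name and label strings are ours, not part of the study's coding materials:

```python
from collections import Counter

def resolve_codes(coder_labels):
    """Return the majority label among three coders, or None if all three disagree.

    In this study, the two interactions without a majority were resolved through
    consensus discussion guided by the lead researcher.
    """
    label, count = Counter(coder_labels).most_common(1)[0]
    return label if count >= 2 else None

# Hypothetical usage for one interaction's crossing-decision code:
resolve_codes(["crossed_in_front", "crossed_in_front", "waited"])   # -> "crossed_in_front"
resolve_codes(["crossed_in_front", "crossed_behind", "waited"])     # -> None (flag for discussion)
```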
Crossing Decision
Our first category of analysis was the decision to cross: when
reaching the point of interaction, did the pedestrian decide to
cross in front of the car, to cross behind the car, or to wait until
the car had passed before crossing?
Walking Behavior
Next, we analyzed the walking behavior of pedestrians who
crossed in front of or behind the vehicle. Pedestrians either
came to a complete stop, briefly paused but without stopping
entirely, slowed down more gradually, or either maintained
or increased their speed. The duration of time a pedestrian
stopped for a car was recorded. Coders defined a “stop” when
pedestrians suspended their pace for one second or longer with
both feet planted on the ground. Coders defined a “pause” as
analogous to how a vehicle makes a “rolling stop,” meaning
that pedestrians decelerated their pace almost to a stop (but
without coming to a full stop), for less than one second, and
then resumed their pace. A pause conveys intent to continue
walking and to take right of way, whereas a full stop does not
convey any clear intent, and implies a delay before deciding
whether to take right of way.
Pathways
We also coded pathways by which pedestrians crossed the
car's path. The four path options were: to cross straight along the intended pathway; to cross on a curved path, providing
additional space in front of the car; to cross on an angled path
in front of the car; or to cross behind the car (Figure 4).
Glancing Behavior
We characterized participants’ glancing behavior, coding
whether they looked at the car before, during, and after cross-
ing. Where relevant, we added qualitative notes and descrip-
tions that expanded on or did not fit into the defined codes.
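Taken together, the coding scheme above can be summarized as one record per interactant. The sketch below is illustrative only; the type and field names are ours, not the coders' actual instrument:

```python
from dataclasses import dataclass
from enum import Enum

class CrossingDecision(Enum):
    CROSSED_IN_FRONT = "crossed in front of the car"
    CROSSED_BEHIND = "crossed behind the car"
    WAITED = "did not cross until the car had passed"

class WalkingBehavior(Enum):
    STOP = "pace suspended for >= 1 s with both feet planted"
    PAUSE = "rolling-stop-like near-stop for < 1 s"
    SLOW_DOWN = "gradual slowing"
    MAINTAIN_OR_INCREASE = "kept or increased pace"

class Pathway(Enum):
    STRAIGHT = "straight along the intended path"
    CURVED = "curved, giving the car extra space"
    ANGLED = "angled in front of the car"
    BEHIND = "behind the car"

@dataclass
class InteractionCode:
    decision: CrossingDecision
    walking: WalkingBehavior
    pathway: Pathway
    glanced_before: bool
    glanced_during: bool
    glanced_after: bool
    notes: str = ""  # free-form qualitative observations
```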
RESULTS
In total, we recorded video of 113 interactants in Mexico
City and 81 interactants in Colima. Our analysis revealed
considerable differences in the way pedestrians in Mexico City
and Colima responded to the seemingly autonomous vehicle, including changes in pace and path. Tables 1 through 4 compare the
walking and looking behavior of pedestrians who interacted
with the car in Mexico City and Colima.
Figure 4: Potential interactant crossing paths
A chi-square test revealed a significant relationship between
city and decision to cross in front or cross behind/wait to cross,
χ²(1, N=166) = 13.49, p=0.0002. A much larger proportion
of pedestrians in Colima, compared to those in Mexico City,
decided either to cross behind the car, or not to cross until the
car had passed. Conversely, a greater percentage of pedestrians
in Mexico City crossed in front of the vehicle than in Colima
(Table 1).
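The reported statistic can be recomputed from the counts in Table 1 (crossed in front: 88 vs. 46; crossed behind or waited: 9 vs. 23). The value of 13.49 is consistent with a 2x2 test using Yates' continuity correction, which is the scipy default for 2x2 tables; a minimal sketch, assuming that library:

```python
from scipy.stats import chi2_contingency

# Counts from Table 1. Rows: Mexico City, Colima.
# Columns: crossed in front, crossed behind or waited to cross.
table = [[88, 9],
         [46, 23]]

chi2, p, dof, expected = chi2_contingency(table)  # continuity correction applied for 2x2 tables
print(f"chi2({dof}, N={sum(map(sum, table))}) = {chi2:.2f}, p = {p:.4f}")
# -> chi2(1, N=166) = 13.49, p = 0.0002
```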
Likewise, for pedestrians who crossed in front, a chi-square
test revealed a significant relationship between city and cross-
ing pathways, χ²(2, N=134) = 7.54, p=0.023, with a greater
proportion of participants in Mexico City taking a curved path
than in Colima (Table 2). Post hoc analysis confirmed the significance of the relationship between curved pathway and locality (adjusted Pearson residual of 2.52).
With respect to walking behavior prior to crossing, a chi-square
test revealed a significant relationship between city and walk-
ing pace before crossing, χ²(3, N=133) = 22.36, p<.0001.
Pedestrians in Colima were more likely to stop or pause be-
fore crossing, while those in Mexico City were more likely
to maintain or increase their pace before crossing (Table 3).
Adjusted Pearson residual calculations reveal that the greatest statistical significance is between maintaining/increasing speed and the locality (adjusted residual value of 4.10), followed, to lesser extents, by stopping and pausing (adjusted residual values of 3.09 and 2.54, respectively). No significant difference was seen for
slowing down (adjusted residual value of 0.76).
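Both the omnibus statistic and the adjusted residuals quoted above follow from the counts in Table 3, using the standard adjusted standardized residual (O - E) / sqrt(E * (1 - row share) * (1 - column share)). A minimal sketch, again assuming scipy and numpy:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Counts from Table 3. Columns: Mexico City, Colima.
# Rows: stop, pause, slow down, maintain/increase speed.
table = np.array([[13, 16],
                  [13, 14],
                  [13,  4],
                  [52,  8]])

chi2, p, dof, expected = chi2_contingency(table)  # no continuity correction for tables larger than 2x2
print(f"chi2({dof}, N={table.sum()}) = {chi2:.2f}")  # -> chi2(3, N=133) = 22.36 (p < .0001)

n = table.sum()
row_share = table.sum(axis=1, keepdims=True) / n
col_share = table.sum(axis=0, keepdims=True) / n
adjusted = (table - expected) / np.sqrt(expected * (1 - row_share) * (1 - col_share))
print(np.round(adjusted, 2))
# Magnitudes: 4.10 (maintain/increase), 3.09 (stop), 2.54 (pause), 0.76 (slow down)
```

The same computation applied to the curved-path cell of Table 2 yields the post hoc residual of 2.52 reported above.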
We weren’t able to run a chi-square test on all glancing be-
haviors, as the data categories are not mutually exclusive.
Therefore, we began with a non-statistical comparison of per-
centages, and observed that glancing behavior was more fre-
quent overall for pedestrians in Colima than those in Mexico
City (Table 4). We then conducted a chi-square test for each of
the glancing scenarios separately, but we found no statistically
significant difference between cities for any of the different
glancing behaviors (p-values were all > 0.05).
Table 1: Crossing Decision
Mexico City Colima
Crossing Decision (N=97) (N=69)
Crossed in Front 88 (91%) 46 (67%)
Crossed Behind 6 (6%) 1 (1%)
Did not Cross
(Waited to Cross) 3 (3%) 22 (32%)
Table 2: Crossing Pathways
Crossing Pathway Mexico City Colima
In Front of Car (N=88) (N=46)
Straight 61 (69%) 34 (74%)
Curved 15 (17%) 1 (2%)
Angled 12 (14%) 11 (24%)
The discrepancies in the total N counts between Tables 1
through 4 are due to cases of incomplete data. For exam-
ple, we analyze N=29 in Table 4 for glancing behavior among
Colima pedestrians as opposed to N=45 (the total number of
Colima pedestrians who crossed either in front of or behind the
vehicle) because we compare glancing only in cases where we
have data before, during, and after the interactions. Cases of
incomplete data were due to factors such as the relative motion
between the vehicle and pedestrian, and visual obstruction by
external objects or other vehicles.
Participants’ perception of autonomy was coded using their
answers to several interview questions, since not all partici-
pants answered every question, and since this provided a more
refined and reliable understanding of their perception of the
car’s autonomy. The primary questions we used to clarify
their perceptions were: What did you observe about the car?,
Was there anything special about the car that you observed?,
How did you think the car was moving?, Did you think the
car was moving on its own?, and Could you tell there was
no driver in the car?. Comments made in response to other
questions sometimes shed additional light on their perception,
and in a few cases were used when answers to the primary
questions were not enough to clarify perceptions. Data for 12
participants interviewed in Mexico City and 1 in Colima were too incomplete to ascertain their perception of autonomy; these participants are not represented in Table 5.
A chi-square test revealed no significant relationship between
city and perception of autonomy,
χ²(3, N=129) = 6.20, p=0.10. This suggests that differences in behavior between the two locations are not attributable to differences in whether or not pedestrians perceived the car to be autonomous.
Table 3: Walking Behavior Prior to Vehicle Interaction
Mexico City Colima
Walking Behavior (N=91) (N=42)
Stop 13 (14%) 16 (38%)
Pause 13 (14%) 14 (33%)
Slow Down 13 (14%) 4 (10%)
Maintain/Increase Speed 52 (57%) 8 (19%)
Table 4: Glancing Behavior (categories non-exclusive)
Mexico City Colima
Glanced at Car (N=82) (N=29)
Before Crossing 69 (84%) 29 (100%)
During Crossing 37 (45%) 17 (59%)
After Crossing 27 (33%) 16 (55%)
Table 5: Perception of Autonomy
Mexico City Colima
Perceived the Vehicle as... (N=59) (N=70)
Autonomous 24 (41%) 28 (40%)
Remotely Controlled 7 (12%) 18 (26%)
Undecided or
Mixed Impression 5 (8%) 8 (11%)
Normal (Manually Driven) 23 (39%) 16 (23%)
Additional Observations
In addition, some unanticipated patterns of behavior emerged
during the coding process. These occasional phenomena were
interesting, and we discuss them here.
Group versus Individual Behavior
We noticed interesting group behavior among participants.
When one pedestrian or group had started crossing, others
tended to cross without hesitation, more so than when they
were the first to cross. Furthermore, pairs and groups of pedes-
trians tended to mirror each other’s behavior, and often func-
tioned as a single interaction unit. In these cases, we coded
them as single interactions.
Curiosity and Playfulness
We found that some individuals in Mexico City exhibited
playfulness with the autonomous vehicle. Some participants
and bystanders seemed to test the car by intentionally walking
in front of it or nudging friends toward the car.
Grooming
We observed that pedestrians in Mexico City often exhibited
grooming behaviors before crossing the street, such as adjusting their clothing or preening their hair.
DISCUSSION
Insights on Pedestrian Behavior
In conducting this exploratory study, we aimed to observe
and discover patterns and behaviors in pedestrians’ responses
to autonomous vehicles, including crossing decisions, pace,
path, glances, and gestures. These observations are necessary
to inform more directed studies in the future, which can ex-
amine and analyze specific behaviors in greater depth. The
exploratory approach allowed us to observe emergent behav-
iors without prematurely ruling out outcome variables. At
the same time, because we did not know in advance which
variables would emerge as interesting, we were not able to
ensure that we could capture all relevant data completely.
It is interesting to note the differences in walking behavior
between participants in Mexico City and Colima. By far,
the most common walking behavior in Mexico City prior to
interacting with the vehicle was to maintain or increase pace,
while relatively few people stopped (Table 3). The opposite
was true in Colima. This may be due to a generally more
hurried pace of life in Mexico City, and a more relaxed pace in
Colima, and may reflect general regional differences between
large metropolitan and small coastal cities. Alternatively, it
may be more specifically related to large differences in traffic
volume. Mexico City experiences extremely heavy traffic on a
day-to-day basis, and according to the TomTom Traffic Index,
ranks #1 in the world for traffic congestion [33], while Colima
traffic is comparatively mild, which could lead to differences
in risk-taking behavior by pedestrians.
Similar reasons may underlie the greater likelihood of Mexico City pedestrians crossing in front of the vehicle during an interaction, as well as the relatively greater likelihood of Colima pedestrians waiting to cross until the vehicle had passed.
The only significant difference in crossing pathway between
cities was that more participants in Mexico City took a curved path than in Colima. This may be because, in both cities, the parking lot entrance was gated and relatively narrow while the street was open and wide, allowing more space for different walking pathways, and because pedestrians in Colima interacted with the vehicle more often while crossing the street than in Mexico City. The difference could also be influenced
by pedestrian destination. We are not able to ascertain the
specific reasons from the data we collected, or to draw more
meaningful insights from differences in paths taken. However,
since we did observe a significant difference, more thorough
investigation of factors that influence pathways should be
undertaken in future studies.
Glancing behavior seemed to vary between Mexico City and
Colima, with more frequent glancing overall in Colima; however, the differences were not statistically significant.
We also observed visible differences in other aspects of in-
teractions. In Mexico City, more pedestrians tended to travel
in groups and to follow leading individuals or groups already
crossing. They also appeared to be more comfortable crossing
in front of the autonomous vehicle, exhibited more curiosity and playful behavior in testing the car's capabilities, and exhibited more grooming behavior than pedestrians in Colima.
Limitations
We acknowledge that there are some limitations of this study.
First, although both cars were outfitted with the same apparatus indicating autonomy (GoPro cameras, a faux LiDAR, and decals), due to logistical complications we were not able to use the same type of vehicle in both locations. Differences in make, size, and color can all influence pedestrians' reactions
to the car [5, 43]. Prior studies have found that people often
anthropomorphize the front view of cars into faces and then
attribute personality characteristics to them, such as maturity,
gender, or aggressiveness [18, 43, 44, 21]. While these studies
were conducted in laboratory settings, it is certainly possible
that similar inferences and perceptions might happen during in-
person interactions with vehicles on the road. We acknowledge
that such phenomena could influence pedestrians’ responses
to the vehicles used in our study, and we understand the need
to standardize the type of the vehicle in future road studies.
Second, the crossing areas where we observed the interactions
differed in several ways: in Mexico City pedestrian traffic
and interactions were much more confined to the sidewalk
crossing the parking lot entrance, while in Colima, pedestrian traffic was more broadly spread out and interactions with the
vehicle happened both on the sidewalk crossing the parking
lot entrance and in crossing areas on the street adjacent to
the parking lot. Traffic along the streets in Colima was both
light and slow during the times we observed, regulated by a
series of speed bumps, and the street had a parking lane next
to the sidewalk. Although the car exhibited the same behav-
ior (slowing, stopping, and waiting for pedestrians) in both
street-crossing and entrance-crossing interactions, even slow
thru-traffic on the street can affect safe walking paths differ-
ently than traffic into and out of parking lots. On the other
hand, in Mexico City the parking lot entrance was wider and
less regulated than that in Colima, and the parking lot gate was
more visually obstructive than that in Colima. These differ-
ences may add layers of complexity to pedestrian navigation
in both cities, which can make some parts of the study (such
as crossing pathways) difficult to meaningfully compare. We
chose the specific parking lot in each location, understanding
these differences, with safety foremost in mind, as the best
options available for the driver to navigate the full route safely
while wearing the seat costume.
Third, there was also a difference in weather conditions. The
weather in Mexico City was cloudy with scattered showers,
and the weather in Colima was a mix of clouds and sun. Weather can also affect pedestrians' behavior, both
in terms of their perception of the car and the time they are
willing to spend interacting with the car.
Fourth, we conducted the study with only the seemingly autonomous vehicle. A baseline study would allow us to isolate which
aspects of the behavior are consequences of the autonomy of
the vehicle they were interacting with and whether parallel
differences in behavior between the two cities are also seen
in interactions with non-autonomous vehicles. If behavioral
differences are proven to be parallel, this would reinforce the
importance of designing for regional differences in the devel-
opment of autonomous vehicles. If not, this could highlight
the greater impact of culture on pedestrian interactions with
autonomous vs. non-autonomous vehicles. While time con-
straints precluded our ability to conduct the baseline test, we
recognize the resulting limitations and the importance of including a baseline condition in future studies.
It is important to note that Mexico City and Colima, like all
cities, have a plethora of unique characteristics and should not
be assumed to be representative of other cities simply on the
basis of similar sizes or geographical location. It would be an
overstatement to claim that behavioral differences observed
between these cities necessarily generalize to represent those
between other similarly paired cities, in Mexico or elsewhere.
Indeed, this is the first study of its kind, and can serve as
an indicator of possible broader trends, and as such, should
encourage further studies in a variety of regions and cultures.
Design Implications
Our study showed that pedestrians in different types of cities
exhibit different behaviors while interacting with autonomous
cars, which suggests that regional context is important in de-
signing autonomous car technology. For example, in cities
where vehicles typically do not stop for pedestrians except at
controlled intersections, it may adversely affect safety for au-
tonomous cars to yield at non-controlled intersections. This di-
vergence from typical local behavioral patterns could confuse
expectations between pedestrians and drivers, which could
lead to increased accident rates. Or, in locations where drivers
are more aggressive, autonomous vehicles programmed to op-
erate under strict rules of orderly turn-taking may not be able
to safely merge with traffic, and as a side effect may cause
significant backups.
Understanding regional differences in pedestrian behavior
(trends in pedestrian crossing trajectories, walking behavior
prior to crossing, different glancing patterns, and tendencies
to stop and groom in the presence of autonomous cars), as we
observed in this study, can enable vehicle automation to more
accurately and quickly predict the paths pedestrians may take,
and inform the vehicle when it should be more vigilant and
better prepared to yield to pedestrians.
In locations where pedestrians are generally more likely
to cross in front of vehicles, greater caution might be pro-
grammed into automated cars to anticipate the possibility of
pedestrians crossing in front of other cars ahead, and to scan
further ahead or allow greater stopping distance when traffic
may obscure sensing of the road further ahead.
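Purely as an illustration of how such regional expectations could be surfaced to an autonomous vehicle's planning logic, one might imagine a lookup of region-conditioned caution parameters. Every name and value below is a hypothetical placeholder, not a parameter derived from this study:

```python
from dataclasses import dataclass

@dataclass
class RegionalPedestrianPrior:
    # Illustrative fields only; values would need to be estimated from local observation.
    p_cross_in_front: float   # expected tendency to cross in front of a yielding vehicle
    p_stop_in_path: float     # expected tendency to stop (or groom) in the vehicle's path
    scan_ahead_m: float       # how far ahead to scan for pedestrians entering the roadway
    yield_margin_m: float     # extra stopping buffer to hold when yielding

REGION_PRIORS = {
    "large_metropolitan": RegionalPedestrianPrior(0.90, 0.10, scan_ahead_m=40.0, yield_margin_m=1.0),
    "small_city":         RegionalPedestrianPrior(0.65, 0.35, scan_ahead_m=30.0, yield_margin_m=2.0),
}

def caution_parameters(region_type: str) -> RegionalPedestrianPrior:
    """Look up region-specific expectations; fall back to the most conservative prior."""
    default = max(REGION_PRIORS.values(), key=lambda p: p.yield_margin_m)
    return REGION_PRIORS.get(region_type, default)
```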
Patterns like pedestrian grooming, which we saw more often in Mexico City, could derive from several different causes.
First, they may be related to an awareness of being recorded
by cameras on the car, and a subconscious tendency to groom
in the presence of cameras. Alternatively or additionally, they could be a reflexive response to an unusual vehicle appearance or behavior, or a socially acceptable way of taking a little extra time to gaze curiously at the car. In the near term,
while autonomous vehicles are still a novelty, it can be helpful
to know that pedestrians might stop in the path of the car and
groom at times when they otherwise would be expected to
continue walking. These behaviors may go away over time,
as autonomous cars become more ubiquitous, and that can be
factored into updates to the technology in the future.
Leveraging knowledge of city type and associated pedestrian
behaviors can guide how or where the car should focus its
attention to locate pedestrians, predict behavior, and avert
potential dangers.
Future Work
Through this investigation, we have observed regional dif-
ferences in pedestrian behavior during interactions with au-
tonomous cars in a large metropolitan city vs. a small coastal
city. However, we understand that there are multiple factors
that could be contributing to these differences, and therefore
hesitate to make any assertions that these differences are due to
location specifically. Our current paper examines what pedes-
trians do in interactions with autonomous cars, primarily
through analysis of video data. Do pedestrian attitudes about
autonomous cars also differ between these kinds of cities? Our
next step is to analyze additional qualitative data from inter-
views we conducted with participants, to better understand
their overall perceptions about the vehicle, and interactions
with it. In addition, we aim to better understand the pedestri-
ans’ motivations behind their behavioral patterns.
Another important consideration is whether the differences
in pedestrian behavior we observed toward an autonomous
vehicle are reflective of different pedestrian behaviors toward
non-autonomous vehicles, or vehicles in general. If so, could
we predict similar responses to autonomous vehicles in cul-
tures which exhibit similar behaviors toward manually driven
vehicles? To test this, we plan to analyze pedestrian interac-
tions with other (non-autonomous) cars captured in our video,
to understand whether parallel differences in behavior are
observed for non-autonomous cars in these cities.
Additionally, do behavior patterns in large and small city set-
tings in Mexico transfer to similar settings in other countries?
Or do other regional and cultural factors outweigh or obscure
influence related to city size? To verify the broader applicabil-
ity of these results and to explore other cultural effects, we are
repeating this study in other countries around the world.
CONCLUSION
From our study, we have observed significant differences in
pedestrian behavior between the two cities. Pedestrians in
Mexico City were more likely to cross in the vehicle’s path,
and much more likely to maintain their pace, whereas those
in Colima tended to stop in front of the car. With regards
to pathways, pedestrians in Mexico City were more likely to
curve their path when crossing in front of the car. Pedestrians
in Colima waited until the car passed before crossing, more
often than those in Mexico City. These differences highlight
the importance of considering regional context when designing
autonomous car behavior, in order to minimize accidents and
protect the safety of pedestrians interacting with the vehicle.
ACKNOWLEDGEMENTS
This study was funded with support from Asociación Mexi-
cana de Cultura A.C. in Mexico City and by the Center for
Design Research Affiliates program at Stanford. We thank
ITAM graduate students Hugo Hernández, Eduardo Martinez,
and Farid Fajardo, and University of Colima graduate students
Monserrat Urzúa and Ignacio Ruíz for their help in running the
study in Mexico City. We thank the volunteers who assisted
in running the experiment in Colima, especially: University of
Colima Professors Sanely Gaytán-Lugo, Miguel Rodríguez,
Juan Contreras and Antonio Guerrero, University of Colima
Graduate students Monserrat Urzúa, Ignacio Ruíz and Telés-
foro González, and students of the University of Colima 2016
Human Computer Interaction undergraduate course. In addi-
tion, we are grateful to David Miller and David Sirkin for help
with analysis and editing. This work was conducted under
Stanford IRB Protocol #32896.
REFERENCES
1. National Highway Traffic Safety Administration. 2014.
Traffic Safety Facts: Pedestrians, DOT HS 811 888.
2. Michael Clamann, Miles Aubert, and Mary L Cummings.
2017. Evaluation of vehicle-to-pedestrian communication
displays for autonomous vehicles. Technical Report.
3. Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. 1993.
Wizard of Oz studies—why and how. Knowledge-based
systems 6, 4 (1993), 258–266.
4. Alex Davies. 2017. Self-driving cars flock to Arizona,
land of good weather and no rules. (2017). https://www.wired.com/story/mobileye-self-driving-cars-arizona/
5. Graham Davies and Darshana Patel. 2005. The influence
of car and driver stereotypes on attributions of vehicle
speed, position on the road and culpability in a road
accident scenario. 10, 1 (2005), 293–312.
6. D. Dey and J. Terken. 2017. Pedestrian Interaction with
Vehicles: Roles of Explicit and Implicit Communication. ACM, 109–113.
7. The Economist. 2016. Pitt Stop. (September 2016). https://www.economist.com/news/business/21707263-uber-launches-its-first-self-driving-cars-pitt-stop
8. Berthold Färber. 2016. Communication and
communication problems between autonomous vehicles
and human drivers. In Autonomous Driving. Springer,
125–144.
9. Uwe Franke, Dariu Gavrila, Steffen Gorzig, Frank
Lindner, F Puetzold, and Christian Wohler. 1998.
Autonomous driving goes downtown. IEEE Intelligent
Systems and Their Applications 13, 6 (1998), 40–48.
10. Harold Garfinkel. 1967. Studies in Ethnomethodology.
Prentice-Hall, Inc.
11. Nicolas Guéguen, Sébastien Meineri, and Chloé
Eyssartier. 2015. A pedestrian’s stare and drivers’
stopping behavior: A field experiment at the pedestrian
crossing. Safety science 75 (2015), 87–89.
12. Carolynn C Hamlet, Saul Axelrod, and Steven
Kuerschner. 1984. Eye contact as an antecedent to
compliant behavior. Journal of Applied Behavior
Analysis 17, 4 (1984), 553–557.
13. Mark Harris. 2017. Google’s self-driving cars are
accident-prone – but it may not be their fault. (Jul 2017).
http://gu.com/p/4a5vy/sbl
14. M Hijar, E Vázquez-Vela, and C Arreola-Risa. 2003.
Pedestrian traffic injuries in Mexico: a country update.
Injury control and safety promotion 10, 1 (2003), 37–43.
15. Rahat Hossain. 2013. Drive Thru Invisible Driver Prank.
(2013). https://www.youtube.com/watch?v=xVrJ8DxECbg
16. Brigitte Jordan and Austin Henderson. 1995. Interaction
analysis: Foundations and practice. The journal of the
learning sciences 4, 1 (1995), 39–103.
17. Wendy Ju. 2015. The design of implicit interactions.
Synthesis Lectures on Human-Centered Informatics 8, 2
(2015), 1–93.
18. Wilhelm Klatt, Alvin Chesham, and Janek Lobmaier.
2016. Putting up a big front: Car design and size affect
road-crossing behaviour. 11, 7 (2016).
19. CL Kleinke. 1977. Compliance to requests made by gazing and touching. Journal of Experimental Social Psychology 13 (1977), 218–223.
20. T Lagstrom and Victor Malmsten Lundgren. 2015.
AVIP-Autonomous vehicles interaction with pedestrians.
Master of Science Thesis, Chalmers University of
Technology (2015).
21. J.R. Landwehr and A.L. McGill. 2011. It's Got the Look: The Effect of Friendly and Aggressive "Facial" Expressions on Product Liking and Sales. 75, 3 (2011),
132–146.
22. Adeel Lari, Frank Douma, and Ify Onyiah. 2015.
Self-Driving Vehicles and Policy Implications: Current
Status of Autonomous Vehicle Development and
Minnesota Policy Implications. Minnesota Journal of
Law, Science & Technology 16, 2 (2015), 735–769.
23. Jesse Levinson, Jake Askeland, Jan Becker, Jennifer
Dolson, David Held, Soeren Kammel, J Zico Kolter, Dirk
Langer, Oliver Pink, Vaughan Pratt, and others. 2011.
Towards fully autonomous driving: Systems and
algorithms. In Intelligent Vehicles Symposium (IV), 2011
IEEE. IEEE, 163–168.
24. David Fernández Llorca, Vicente Milanés, Ignacio Parra
Alonso, Miguel Gavilán, Iván García Daza, Joshué Pérez,
and Miguel Ángel Sotelo. 2011. Autonomous pedestrian
collision avoidance using a fuzzy steering controller.
IEEE Transactions on Intelligent Transportation Systems
12, 2 (2011), 390–401.
25. David R Millen. 2000. Rapid ethnography: time
deepening strategies for HCI field research. In
Proceedings of the 3rd conference on Designing
interactive systems: processes, practices, methods, and
techniques. ACM, 280–286.
26. Lars Müller, Malte Risto, and Colleen Emmenegger.
2016. The social behavior of autonomous vehicles. In
Proceedings of the 2016 ACM International Joint
Conference on Pervasive and Ubiquitous Computing:
Adjunct. ACM, 686–689.
27. H Naci, Dan Chisholm, and Timothy D Baker. 2009.
Distribution of road traffic deaths by road user group: a
global comparison. Injury prevention 15, 1 (2009), 55–59.
28. Paul K Piff, Daniel M Stancato, Stéphane Côté, Rodolfo
Mendoza-Denton, and Dacher Keltner. 2012. Higher
social class predicts increased unethical behavior.
Proceedings of the National Academy of Sciences 109, 11
(2012), 4086–4091.
29. Malte Risto, Colleen Emmenegger, Erik Vinkhuyzen,
Melissa Cefkin, and Jim Hollan. 2017. Human-Vehicle
Interfaces: The Power of Vehicle Movement Gestures in
Human Road User Coordination. (2017).
30. Dirk Rothenbücher, Jamy Li, David Sirkin, Brian Mok,
and Wendy Ju. 2016. Ghost driver: A field study
investigating the interaction between pedestrians and
driverless vehicles. In Robot and Human Interactive
Communication (RO-MAN), 2016 25th IEEE
International Symposium on. IEEE, 795–802.
31. Steven E Shladover. 2016. The Truth about "Self-Driving" Cars. Scientific American 314, 6 (2016), 52–57.
32. Virginia P Sisiopiku and D Akin. 2003. Pedestrian
behaviors at and perceptions towards various pedestrian
facilities: an examination based on observation and
survey data. Transportation Research Part F: Traffic
Psychology and Behaviour 6, 4 (2003), 249–274.
33. tomtom.com. 201. TomTom Traffic Index, Measuring
Congestion Worldwide. (201).
https://www.tomtom.com/en_gb/trafficindex/list?
citySize=LARGE&continent=ALL&country=ALL
34. Christopher Paul Urmson, Ian James Mahon, Dmitri A
Dolgov, and Jiajun Zhu. 2015. Pedestrian notifications.
(Feb. 10 2015). US Patent 8,954,252.
35. Erik Vinkhuyzen and Melissa Cefkin. 2016. Developing
Socially Acceptable Autonomous Vehicles. 2016, 1
(2016), 522–534. http://www.epicpeople.org/
36. Ian Walker and Mark Brosnan. 2007. Drivers’ gaze
fixations during judgements about a bicyclist’s intentions.
Transportation research part F: traffic psychology and
behaviour 10, 2 (2007), 90–98.
37. Astrid Weiss, Regina Bernhaupt, Manfred Tscheligi, Dirk
Wollherr, Kolja Kuhnlenz, and Martin Buss. 2008. A
methodological variation for acceptance evaluation of
human-robot interaction in public places. In Robot and
Human Interactive Communication, 2008. RO-MAN 2008.
The 17th IEEE International Symposium on. IEEE,
713–718.
38. WEpods. 2017. The first autonomous vehicle on Dutch
public roads. (2017). http://wepods.com
39. Wikipedia contributors. 2018. Breaching experiment —
Wikipedia, The Free Encyclopedia.
https://en.wikipedia.org/w/index.php?title=Breaching_
experiment&oldid=832512318. (2018). [Online; accessed
7-May-2018].
40. Wikipedia.org. 2018a. Greater Mexico City. (2018).
https://en.wikipedia.org/wiki/Greater_Mexico_City
41. Wikipedia.org. 2018b. Colima City. (2018). https://en.wikipedia.org/wiki/Colima_City
42. Wikipedia.org. 2018c. Mexico City. (2018).
https://en.wikipedia.org/wiki/Mexico_City
43. Sonia Windhager, Fred Bookstein, Karl Grammer, Elisabeth Oberzaucher, Hasan Said, Dennis Slice, Truls Thorstensen, and Katrin Schaefer. 2012. “Cars have their own faces”: cross-cultural ratings of car shapes in biological (stereotypical) terms. Evolution and Human Behavior 33, 2 (2012), 109–120.
44. S. Windhager, D.E. Slice, K. Schaefer, E. Oberzaucher, T. Thorstensen, and K. Grammer. 2008. Face to Face: The Perception of Automotive Designs. Human Nature 19, 4 (2008), 331–346.