Human Factors of Automated Driving Systems: A Compendium of Lessons Learned
Francesco N. Biondi, Ph.D., Balakumar Balasingam, Ph.D.
Human Systems Lab, University of Windsor, Ontario, Canada | hslab.org
Corresponding author. francesco.biondi@uwindsor.ca
ABSTRACT
On-road deployment of partial automation and testing of vehicles with higher levels of automated driving systems have been ongoing for several years. Recent research partly confirms what we already knew about user interaction with automation in aviation and, interestingly, adds relevant information to our understanding of human operators' adoption of vehicle technology. In this study, we review key studies from the last quinquennium on driver interaction with partial and higher levels of automation, with the goal of providing a compendium for transportation professionals and legislators. In addition to providing a brief but necessary introduction to the Society of Automotive Engineers taxonomy, we address research findings and Human Factors safety takeaways for partial automation. Our review shows that driver underload, lacking mental models, and driver training are key issues that merit further Human Factors investigation. In the latter part of the compendium, we also discuss recent findings and policy considerations on higher levels of automated driving systems. These include developing more transparent and comprehensive ways of reporting incidents and system disengagements, establishing protocols that help minimize safety risks during transitions of control, and implementing validation methods that help mitigate the safety risks of automated systems.
1 Introduction
The Society of Automotive Engineers (SAE) defines six levels of Automated Driving Systems (ADS), from level 0, fully manual driving, to level 5, fully automated driving (Figure 1). The ADS gradually takes control of the vehicle's lateral and longitudinal operations during the transition from manual to level 2 (L2) driving, with the human driver still in charge of monitoring the functioning of the system and regaining manual control of the vehicle following unexpected system failures. Starting at level 3, the human driver is no longer required to monitor the system or the vehicle, but needs to take control whenever necessary (the human driver's presence in the driving task is optional or not required at levels 4 and 5, respectively). For a comprehensive Human Factors discussion of the benefits and limitations of the SAE taxonomy, see Biondi et al. (2019).
Figure 1: SAE levels of automation.
This compendium is a quick-start guide on the Human Factors of ADS, intended for transportation professionals and legislators who wish to familiarize themselves with state-of-the-art research findings and relevant policy takeaways on automated driving. Unlike the reviews by McDonald et al. (2019) and Zhang et al. (2019), which focus on transitions of control between the human and the automated driver, this compendium addresses the on-road and naturalistic Human Factors studies exploring the adoption of SAE level 2 and level 3-and-above vehicles. In reviewing the literature, and in keeping with the scope of the compendium, we focused on investigations that examined naturalistic or collision data, or measured
drivers' adoption of on-road ADS. The format of this compendium follows that of other similar contributions (Biever et al., 2019; Biondi et al., 2019). The literature discussed in this study (Table 1) is organized as follows:
- SAE level 2 research findings and safety takeaways
- SAE level 3-and-above research findings and policy considerations
Authors: Topic

Level 2 ADS
Biondi et al. (2017): Drivers' subjective responses to driving an on-road level 2 ADS.
Biondi et al. (2018): Drivers' behavioral and physiological responses to driving an on-road level 2 ADS.
Blanco et al. (2015): Human Factors evaluation of level 2 and level 3 automated driving concepts.
Gaspar and Carney (2019): Naturalistic study on drivers' interaction with an on-road level 2 ADS.
Dunn et al. (2019): Meta-analysis on the use of mobile technology during level 2 ADS driving.
NTSB (2017): Investigation of the collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck.
NTSB (2018): Investigation of the rear-end collision between a car operating with advanced driver assistance systems and a stationary fire truck.
NTSB (2020a): Investigation of the collision between a sport utility vehicle operating with partial driving automation and a crash attenuator.
NTSB (2020b): Highway accident brief on the collision between a car operating with partial driving automation and a truck-tractor semitrailer.
NTSB (2020c): Tesla crash investigation yielding nine NTSB safety recommendations.

Level 3-and-above ADS
Biever et al. (2019): Early lessons from level 5 ADS collisions.
Campbell et al. (2018): Human Factors design principles for level 2 and level 3 automated driving concepts.
California DMV (2020): California DMV SAE level 5 vehicle testing program dataset.
Dixit et al. (2016): Analysis of the California DMV SAE level 5 vehicle testing program dataset.
Favarò et al. (2019): Analysis of the California DMV SAE level 5 vehicle testing program dataset.
GHSA (2019): Results from the Governors Highway Safety Association automated vehicle safety expert panel.
NACTO (2016): National Association of City Transportation Officials policy statement on automated vehicles.
NTSB (2019): Investigation of the collision between a vehicle controlled by a developmental automated driving system and a pedestrian.
Teoh and Kidd (2017): Analysis of data from the Google-Waymo level 5 vehicle testing program.
Transport Canada (2019): Policy recommendations for testing highly automated vehicles in Canada.
USDOT (2018): Preparing for the Future of Transportation: Automated Vehicles 3.0.

Table 1. Relevant literature discussed in the compendium, sorted by SAE ADS level.
2 SAE level 2 vehicles: research findings and safety takeaways
This section reviews the current literature on the Human Factors of L2 systems. We focus on on-road and naturalistic studies only, as they provide the most reliable and valid evidence on real-world adoption of L2 systems; simulator and survey studies are therefore not reported here. Following the literature review, we discuss key Human Factors safety risks that may require the attention of road safety policy makers.
2.1 Research findings
One of the foremost Human Factors challenges associated with L2 systems is the driver's reduced ability to stay engaged in the driving and monitoring tasks, or to remain on the loop, when the ADS is in control of the vehicle. Strayer et al. (2020) suggest that when L2 is engaged, the driver incurs a gradual loss of situation awareness, defined as the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future (Capallera et al., 2019; Endsley & Kaber, 1999). This gradual but steady loss of situation awareness is the direct result of a combination of human factors, including humans' limited ability to sustain attention in vigilance tasks (Warm et al., 2008), a lacking understanding of L2 systems' capabilities and limitations (Sullivan et al., 2016), and the tendency to overtrust seemingly well-functioning automation (Parasuraman & Manzey, 2010; Koltai et al., 2014).
Preliminary studies (Biondi et al., 2017, 2018) explored drivers' use of L2 systems. In Biondi et al. (2017), participants drove a Honda vehicle equipped with the Sensing suite L2 system. Some participants reported that their control of the vehicle was reduced and that they started zoning out toward the end of the drive. In the follow-up exploratory study (Biondi et al., 2018), participants drove a Tesla Model S in manual and L2 modes. Relative to manual mode, drivers' responses to peripheral events (vibrations presented on the driver's shoulder) were slower, and their physiological arousal, indexed by average heart rate and heart rate variability, was lower during L2 driving. Both are possible symptoms of driver underload and reduced engagement in the driving task. Similar findings were observed by Gaspar and Carney (2019), wherein, relative to manual driving, having the L2 system engaged resulted in overall longer single glances and longer eyes-off-the-road time.
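For readers less familiar with these physiological indices, the sketch below shows how average heart rate and one common heart rate variability index, RMSSD, can be derived from inter-beat (RR) intervals. This is a minimal illustration with hypothetical values; Biondi et al. (2018) do not report their exact processing pipeline here.

```python
import statistics

def heart_metrics(rr_intervals_ms):
    """Mean heart rate (bpm) and RMSSD heart rate variability (ms)
    from a series of RR (inter-beat) intervals in milliseconds."""
    mean_hr_bpm = 60_000 / statistics.mean(rr_intervals_ms)
    # RMSSD: root mean square of successive RR-interval differences
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd_ms = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return mean_hr_bpm, rmssd_ms

# Hypothetical RR intervals sampled during an L2 drive
hr, hrv = heart_metrics([850, 860, 845, 870, 855, 865])
print(f"mean HR = {hr:.1f} bpm, RMSSD = {hrv:.1f} ms")
```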
A recent meta-analysis (Dunn et al., 2019) analyzed driving behavior and errors from two naturalistic studies wherein drivers operated vehicles equipped with L2 capabilities for an extended period of time. Adopting a procedure similar to that used for the Second Strategic Highway Research Program (SHRP 2), the authors found that speeding was more prevalent during L2 driving. In addition, relative to manual driving, driving with L2 systems engaged resulted in higher odds of engaging in a visual, manual, or visual-manual secondary task. Note, however, that the low occurrence of performance errors (~1.6%) in the dataset may limit the validity of the data.
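To make the odds comparison concrete, here is a minimal sketch of the odds-ratio arithmetic that underlies such findings. The epoch counts are hypothetical, and Dunn et al.'s actual modeling was more elaborate.

```python
def odds_ratio(events_a, non_events_a, events_b, non_events_b):
    """Odds ratio: odds of secondary-task engagement in condition A
    (L2 engaged) relative to condition B (manual driving)."""
    return (events_a / non_events_a) / (events_b / non_events_b)

# Hypothetical counts of driving epochs with/without a secondary task
or_l2 = odds_ratio(120, 880, 70, 930)   # L2 vs. manual
print(f"odds ratio = {or_l2:.2f}")      # > 1: higher odds under L2
```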
Additional evidence of the Human Factors safety risks of L2 driving comes from the National Transportation Safety Board (NTSB) investigations of collisions involving vehicles operated in L2 mode. In its reports on the first-ever fatal accident, in 2016, involving a Tesla Model S and a tractor trailer, and on the 2018 nonfatal collision involving a Tesla Model S and a fire truck, the NTSB determined that the driver's inattention due to overreliance on vehicle automation, together with the system's inefficient operational design, constituted the probable causes of the accidents (NTSB, 2017, 2018). Likewise, following the two fatal collisions of March 23, 2018, wherein a Tesla Model X in L2 mode crashed into a crash attenuator while the driver was distracted by a cell phone game application, and of March 1, 2019, wherein a Tesla Model 3 struck a tractor trailer while the L2 system was engaged, the NTSB determined that the driver's distraction, inattention, and overreliance on automation were the probable causes of these two accidents (NTSB, 2020a, 2020b). NTSB (2020c) also noted the insufficient federal oversight of L2 systems and the need for event data recording requirements for ADS. In addition, it listed nine safety recommendations, which included:
- Evaluation of Tesla "Autopilot"-equipped vehicles to determine whether the system's operating limitations, foreseeability of misuse, and ability to operate vehicles outside the intended operational design domain pose an unreasonable risk to safety.
- Collaborative development of standards for driver monitoring systems that minimize driver disengagement, prevent automation complacency, and account for foreseeable misuse of the automation.
- Development of a distracted-driving lock-out mechanism or application for portable electronic devices that automatically disables any driver-distracting functions while a vehicle is in motion.
The full list of recommendations is available at
https://www.ntsb.gov/news/press-releases/Pages/NR20200225.aspx
2.2 Human Factors safety takeaways
Based on state-of-the-art research, below we summarize key Human Factors safety risks and possible solutions that may require the attention of road safety policy makers.
Driver underload and disengagement. With L2 engaged, drivers may be lulled into a state of underload wherein their minds are gradually relieved of the workload of driving. The literature suggests that this is accompanied by greater trust toward the automated system, and that both result in reduced engagement in the tasks of driving and supervising the functioning of the automated system.
References: Biondi et al. (2019), Strayer et al. (2020), Biondi et al. (2017).
Driver distraction. Driver underload and overreliance on the ADS are expected to increase the risk of driver distraction, as well as engagement in visual, manual, or visual-manual secondary tasks. This would result in longer or more frequent glances away from the road and, in turn, reduced responsiveness to key safety events.
References: Gaspar and Carney (2019), Dunn et al. (2019), NTSB (2017, 2018, 2020a, 2020b).
Reduced responsiveness to transitions of control. Automation-induced distraction and, possibly, drowsiness are also expected to reduce the driver's responsiveness to automated-to-manual transitions of control, wherein the driver is required to take over control of the vehicle following L2 system disengagements resulting from built-in system limitations (e.g., inability to detect faded road markings) or unexpected system failures.
Reference: Biondi et al. (2019).
Lacking mental models and driver training. Under L2 control, the human driver may mistakenly believe that the L2 system is fully and permanently responsible for all driving operations. The literature suggests that this may result from numerous factors, including deceitful manufacturer advertising and confusing system naming. Altogether, these factors contribute to drivers' lacking or inaccurate understanding of system capabilities and limitations (also known as mental models) and, in turn, to improper use of these systems.
Reference: Sullivan et al. (2016).
Driver monitoring systems (DMS). DMS use vehicle data (e.g., steering input) and/or driver data (e.g., eye and face features extracted using cameras) to determine the state of the driver and infer their ability to operate the vehicle during or after transitions of control. Recent findings suggest that current DMS
may not be accurate in detecting the driver’s state, which in turn may lead to failures in detecting
driver distraction or drowsiness.
Reference: NTSB (2020c).
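As a rough illustration of the rule-based logic behind such systems, the toy sketch below flags a disengaged driver from gaze and hands-on-wheel signals. The threshold and signal names are hypothetical; production DMS combine camera-based eye/face features and steering inputs with far more sophisticated models.

```python
def driver_disengaged(gaze_off_road_s: float,
                      hands_on_wheel: bool,
                      eyes_off_threshold_s: float = 2.0) -> bool:
    """Toy DMS rule: flag the driver as disengaged when gaze has been
    off the forward roadway longer than a threshold while the hands
    are off the wheel. Threshold value is illustrative only."""
    return gaze_off_road_s > eyes_off_threshold_s and not hands_on_wheel

# Example: 3.5 s of off-road gaze with hands off the wheel -> alert
print(driver_disengaged(gaze_off_road_s=3.5, hands_on_wheel=False))
```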
L2 system evaluation. The NTSB recommends that an evaluation of Tesla Autopilot-equipped vehicles be performed to determine whether the system's operating limitations, foreseeability of misuse, and ability to operate vehicles outside the intended operational design domain pose an unreasonable risk to safety.
Reference: NTSB (2020c).
3 Research findings on SAE level 3-and-above vehicles
With SAE level 3 vehicles in the early days of commercialization, and L4/L5 adoption largely limited to testing, Human Factors research on level 3-and-above systems is sparse. Below we summarize preliminary studies analyzing data from testing programs involving level 3-and-above vehicles, as well as current Human Factors government recommendations.
3.1 Testing of SAE level 3-and-above vehicles
Favarò et al. (2019) analyzed publicly available data and accident reports from the California DMV's field testing program (California DMV, 2020), wherein fifty companies (OEMs, tier-1 suppliers, etc.) conducted L5 testing on public roads (more information is available at https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing). Preliminary findings indicated driver discomfort and lack of trust toward the ADS, experienced while the system was operational, as key Human Factors issues: in 20% of accident reports, the human safety driver took over manual control of the vehicle to limit safety consequences, and driver discomfort was the cause of manual disengagements in 26% of cases. Additional data showed that the time to take over manual control of the vehicle following system malfunctioning ranged from 0.83 to 1 second. However, Favarò et al. (2019) were unable to find the trust effect suggested by Dixit et al. (2016), who had concluded that the more miles driven by the ADS, the greater the safety driver's trust toward it and, in turn, the longer their take-over times.
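As an illustration of how such a trust effect can be probed, the sketch below correlates cumulative ADS mileage with take-over times across disengagement reports. The data points are hypothetical and the test is deliberately simple; the published analyses were considerably more elaborate.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical disengagement reports: cumulative miles vs. take-over time (s)
miles = [1_000, 5_000, 20_000, 60_000, 120_000]
takeover_s = [0.83, 0.90, 0.88, 0.95, 1.00]
print(f"r = {pearson_r(miles, takeover_s):.2f}")  # positive r: trust effect
```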
In their analysis of the data from the California DMV's program, Biever et al. (2019) observed an increase in the number of reported accidents over time, in part due to the increasing number of miles driven by test vehicles (Figure 2).
Figure 2. Number of reported ADS collisions (2015–2018). Source: Biever et al. (2019)
Collisions wherein the ADS vehicle was rear-ended accounted for 59% of all accidents. In most cases, it was suggested that the following driver failed to anticipate or misjudged the vehicle's behavior. It is also plausible that, because the system displayed an unusual, "non-human-like" behavior, it violated the following driver's expectations, resulting in a failed or delayed braking response. The authors also note that these systems may identify threats that following drivers deem nonthreatening, resulting in driving maneuvers that are difficult for the following vehicle's human driver to predict. Some collisions were caused by malfunctioning of the on-board sensing and decision-making systems, which, together with the notorious 2018 Tempe, AZ, fatal collision wherein a Volvo-Uber vehicle struck a pedestrian while the safety driver was distracted by her personal cell phone (NTSB, 2019), indicates critical limitations of ADS. Several collisions were also associated with the safety driver losing control of the vehicle following automated-to-manual transitions of control. The authors note several key methodological limitations of the California DMV testing program, wherein only custom vehicles were operated by experts at given times and locations, and accident reporting practices differed from those used in the reporting of collisions involving manually driven vehicles.
Similar analyses were conducted by Teoh and Kidd (2017), who reviewed crash data from the Google/Waymo test program. When comparing the crash rates of the level 3-and-above systems with those of comparable human drivers, no statistical differences were found between the two samples (despite this, the authors still concluded that these systems were safer than human drivers). However, similar to the findings of Biever et al. (2019), the rate of rear-end collisions wherein the level 3-and-above vehicle was struck appeared to be higher than in the sample where all vehicles were human-driven (7.11 vs. 2.65).
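The sketch below illustrates how two crash rates can be compared via a rate ratio with an approximate confidence interval. The counts and exposures are hypothetical, chosen only so the rates match the quoted 7.11 and 2.65 figures; the resulting interval is illustrative and not a re-analysis of Teoh and Kidd's data.

```python
import math

def rate_ratio_ci(k1, exp1, k2, exp2, z=1.96):
    """Crash-rate ratio with an approximate 95% Wald CI, treating the
    crash counts k1, k2 as Poisson and exp1, exp2 as exposure (e.g.,
    million vehicle-miles)."""
    rr = (k1 / exp1) / (k2 / exp2)
    se_log = math.sqrt(1 / k1 + 1 / k2)   # SE of the log rate ratio
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# Hypothetical counts/exposures reproducing rates of 7.11 vs. 2.65
rr, lo, hi = rate_ratio_ci(k1=8, exp1=1.125, k2=53, exp2=20.0)
print(f"rate ratio = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```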
3.2 Government recommendations
The US Department of Transportation (2018) developed a plan outlining 12 key areas of safety design to be addressed in the development of SAE level 3-and-above vehicles. These design areas include validation methods such as testing on a variety of closed roadways, test tracks, and simulations before testing on open roads. Level 3-and-above vehicles should also include fallback strategies: a plan for cases in which the automated features malfunction, which alerts the driver immediately and provides instructions to safely operate the vehicle. When designing safety features, manufacturers should ensure that the systems are easy to use and that the features and their intended purpose are clear to the operator (Campbell et al., 2018). The design of these systems should ultimately help increase awareness and knowledge of safety standards, and should not introduce new sources of possible human error, such as distractions or features that are difficult to use or understand (Campbell et al., 2018).
Transport Canada (2019) also issued guidelines for testing highly automated vehicles in Canada. The document lists responsibilities for federal, provincial, and municipal governments, as well as safety requirements for trial organizations. Human Factors recommendations for governing bodies include making sure that all test vehicles are easily identifiable as such by other road users; modifying signage and road markings to aid detection by the ADS; and developing public education programs on system safety issues. Transport Canada also requires trial organizations to provide the necessary training to test drivers, whose responsibilities include understanding the capabilities and limitations of the ADS and monitoring its functioning, and to report unplanned ADS disengagements and serious incidents.
3.3 Policy considerations for SAE level 3-and-above vehicles
Based on the literature reviewed thus far, below we summarize relevant Human Factors policy considerations for level 3-and-above vehicles.
Incident reporting and data recording. Organizations involved in vehicle testing should report details of unplanned ADS disengagements and accidents to the relevant transportation agencies. Within its testing program, the California DMV requires organizations to submit a report of any traffic collision involving an autonomous vehicle within 10 days of the collision, as well as an annual report summarizing disengagements of the technology during testing (California DMV, 2020). The Governors Highway Safety Association (2019) also notes the necessity for organizations and manufacturers to make vehicles' event data recorders and other relevant data available to law enforcement for crash investigation.
References: California DMV (2020), Transport Canada (2019), GHSA (2019).
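As a sketch of what a structured disengagement record might contain, the dataclass below is loosely inspired by the information in California DMV disengagement reports. The field names and types are hypothetical, not the DMV's actual reporting schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DisengagementRecord:
    """Illustrative record of an unplanned ADS disengagement.
    Field names are hypothetical, not the California DMV schema."""
    manufacturer: str
    vehicle_id: str
    occurred_on: date
    roadway_type: str     # e.g., "street", "highway", "rural road"
    initiated_by: str     # e.g., "ADS", "test driver"
    description: str      # facts and circumstances of the disengagement

record = DisengagementRecord(
    manufacturer="ExampleCo",
    vehicle_id="TEST-042",
    occurred_on=date(2019, 6, 1),
    roadway_type="street",
    initiated_by="test driver",
    description="Safety driver took over due to perception uncertainty.",
)
print(record)
```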
Driver training. Preliminary research indicates that the human driver should be aware of the ADS's capabilities and limitations, and should understand its operational design. Similar to what is discussed above for L2 systems, holding a correct mental model of the system is necessary for maintaining situation awareness, monitoring the system, and avoiding its misuse. Recent findings suggest driver training could take place at the point of sale, through real-world or driving-simulator sessions, or through coaching provided at the time of system use.
References: Sullivan et al. (2016), Transport Canada (2019), USDOT (2018), Campbell et al. (2018),
Blanco et al. (2015).
Human-machine interface (HMI) design. The system should inform the human driver about its status via an intelligible and easy-to-use interior HMI. Recent studies have shown that adaptive HMIs can drastically reduce drivers' response times during take-over maneuvers. The USDOT also encourages manufacturers to incorporate driver monitoring systems that ensure the driver's readiness to perform the driving task if necessary. European regulators have mandated the implementation of driver monitoring systems starting with 2022 vehicles. Interior HMIs should accommodate people with disabilities. Exterior HMIs should also be designed to compensate for the loss of verbal and nonverbal communication between road users at safety-critical locations like four-way stop intersections and pedestrian crosswalks. This will help road users identify level 3-and-above vehicles as such, and enable efficient communication and safe interaction between the vehicle and all road users.
References: Campbell et al. (2018), USDOT (2018), Transport Canada (2019), NACTO (2016).
Fallback strategies. In the USDOT's Vision for Safety 2.0, it is noted that organizations and manufacturers must have a documented process for transitioning to a minimal risk condition when a problem is detected or the system cannot operate safely. The system should be able to notify the human driver in ways that enable the driver to regain control of the vehicle, or allow the system to return to a minimal risk condition independently, without the human driver's intervention. Fallback strategies should take into account that human drivers may be inattentive, under the influence of alcohol or other substances, drowsy, or otherwise impaired. Fallback actions should be executed in ways that facilitate safe operation of the vehicle and minimize erratic driving behaviors.
Reference: USDOT (2018).
Validation methods. In the USDOT's Vision for Safety 2.0, it is suggested that, before implementation, organizations and manufacturers should develop validation methods to appropriately mitigate the safety risks of their systems. Simulation, test-track, and on-road testing should demonstrate the behavioral competencies of the systems, their performance during crash-avoidance situations, and the performance of fallback strategies.
Reference: USDOT (2018).
4 Conclusions
This review is intended as a compendium for transportation professionals and legislators, summarizing the state of the art in the Human Factors literature on level 2 and level 3-and-above systems. For level 2 systems, recent on-road and naturalistic studies confirm that drivers' lacking or incomplete mental models, accompanied by a potentially higher risk of distracted driving, may lead drivers to disengage from the primary task of driving and, in turn, become less responsive to transitions of control. For level 3-and-above systems, transportation stakeholders encourage manufacturers to develop ways to minimize safety risks during transitions of control and to facilitate the reporting of incidents and system disengagements. The USDOT also argues for the development of validation methods to help vet the safety of automated driving systems. While some of the issues (loss of situation awareness, increased distractibility, engagement in secondary tasks) are known and recognized in the broader literature on human interaction with automation, the studies reviewed here offer information relevant to the development of critical legislation. Also, although this compendium falls short of a systematic review of the current literature on automated driving systems, we believe that its quick-start-guide format will prove a useful tool for transportation professionals and legislators. We see the compendium as a working document that will need updating as new research findings and regulatory guidance become available.
Acknowledgments. This research was made possible by funding from the Social Sciences and Humanities Research Council of Canada and the Ontario Ministry of Transportation.
Keywords: Automated Driving Systems; Partial automation; SAE taxonomy; Human Factors; Automation.
REFERENCES
Biever, W., Angell, L., & Seaman, S. (2019). Automated Driving System Collisions: Early Lessons.
Human Factors. https://doi.org/10.1177/0018720819872034
Biondi, F., Alvarez, I., & Jeong, K. (2019). Human–System Cooperation in Automated Driving. International Journal of Human–Computer Interaction. https://doi.org/10.1080/10447318.2018.1561793
Biondi, F., Goethe, R., Cooper, J., & Strayer, D. (2017). Partial-autonomous Frenzy: Driving a Level-2
Vehicle on the Open Road. International Conference on Engineering Psychology and Cognitive
Ergonomics, 329–338. https://link.springer.com/chapter/10.1007/978-3-319-58475-1_25
Biondi, F. N., Lohani, M., Hopman, R., Mills, S., Cooper, J. M., & Strayer, D. L. (2018). 80 MPH and out-of-the-loop: Effects of real-world semi-automated driving on driver workload and arousal. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 1878–1882. https://doi.org/10.1177/1541931218621427
Biondi, F., Rossi, R., Gastaldi, M., Orsini, F., & Mulatti, C. (2020). Precision teaching to improve drivers' lane maintenance. Journal of Safety Research. https://doi.org/10.1016/j.jsr.2019.12.020
Blanco, M., Atwood, J., Vasquez, H. M., Trimble, T. E., Fitchett, V. L., Radlbeck, J., ... Morgan, J. F. (2015). Human Factors Evaluation of Level 2 and Level 3 Automated Driving Concepts. https://doi.org/10.13140/RG.2.1.1874.7361
Campbell, J. L., Brown, J. L., Graving, J. S., Richard, C. M., Lichty, M. G., Bacon, L. P., Morgan, J. F., & Sanquist, T. (2018). Human factors design principles for level 2 and level 3 automated driving concepts. National Highway Traffic Safety Administration, U.S. Department of Transportation. www.ntis.gov
California Department of Motor Vehicles (2020). Testing of Autonomous Vehicles with a Driver.
Retrieved from https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing
Capallera, M., Khaled, O. A., Barbé-Labarthe, P., Mugellini, E., & Angelini, L. (2019). Convey situation
awareness in conditionally automated driving with a haptic seat. Adjunct Proceedings - 11th
International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications,
AutomotiveUI 2019, 161–165. https://doi.org/10.1145/3349263.3351309
Dixit, V. V., Chand, S., & Nair, D. J. (2016). Autonomous vehicles: Disengagements, accidents and
reaction times. PLoS ONE, 11(12), 1–14. https://doi.org/10.1371/journal.pone.0168054
Dunn, N., Dingus, T., & Soccolich, S. (2019). Understanding the Impact of Technology: Do Advanced Driver Assistance and Semi-Automated Vehicle Systems Lead to Improper Driving Behavior? AAA Foundation for Traffic Safety. https://aaafoundation.org/understanding-the-impact-of-technology-do-advanced-driver-assistance-and-semi-automated-vehicle-systems-lead-to-improper-driving-behavior/
Endsley, M. R., & Kaber, D. B. (1999). Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42(3). https://doi.org/10.1080/001401399185595
European Union (2019). Europe on the Move. Retrieved from https://s3-prod-europe.autonews.com/2019-03/Car-safety-features-2019.pdf
Favarò, F. M., Eurich, S. O., & Rizvi, S. S. (2019). “Human” Problems in Semi-Autonomous Vehicles:
Understanding Drivers’ Reactions to Off-Nominal Scenarios. International Journal of Human-
Computer Interaction, 35(11), 956–971. https://doi.org/10.1080/10447318.2018.1561784
Gaspar, J., & Carney, C. (2019). The Effect of Partial Automation on Driver Attention: A Naturalistic Driving Study. Human Factors: The Journal of the Human Factors and Ergonomics Society. https://doi.org/10.1177/0018720819836310
GHSA. (2019). Automated Vehicle Safety Expert Panel: Engaging Drivers and Law Enforcement. Retrieved from https://www.ghsa.org/sites/default/files/2019-08/AV Safety White
IIHS (2016). Driver trust in five driver assistance technologies following real-world use in four production vehicles. https://doi.org/10.1080/15389588.2017.1297532
Koltai, K., Ho, N. T., Masequesmay, G., Niedober, D. J., Skoog, M., Johnson, W., Cacanindin, A., &
Lyons, J. B. (2014). An extended case study methodology for investigating influence of cultural,
organizational, and automation factors on human-automation trust. Conference on Human Factors in
Computing Systems - Proceedings, 885–888. https://doi.org/10.1145/2559206.2559974
McDonald, A. D., Alambeigi, H., Engström, J., Markkula, G., Vogelpohl, T., Dunne, J., & Yuma, N. (2019). Toward computational simulations of behavior during automated driving takeovers: A review of the empirical and modeling literatures. Human Factors, 61(4), 642–688.
NACTO (2016). NACTO Policy Statement on Automated Vehicles. Retrieved from http://nacto.org/wp-content/uploads/2016/06/NACTO-Policy-Automated-Vehicles-201606.pdf
NTSB (2017). Collision between a Car Operating with Automated Vehicle Control Systems and a Tractor-
Semitrailer Truck, Williston, FL, May 7, 2016. Retrieved from
https://www.ntsb.gov/news/events/Documents/2017-HWY16FH018-BMG-abstract.pdf
NTSB (2018). Rear-End Collision Between a Car Operating with Advanced Driver Assistance Systems and a Stationary Fire Truck, Culver City, California, January 22, 2018.
NTSB (2019). Collision between vehicle controlled by developmental automated driving system and
pedestrian, HWY18MH010, Tempe, Arizona. Retrieved from
https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR1903.pdf
NTSB (2020a). Collision between a sport utility vehicle operating with partial driving automation and a crash attenuator, HWY18FH011, Mountain View, California. Retrieved from https://www.ntsb.gov/investigations/AccidentReports/Pages/HAR2001.aspx
NTSB (2020b). Highway Accident Brief: Collision Between Car Operating with Partial Driving Automation and Truck-Tractor Semitrailer. Retrieved from https://www.ntsb.gov/investigations/AccidentReports/Pages/HAB2001.aspx
NTSB (2020c). Tesla Crash Investigation Yields 9 NTSB Safety Recommendations. Retrieved from https://www.ntsb.gov/news/press-releases/Pages/NR20200225.aspx
Parasuraman, R., & Manzey, D. H. (2010). Complacency and Bias in Human Use of Automation: An
Attentional Integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
SAE. (2018). J3016_201806. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles.
Strayer, D. L., Getty, D., Biondi, F. N., & Cooper, J. M. (2020). The Multitasking Motorist and the Attention Economy. In S. M. Lane & P. Atchley (Eds.), Human Capacity in the Attention Economy. APA Press.
Sullivan, J. M., Flannagan, M. J., Pradhan, A. K., & Bao, S. (2016). Literature Review of Behavioral Adaptations to Advanced Driver Assistance Systems. AAA Foundation for Traffic Safety.
Teoh, E. R., & Kidd, D. G. (2017). Rage against the machine? Google’s self-driving cars versus human
drivers. Journal of Safety Research, 63(September), 57–60. https://doi.org/10.1016/j.jsr.2017.08.008
Transport Canada. (2019). Testing Highly Automated Vehicles in Canada. Retrieved from https://www.tc.gc.ca/en/services/road/safety-standards-vehicles-tires-child-car-seats/testing-highly-automated-vehicles-canada.html
USDOT (2018). Preparing for the Future of Transportation: Automated Vehicles 3.0. Retrieved from https://www.transportation.gov/av/3
Warm, J. S., Matthews, G., & Finomore, V. S. (2008). Vigilance, Workload, and Stress. In P. A. Hancock & J. L. Szalma (Eds.), Performance under stress (pp. 115–141). Ashgate.
Zhang, B., de Winter, J., Varotto, S., Happee, R., & Martens, M. (2019). Determinants of take-over time from automated driving: A meta-analysis of 129 studies. Transportation Research Part F: Traffic Psychology and Behaviour, 64, 285–307. https://doi.org/10.1016/j.trf.2019.04.020