J Intell Robot Syst (2011) 61:423–443
DOI 10.1007/s10846-010-9507-7
Optical Brain Imaging to Enhance UAV Operator
Training, Evaluation, and Interface Development
Justin Menda · James T. Hing · Hasan Ayaz · Patricia A. Shewokis · Kurtulus Izzetoglu · Banu Onaral · Paul Oh
Received: 1 February 2010 / Accepted: 1 September 2010 / Published online: 8 December 2010
© Springer Science+Business Media B.V. 2010
Abstract As the use of unmanned aerial vehicles expands to near earth applications and force multiplying scenarios, current methods of operating UAVs and evaluating pilot performance need to expand as well. Many human factors studies on UAV operations rely on self-reporting surveys to assess the situational awareness and cognitive workload of an operator during a particular task, which can make objective evaluations difficult. Functional Near-Infrared Spectroscopy (fNIR) is an emerging optical brain imaging technology that monitors brain activity in response to sensory, motor, or cognitive activation. fNIR systems developed during the last decade allow for a rapid, non-invasive method of measuring the brain activity of a subject while conducting tasks in realistic environments. This paper investigates deployment of fNIR for monitoring UAV operators' cognitive workload and situational awareness during simulated missions. The experimental setup and procedures are presented with some early results supporting the use of fNIR for enhancing UAV operator training, evaluation and interface development.

The U.S. Army Medical Research Acquisition Activity, 820 Chandler Street, Fort Detrick, MD 21702-5014 is the awarding and administering acquisition office. This investigation was funded under a U.S. Army Medical Research Acquisition Activity Cooperative Agreement, W81XWH-08-2-0573. The content of the information herein does not necessarily reflect the position or the policy of the U.S. Government or the U.S. Army, and no official endorsement should be inferred.

J. Menda (B) · H. Ayaz · P. A. Shewokis · K. Izzetoglu · B. Onaral
School of Biomedical Engineering, Science & Health Systems, Drexel University,
3141 Chestnut Street, Philadelphia, PA, USA
e-mail: jm973@drexel.edu

H. Ayaz
e-mail: ayaz@drexel.edu

P. A. Shewokis
e-mail: shewokis@drexel.edu

K. Izzetoglu
e-mail: ki25@drexel.edu

B. Onaral
e-mail: banu.onaral@drexel.edu

J. T. Hing · P. Oh
Department of Mechanical Engineering and Mechanics, Drexel University,
3141 Chestnut Street, Philadelphia, PA, USA

J. T. Hing
e-mail: jth23@drexel.edu

P. Oh
e-mail: paul@coe.drexel.edu
Keywords UAV safety · UAV training · UAV pilot interface · fNIR · Optical brain imaging
1 Introduction
The successful record of unmanned aerial vehicles (UAVs) in the military has fueled
a strong desire to increase their use, as well as to adapt these vehicles for civilian
applications. These expansions make the challenges of UAV operation, and the
speed and efficiency of training, more important than ever to address. Since UAV
accidents occur at a much higher rate than accidents in most manned aircraft (Fig. 1),
there is also a need to address the issue of UAV safety and performance. Historically,
the main contributing factors of UAV accidents have been associated with electromechanical failures [1]. However, as the technology has matured and materials for various UAV parts have improved, human error is increasingly becoming a main
factor in UAV mishaps [2].
Fig. 1 Accident rate of UAVs compared with manned aircraft. Data recreated from [7]
The Army classifies accidents into three causal categories:
human, material, and environmental [3]. Human causal factors are associated with
human error, and human error can be further broken down into Unsafe Acts, which
in turn are divided into the categories of Errors and Violations. Violations involve failures to follow rules and regulations. Errors can be decision errors, perceptual errors, and skill-based errors [4]. Skill-based errors can be attributed to a lack of training for a specific condition/task, resulting in poor execution such as over-control of the aircraft. Decision and perceptual errors are to some extent caused by a
lapse in situational awareness, which can result in inappropriate maneuvers, spatial
disorientation, and poor decisions. This classification scheme defines many ways in
which objective assessment of human factors, particularly cognitive factors, can be
instrumental in improving UAV operations.
Current techniques in UAV training and pilot evaluation face challenges when addressing issues such as cognitive workload and situational awareness during mission tasks, or when evaluating operator interfaces. Many of these types of studies
are based to some extent on self reporting surveys, such as the NASA Task Load
Index (NASA-TLX) [5]. While the NASA-TLX in particular has a good track
record for characterizing workload, self-reporting surveys in general inherently make
objective assessment of the cognitive workload for each subject difficult due to
differences in the level of expertise and personal opinions on what constitutes a high
mental workload from subject to subject. Recently, however, optical brain imaging
techniques have been developed that enable objective assessment of cognitive
workload [6]. If functional brain imaging can be used to monitor a UAV operator
during mission training, it could help the operators, crews, and commanders, as well
as facilitate research, to meet the demands in at least two ways:
1. Continuous, objective monitoring of an operator’s cognitive workload.
2. Assisting the development of pilot selection and training methods, interfaces,
and other technologies by providing a more direct, real-time measure of their
effects.
As users of UAVs move toward newer and untested applications, data about
operator cognitive workload and situational awareness becomes a very important
aspect of safe operations of UAVs. One example is the US Army's desire to have
UAV operators control more than one UAV at a time, which may drastically increase
the operators’ cognitive workload. Proposed civilian applications such as search-and-
rescue, surveillance, transportation, communications, payload delivery and remote
sensing will extend UAVs beyond high altitude and passive interaction with the
environment to lower altitudes (near earth environments) and active interaction
with objects in the environment [8]. In near earth environments, obstacles are much
more commonplace than in higher altitudes where, for example, Predator systems
operate. These types of environments not only require high situational awareness
from the pilot but also quick reflexes to account for obstacles in three dimensions.
The dynamic nature of near earth environments will also lead to higher probabilities
of rapidly changing mission plans. This new shift in the operation of UAVs will
require changes not only in the way UAV pilots are trained, but also in the way
that UAVs are currently operated, such as changes to the operator interface.
To address the aforementioned challenges, there have been a number of studies
to evaluate the situational awareness and cognitive workload requirements for
operators of teleoperated systems [9–11]. The goal of a UAV pilot interface is to
provide tools to the human operator for decision making, generating commands and
perception of the operating environment. This perception is known as situational
awareness (SA). The accepted definition of SA comes from [12] where it is broken
down into three levels. Level 1 SA is the perception of the elements in the operating
environment within a volume of time and space; Level 2 SA is the comprehension
of the meaning of those elements; Level 3 SA is the projection of their status in
the near future. Certainly, most interfaces are designed to maximize operator situational awareness (SA) while minimizing cognitive workload, two goals that are often in tension; the former requires the pilot to have more information, which may compromise the latter if the information is not presented in a highly intuitive manner. This effect is made both more serious and more urgent by the physical separation of the operator from the vehicle, some of the implications of which are discussed in Section 4 below. Thus, there are several unique challenges
inherent to UAV flight, and all of them affect the operator’s cognitive workload.
Low SA can be thought of as requiring higher cognitive activity to compensate for
the lack of intuitive cues, and complex mission scenarios inherently involve high
cognitive workload. Adding some measure of brain activity to the selection, training,
and operation of UAV pilots, as well as the development of new interfaces and
interface elements, could greatly improve the resolution of any assessments involved
therein.
This paper describes initial studies to assess the potential of functional near-
infrared (fNIR) technology as a relatively direct, noninvasive, real-time brain imag-
ing method that could turn any UAV pilot training or operating environment into
a “brain-in-the-loop” system with minimal additional complexity and cost. Such a
technology could have tremendous benefits to facilitate the rapid deployment of
UAVs and significant improvement in their safety and performance.
Section 2 of this paper details important aspects of the functional Near Infrared (fNIR) neuroimaging technology. Section 3 discusses how the fNIR system can be applied to evaluate pilot performance during an example UAV search mission. Section 4 builds on those results and presents how fNIR can be used in the evaluation of novel UAV piloting interfaces. Section 5 concludes this paper with a discussion
and presentation of future work.
2 The fNIR System
Functional near-infrared spectroscopy (fNIR) is a neuroimaging modality that en-
ables continuous, noninvasive, and portable monitoring of changes in blood oxy-
genation and blood volume related to human brain function [6,13,16,17,26,27,34].
fNIR technology uses specific wavelengths of light, introduced at the scalp (LED
emitters are employed in the version discussed herein), to enable the noninvasive
measurement of changes in the relative ratios of deoxygenated hemoglobin (deoxy-
Hb) and oxygenated hemoglobin (oxy-Hb) in the capillary beds during brain activity.
Current fNIR technology allows the construction of full-forehead sensors containing
all the required LED emitters and photodetectors embedded in a thin, flexible pad;
further miniaturization, including a fully self-contained wireless system, is under
development.
Fig. 2 Bottom: Block diagram of the fNIR sensor system. Top: flexible sensor housing containing 4 LED sources and 10 photodetectors. Reprinted from [13]
Data acquisition, transmission, and recording for a full-forehead sensor may currently be performed using widely available and very compact computer
systems, equivalent in power to an inexpensive personal computer. Bandwidth
requirements are easily met by USB 2.0 or current high-speed wireless standards.
These technologies allow for the design of portable, safe, affordable, noninvasive and
minimally intrusive monitoring systems that monitor frontal cortical areas supporting
executive functions (attention, working memory, response monitoring)—in essence,
turning any training, testing, or operational environment into a “brain-in-the-loop”
system with minimal added complexity (Fig. 2).
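The measurement principle above, in which two near-infrared wavelengths yield relative oxy- and deoxy-hemoglobin changes, is commonly computed with the modified Beer–Lambert law. A minimal sketch follows; the extinction coefficients, source–detector separation, and differential pathlength factor are illustrative placeholders, not the calibrated parameters of the fNIR device discussed here:

```python
import math

# Modified Beer-Lambert law: at each wavelength,
#   delta_OD = (eps_HbO2 * dHbO2 + eps_Hb * dHb) * d * DPF
# where delta_OD = -log10(I / I_baseline). Measurements at two
# wavelengths give a 2x2 linear system in dHbO2 and dHb.

# Placeholder extinction coefficients [1/(mM*cm)] at ~730 nm and ~850 nm.
EPS = {
    730: {"HbO2": 0.390, "Hb": 1.102},
    850: {"HbO2": 1.058, "Hb": 0.691},
}
D = 2.5    # source-detector separation (cm), illustrative
DPF = 6.0  # differential pathlength factor, illustrative

def hb_changes(intensity, baseline):
    """Return (dHbO2, dHb) in mM from intensities keyed by wavelength."""
    od = {w: -math.log10(intensity[w] / baseline[w]) for w in EPS}
    # Build the 2x2 system A @ [dHbO2, dHb] = od and solve by Cramer's rule.
    a11 = EPS[730]["HbO2"] * D * DPF
    a12 = EPS[730]["Hb"] * D * DPF
    a21 = EPS[850]["HbO2"] * D * DPF
    a22 = EPS[850]["Hb"] * D * DPF
    det = a11 * a22 - a12 * a21
    d_hbo2 = (od[730] * a22 - a12 * od[850]) / det
    d_hb = (a11 * od[850] - od[730] * a21) / det
    return d_hbo2, d_hb
```

In practice the conversion is performed per voxel (source–detector pair) on the continuously sampled data; the sketch shows only the algebra for a single sample.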
3 UAV Pilot Evaluation
3.1 Simulated UAV Mission: Search Task
As a first step, an integrated simulation environment was constructed to allow novice
pilots to operate a simulated MQ-1 Predator UAV. Missions and scenarios were
developed to represent a variety of tasks typical of UAV training and Predator
operation, such as visual search/target categorization tasks and flight maneuvers.
Subjects attempt to complete the scenarios’ objectives as data is gathered both from
within the simulation and by the fNIR sensor to measure their performance. Results
presented in this section are from initial trials with this simulation system, which
involved a visual search/vigilance task. The goal is to assess the sensitivity of fNIR to
high and low cognitive workload states during UAV flight. Currently ongoing studies
with other kinds of flight scenarios and slightly different goals will be discussed in the
Future Work section.
3.2 Experimental Setup
Given the preliminary nature of this study, it was deemed sufficient to use novice
civilian pilots and approximate a UAV operator’s station using widely available
and versatile off-the-shelf hardware and software. Thus, a Microsoft Windows-based
PC system was constructed to use Microsoft Flight Simulator X as the simulation
software (Fig. 3).
Microsoft Flight Simulator X is a readily available flight simulation program with
current support from the company as well as from a large worldwide community
of developers, both for the program itself and its SDK. It takes full advantage of
modern computer hardware to deliver a realistic simulation of every aspect of flight
with a variety of aircraft and under a wide range of conditions. Flight Simulator X
does not come with a UAV model, so we used the Predator add-on by First Class
Simulations. This add-on provides a realistic aircraft model of the MQ-1 Predator
UAV, as well as a realistic interface (based on publicly available information) that
can be adapted in many ways. We adapted the multiple displays from the typical
vertically stacked configuration to a horizontal arrangement to better suit the multi-
monitor configuration we used (Fig. 4).
To run the simulation reliably and with a high degree of realism, high performance
hardware was specified, including an Intel Core i7 925 CPU and an nVidia GeForce
GTX 280 graphics processor. The simulation is presented on a triple-display system
by Digital Tigers, using 19-inch LCD monitors with 4:3 aspect ratios in a horizontal
configuration. Subjects control the simulated Predator UAV using a Thrustmaster
HOTAS Cougar joystick-and-throttle system and a CH Pro Pedals rudder pedal
system, both selected for their ease of implementation as well as the manufacturers’
collaborations with government agencies and industrial corporations.
Fig. 3 Subject operating the Predator UAV simulator with fNIR sensor attached and data acquisition apparatus on far right
Fig. 4 Screenshot of flight simulation interface, prepared using Microsoft Flight Simulator X and First Class Simulations' Predator UAV add-on, customized for the simulation system in use
FS Recorder, an add-on for Flight Simulator X, was implemented to record
behavioral data during the simulated flights. This data includes the position and
speed of the aircraft, as well as control surface positions, and is recorded to measure
performance and for comparison with the fNIR data. In cooperation with FS
Recorder’s creator, we developed an application that sends signals to the fNIR data
acquisition computer over RS232, based on events in the simulation. The fNIR data
acquisition software receives these signals and places time-stamped markers on the
fNIR data accordingly, allowing synchronization of the behavioral data gathered
from the simulation with fNIR data.
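The synchronization scheme above reduces to aligning event timestamps with the fNIR sample stream. A minimal sketch of the alignment step is shown below; the function names and the nearest-sample policy are our own assumptions, not the behavior of FS Recorder or the fNIR acquisition software:

```python
from bisect import bisect_left

def nearest_sample(sample_times, event_time):
    """Index of the fNIR sample whose timestamp is closest to event_time."""
    i = bisect_left(sample_times, event_time)
    if i == 0:
        return 0
    if i == len(sample_times):
        return len(sample_times) - 1
    before, after = sample_times[i - 1], sample_times[i]
    return i if after - event_time < event_time - before else i - 1

def place_markers(sample_times, events):
    """Map each (event_time, label) pair to the index of its nearest sample."""
    return [(nearest_sample(sample_times, t), label) for t, label in events]
```

At a low optical sampling rate (fNIR systems commonly sample on the order of a few hertz), this nearest-neighbor assignment bounds the marker placement error to half a sample period, which is small relative to the hemodynamic response.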
3.3 Experiment Procedure
Prior to the study, all participants signed informed consent statements approved by
the Human Subjects Institutional Review Board at Drexel University and by the U.S.
Army Medical Research and Materiel Command (USAMRMC), Office of Research
Protections (ORP), Human Research Protection Office (HRPO).
Several highly challenging flight scenarios have been designed, representing a
variety of tasks required of UAV operators (coordinate-based navigation, landing,
visual search/target categorization, etc.) and incorporating a variety of workload
factors (e.g. crosswinds, cloud cover, fuel constraints, etc.). After an “introduction”
session for the purpose of familiarization with the protocol and simulation, subjects
fly these scenarios during eight subsequent flight sessions. Each subject performs one
flight session per day, each lasting approximately two hours, for a total of 18 hours
over 9 days per subject.
To determine each subject’s cognitive baseline, an initial period of 30 s to 1 min
of simple resting with eyes closed, another period of 30 s to 1 min of simple resting
with eyes open, and an additional period of 5 min with a Psychomotor Vigilance Task
(PVT) [14] are performed at the beginning of each session in order to gather fNIR
data that can be used to contrast with active flight simulator operation periods.
In the first session, after being given an overview of the experiment and providing
informed consent, each subject completes the Edinburgh Handedness Inventory [15]
and a brief questionnaire regarding previous flight and video game experience. Then,
the fNIR headware is attached and the aforementioned baseline procedures are
performed. After baseline procedures are complete, subjects perform an intro flight
of up to 1.5 h, during which they are introduced to the UAV simulation, given an
overview of the interface, and then required to complete a guided tutorial session
in order to develop familiarity with the simulation and the very basic proficiencies
required. Only the subjects who are able to complete this session by demonstrating
the required proficiencies by the end of the session are permitted to continue.
In sessions 2 through 9, each subject attempts one or more of the flight scenarios,
with the fNIR device attached and gathering data during the flight. At the end of each
session, a confidence survey and the NASA-TLX are administered to allow subjects
to self-rate overall performance. Some experimental protocols involve repetition of
certain brief scenarios several times in each flight session; for these, subjects are
additionally asked to rate their own performance on each repetition on a scale of
1 to 10.
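For reference, the NASA-TLX administered at the end of each session produces an overall workload score from six subscale ratings (0–100), each weighted by the number of the 15 pairwise comparisons it wins. A sketch of the standard scoring rule, with hypothetical ratings and weights:

```python
# NASA-TLX overall workload: each of the six subscales is rated 0-100
# and weighted by how many of the 15 pairwise comparisons it won (0-5).

SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def tlx_overall(ratings, weights):
    """Weighted NASA-TLX workload score (0-100)."""
    assert sum(weights.values()) == 15, "weights must come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Hypothetical post-flight ratings and pairwise-comparison weights:
ratings = {"mental": 80, "physical": 20, "temporal": 60,
           "performance": 40, "effort": 70, "frustration": 50}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
```

With these hypothetical values the weighted score is (80·5 + 20·0 + 60·3 + 40·2 + 70·4 + 50·1)/15 = 66.0.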
3.4 Preliminary Results of Validation Study
For validation of the fNIR data with our early study on target categorization
(attention tasks), preliminary analysis was performed on the functional brain imaging
data while subjects performed a visual search/target categorization task. In this
protocol, each subject performed one scenario per session in which they were asked
to fly along a coastline and search for a submarine, knowing only that the submarine would be just offshore. When they noticed the submarine, subjects were asked to press
a button that caused a time-stamped marker to be sent to the fNIR data acquisition
software. To date, five healthy subjects with no neurological or psychiatric history
(ages 18 to 35) voluntarily participated in the study. These data compare average
oxygenation changes before and after detection of the submarine.
Block analysis was used to identify fNIR data corresponding to initial eyes-
open and eyes-closed rest periods, PVT tasks, and UAV simulation time. Time
synchronization markers that indicate key events during flight were time-correlated
with fNIR data. A linear-phase, finite impulse response (FIR) low-pass filter with a cutoff frequency of 0.2 Hz was applied to the 16-voxel raw fNIR data to eliminate
high frequency noise. For oxygenation calculations, a modified Beer–Lambert law was applied to the data to calculate oxy-hemoglobin and deoxy-hemoglobin concentration changes.
Fig. 5 Normalized average HbT in all 16 voxels from 5 subjects in the target search task. Pink represents average HbT in the 100 s before the reported target sighting; blue represents the same measure for the 100 s after the reported target sighting
Fig. 6 Average normalized HbT values for voxel 4 before and after submarine sighting, with standard error bars
fNIR signal data were then averaged over 100 s before and
after each subject indicated locating the submarine in each trial. The averages of
total hemoglobin (HbT) concentration changes were calculated for pre- and post-
blocks and normalized using z-score calculation for each pair independently. Figure 5
displays average total hemoglobin concentration changes over all recorded trials
from five subjects. Several trials had to be discarded due to minor technical issues affecting signal quality, resulting in a total of 5 trials per subject. For two subjects, only four trials were available for certain voxels; each missing value was imputed with the mean of that voxel's remaining trials. Separate 2 × 5 (Block [Pre, Post] × Trial) repeated measures ANOVAs on both factors were calculated for voxels 2 and 4. The main effect of block was significant for voxel 2, F(1,16) = 23.67, p < 0.01, and for voxel 4, F(1,16) = 53.25, p < 0.01. This is in line with our previous findings
that reported a relationship of higher oxygenation in this cortical area with higher
cognitive workload [16,17]. This preliminary result indicates that fNIR is sensitive
to differences between high and low cognitive workload states in a task relevant to
UAV operation (Fig. 6).
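The processing chain described above (linear-phase low-pass filtering, 100 s pre/post block averaging, and z-score normalization per pair) can be sketched as follows. This is our reading of the pipeline, not the study's actual code: the Hamming-windowed sinc design is a generic linear-phase FIR stand-in, the 2 Hz sampling rate is illustrative, and z-scoring is applied across the combined pre/post window before block averaging.

```python
import math

def lowpass_fir(cutoff_hz, fs_hz, n_taps=51):
    """Linear-phase FIR low-pass taps via a Hamming-windowed sinc."""
    fc = cutoff_hz / fs_hz  # normalized cutoff
    m = n_taps - 1
    taps = []
    for n in range(n_taps):
        x = n - m / 2
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h * w)
    s = sum(taps)
    return [t / s for t in taps]  # unity gain at DC

def filter_centered(signal, taps):
    """Centered convolution with edge padding, keeping the original length."""
    half = len(taps) // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(taps[k] * padded[i + k] for k in range(len(taps)))
            for i in range(len(signal))]

def pre_post_means_z(hbt, marker_idx, fs_hz, window_s=100):
    """z-score the pre+post window samples, then average each block."""
    n = int(window_s * fs_hz)
    pre = hbt[max(0, marker_idx - n):marker_idx]
    post = hbt[marker_idx:marker_idx + n]
    block = pre + post
    mu = sum(block) / len(block)
    sd = math.sqrt(sum((x - mu) ** 2 for x in block) / len(block)) or 1.0
    z = [(x - mu) / sd for x in block]
    return sum(z[:len(pre)]) / len(pre), sum(z[len(pre):]) / len(post)
```

An oxygenation rise after the marker shows up as a positive post-block mean and negative pre-block mean, mirroring the pre/post contrast plotted in Figs. 5 and 6.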
The location of voxel 4 is illustrated on the brain's surface below [18] (Fig. 7).
4 UAV Pilot Interface Evaluation
As the appeal and proliferation of UAVs increase, they are beginning to encounter
environments and scenarios for which they were not initially designed [19]. As such,
changes to the way UAVs are operated, specifically the operator interface, are
being developed to address the newly emerging challenges. An increasingly popular
approach is the introduction of multimodal interfaces, which allow for increased visual information, such as mission plans, virtual three-dimensional views, and multiple control modes based on the scenario the UAV is encountering [20]. While the
increased amount of data has been shown to improve situational awareness, it comes at the cost of increased cognitive workload for the operator [10].
Fig. 7 Location of voxel 4 on the brain surface
The visual scanning between different display windows can also cause operators to rely on and focus attention on only one part of the display. This is known as cognitive tunneling
[21]. The application of fNIR during operator trials with new UAV interfaces may
be able to produce an objective assessment of operator workload, thus providing a
vital source of input for evaluating and improving the interface.
There are many challenges to face when designing new UAV interfaces and trying to incorporate high situational awareness and telepresence for a UAV pilot. For one,
the pilot is not present in the remote vehicle and therefore has no direct sensory
contact (kinesthetic/vestibular, auditory, smell, etc.) with the remote environment.
The visual information relayed to the UAV pilot is usually of a degraded quality
when compared to direct visualization of the environment. This has been shown to
directly affect a pilot’s performance [22]. The UAV pilot’s field of view is restricted
due to the limitations of the onboard camera. The limited field of view also causes
difficulty in scanning the visual environment surrounding the vehicle and can lead
to disorientation [22]. Colors in the image can also be degraded which can hinder
tasks such as search and targeting. Different focal lengths of the cameras can cause
distortion in the periphery of images and lower image resolution, affecting the pilot’s
telepresence [23]. Other aspects causing difficulties in operations are large motions in
the display due to the camera rotating with the UAV, and little sense of the vehicle's size in the operating environment. The latter knowledge is highly important when operating in
cluttered environments. Data lag in the video images as well as control commands
also leads to increased task completion times and, in some cases, uncontrolled
operation [24].
Prior research from the authors [25] has introduced a mixed-reality chase view
interface for UAV operations in near earth environments to address many of these
issues.
Fig. 8 Onboard camera view from a teleoperated rotorcraft flying in a near earth environment
A standard onboard camera view is shown in Fig. 8, which demonstrates many
of the challenges described earlier. The chase view interface (seen in Fig. 9) is similar
to a view from behind the aircraft. It combines a real world onboard camera view with
a virtual representation of the vehicle and the surrounding operating environment.
Figure 9 demonstrates the successful implementation of the interface in a real world
UAV search scenario.
A similar approach was employed by Drury et al. [31], based on their continuing
work to define and evaluate situational awareness in unmanned vehicle operations
(see [32]). Their findings indicated that the augmented display improved compre-
hension of spatial relationships between a UAV and elements of the environment
in observational tasks. Similar results were suggested by Cooper et al. [33], namely
that a mixed reality system may improve an operator’s ability to localize targets in
visual search tasks. Although the tasks involved in these studies were comparable to
those performed by operators of high-altitude UAVs, improved comprehension of
spatial relationships and object localization are highly relevant to UAV operations
at low altitudes and in urban/cluttered environments.
Fig. 9 The mixed reality chase view interface. The real world onboard camera view from the UAV is rotated so that the horizon is level. Surrounding the real world image is a virtual representation of the flight environment which augments the field of view. Also integrated into the display is a virtual representation of the size and pose of the UAV in the environment
The present work aims to
assess whether a mixed-reality interface can provide comparable benefits in those
environments.
We make use of an indoor gantry system, also presented in [25], which was
developed as a means to evaluate factors relevant to UAV operations in near-earth
environments. The indoor gantry was used to safely test and evaluate the chase
view interface using different pilots and mission scenarios without the risk of costly
crashes. Early results of indoor gantry trials showed an improvement in pilot control
and precision positioning of an aircraft using the chase view interface as compared
with a standard onboard camera view. These results supported the integration of
fNIR to study the cognitive workload of the subjects during trials.
4.1 Mixed-Reality Piloting System: Apparatus
Many of the major details of the setup used in this work are similar to the setup
described in [25]. It involves the use of a large indoor 6 degree of freedom gantry
system (seen in Fig. 10) that has real world UAV sensors on the end effector. Inside
the gantry workspace is a recreation of a real world flight environment. The dynamics
of the end effector are driven by the output from a flight simulation program running
on a PC. For the tests detailed in this work, the aircraft used is a model of the
Mako UAV from NAVMAR Applied Sciences. The Mako UAV is a fixed wing
aircraft weighing 140 lbs with a 13-ft wingspan that is currently one of the UAV models in operation in Iraq. While the authors believe rotorcraft are more suitable
for near earth operations due to hovering capabilities, early studies with the newly developed UAV pilot interface are accelerated by conducting studies with the more intuitive controls of a fixed wing aircraft for beginner pilots.
Fig. 10 The arm to which the aircraft is attached is the gantry end effector
Fig. 11 Block diagram of the indoor 6DOF gantry system showing the integration with the flight simulator (X-Plane) and the UAV sensors
A block diagram of
the system can be seen in Fig. 11; more details can be found in [25]. As seen in the
block diagram, the pilot views the interface and uses a joystick to command throttle,
rudder, aileron and elevator positions to the aircraft. The flight simulator calculates
the resulting aircraft dynamics and sends the positions to the gantry to move the end
effector. These positions are scaled to increase the amount of usable workspace in
the gantry. The yaw, pitch and roll angles of the aircraft are fed into a 3 degree-of-freedom (3-DOF) pan-tilt-roll unit that carries the UAV sensors and is attached to
the end effector of the gantry. Data from the sensors, specifically the onboard camera
images for this work, are sent back to the pilot interface computer and are used to
create the onboard camera view and chase view. The chase view interface has been
improved from prior work by adding an alpha blending to the boundary between
the real world camera view and the surrounding virtual environment. This decreases
the distraction caused by the high contrast at the boundary of the real world camera
images. Integration of the fNIR system and changes to the gantry environment, as
well as new experiment protocols, are highlighted in this subsection (Table 1).
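The alpha blending described above can be sketched as a per-pixel linear mix whose weight ramps from the real camera image in the interior to the virtual scene at the border. This is a generic compositing sketch under our own assumptions, not the interface's actual implementation:

```python
def blend_pixel(real, virtual, alpha):
    """Linear mix of two RGB pixels; alpha=1 keeps the real camera pixel."""
    return tuple(round(alpha * r + (1 - alpha) * v) for r, v in zip(real, virtual))

def border_alpha(x, y, width, height, ramp):
    """1.0 in the image interior, falling linearly to 0.0 over `ramp` pixels at the edge."""
    d = min(x, y, width - 1 - x, height - 1 - y)  # distance to nearest edge
    return min(1.0, d / ramp)
```

Because the weight falls off gradually over the ramp width rather than switching abruptly, the high-contrast seam between the camera image and the surrounding virtual environment is softened, which is the distraction the interface change targets.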
The gantry environment as seen in Fig. 12 consists of two flight levels. The lower level contains corridors and two tall, pole-shaped obstacles. The upper level contains a series of colored spherical fiducials attached to the tops of the corridor walls and obstacles.
Table 1 Selected specifications of the 6DOF gantry system
Parameter               Value   Units
X-axis range (length)   17.72   ft
Y-axis range (width)    13.84   ft
Z-axis range (height)    5.43   ft
X, Y, Z-axis speed       2      ft/s
X-axis acceleration      2.95   ft/s²
Y-axis acceleration      3.69   ft/s²
Z-axis acceleration      7.40   ft/s²
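The position scaling mentioned in the block-diagram description can be sketched using the axis travel from Table 1. The world extents, the clamping policy, and all names below are our own assumptions for illustration, not the system's actual mapping:

```python
# Gantry axis travel from Table 1, in feet.
GANTRY_RANGE = {"x": 17.72, "y": 13.84, "z": 5.43}

def world_to_gantry(pos_ft, world_extent_ft):
    """Scale a simulated position into the gantry travel, clamping at the limits.

    pos_ft: world coordinates in feet, relative to the world origin.
    world_extent_ft: full extent of the simulated flight volume per axis.
    """
    out = {}
    for axis, p in pos_ft.items():
        scale = GANTRY_RANGE[axis] / world_extent_ft[axis]
        out[axis] = min(max(p * scale, 0.0), GANTRY_RANGE[axis])
    return out
```

With the 1:43.5 environment scale noted in Fig. 12, the 17.72 ft of x-axis travel corresponds to roughly 770 ft of full-scale flight space, which is why scaling the simulator's output positions makes the most of the workspace.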
Fig. 12 Left: Top-down view of the full scale replication of the gantry environment in simulation. Right: Top-down view of the 1:43.5 scale environment inside the gantry workspace. Color markers for the upper level task are represented by blue, red and green spheres
4.2 Procedure
To assess the efficacy of the two conditions, eleven laboratory personnel volunteered
to test the conditions and to finalize the methodology. Six operated under the experimental (chase view) condition and five operated in the control (onboard camera)
condition. Figure 13 shows the onboard camera view and chase view interface used
in the experiments.
Seven sessions were used in this methodological testing, of which six were actual
flight sessions. The fNIR sensor was placed on each participant’s forehead during all
six flight sessions.
Fig. 13 Left: Onboard camera view during flight through the gantry environment. Right: Mixed-reality
chase view interface during flight through the gantry environment
Each flight was preceded by an eyes-closed rest period of 20 s to provide baseline
fNIR data.
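A rest baseline of this kind is typically used by expressing subsequent measurements relative to the rest-period mean. A minimal sketch of that step (not the authors' processing code; the sampling rate is whatever the fNIR device provides):

```python
import numpy as np

def baseline_correct(signal, fs, rest_seconds=20):
    """Express an fNIR time series relative to an eyes-closed rest baseline.

    signal: 1-D array with the rest period first and task data after.
    fs: sampling rate in Hz.
    Returns the task portion with the rest-period mean subtracted, so
    task-period values represent changes relative to baseline.
    """
    n_rest = int(rest_seconds * fs)          # samples in the rest window
    baseline = signal[:n_rest].mean()        # mean level at rest
    return signal[n_rest:] - baseline        # task data as change from rest
```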
In the first session, after review of the methodology and processes, each person
completed the Edinburgh Handedness Inventory [15] and a brief questionnaire
regarding previous flight and video game experience. Prior to testing the conditions,
each person was given a 15-min introduction and free-flight session to become
familiar with the dynamics of the aircraft and the flight controller.
For sessions 2 through 7, subjects performed simulated flights with three performance
goals. The first goal was to fly through the test environment while maintaining
a safe distance from the corridor walls and obstacles. The second goal was to correctly
fly in the appropriate directions around obstacles placed inside the environment.
Prior to the flight, each individual was told in which direction to fly around each
obstacle (e.g. to the right of the first, to the left of the second). The final goal was to
fly over the color targets in a specific order as indicated prior to the flight. At the end
of each session, the NASA-TLX was completed [5].
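For reference, the NASA-TLX overall weighted rating combines six subscale ratings (0 to 100) using weights obtained from 15 pairwise comparisons between subscales. A minimal sketch of that scoring rule (the dictionary keys below are abbreviations chosen for illustration):

```python
def nasa_tlx_weighted_rating(ratings, tally):
    """Compute the NASA-TLX overall weighted workload rating.

    ratings: dict mapping each of the six subscales to a 0-100 rating.
    tally: dict mapping each subscale to its number of wins in the 15
           pairwise comparisons (0-5 each, summing to 15 overall).
    Returns the weighted average workload score on a 0-100 scale.
    """
    scales = ["mental", "physical", "temporal",
              "performance", "effort", "frustration"]
    assert sum(tally[s] for s in scales) == 15, "pairwise tally must sum to 15"
    # Weighted rating: sum of (rating x weight) over the six subscales,
    # divided by the total number of pairwise comparisons.
    return sum(ratings[s] * tally[s] for s in scales) / 15.0
```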
During each session, four different flight paths through the obstacle course were
flown with the fNIR device attached to the participant's forehead. Information
on events within the simulation, on the aircraft's attitude, altitude, and speed, and
on the motion of the controllers was recorded for later assessment alongside the
fNIR data.
4.3 Preliminary Results
As mentioned previously, the chase view interface was shown in prior work to improve
pilot behavioral performance, which is measured from data such as the performer's
accuracy in positioning the aircraft. The behavioral results from this methodological
testing likewise showed improved pilot performance.
An example can be seen in Fig. 14, which displays a measure termed “marker
error”: the distance from the nearest point of the flight path to the center of the
marker during the upper-level flight task.

Fig. 14 Marker error (m) for the Chase and Onboard view groups during upper-level flight; marker
error is the distance from the nearest point of the flight path to the center point of the marker.
(Bar chart; vertical axis spans 0.0–16.0 m.)

In this respect, Chase view subjects had
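The marker error metric can be computed directly from logged aircraft positions. A small sketch, assuming the flight path is recorded as 3-D position samples (an assumption about the log format, not a detail given in the paper):

```python
import numpy as np

def marker_error(path, marker_center):
    """Marker error: distance from the nearest point of the recorded
    flight path to the center of a target marker.

    path: (N, 3) array-like of logged aircraft positions.
    marker_center: (3,) array-like, the marker's center point.
    """
    diffs = np.asarray(path, float) - np.asarray(marker_center, float)
    # Euclidean distance of every path sample to the marker center,
    # then take the minimum over the whole path.
    return float(np.sqrt((diffs ** 2).sum(axis=1)).min())
```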
significantly lower error than Onboard view subjects. This result shows that chase
view subjects had greater accuracy when positioning the aircraft over the markers.
This was attributed to the increased awareness of the location of the aircraft within
the flight environment that the chase view permits. However, previously unstudied
were the effects of the chase view interface on the cognitive workload of the subjects.
Table 2 summarizes the parameters and results for the analysis of key behavioral
and cognitive measures. For the marker error data, a non-parametric Mann–Whitney
U-test with two-tailed exact probability (Z = 2.315, p = 0.021) was calculated.
Given the preliminary nature of the data, we conducted Monte Carlo permutation
tests with 10,000 replications and bootstrapped 95% confidence intervals of the
Monte Carlo simulations. For the cognitive data, the NASA-TLX gave a subjective
workload assessment for each subject and each session. Chase and Onboard views
were compared for each of the variables (adjusted weight rating, mental demand)
using a nonparametric Mann–Whitney U test (p < 0.05 for significance) to
assess differences between the Onboard view and Chase view groups’ subjective
workloads. Similar to the behavioral data, Monte Carlo permutation tests (10,000
replications) and bootstrapped 95% confidence intervals of the Monte Carlo tests
were calculated. The hemodynamic response features from the fNIR measures (i.e.,
mean and peak oxy-Hb, deoxy-Hb, oxygenation) were also analyzed. The fNIR
measurements were first cleaned of motion artifacts [26]. A linear-phase, finite
impulse response (FIR) low-pass filter with a cut-off frequency of 0.2 Hz was applied
to the 16-voxel raw fNIR data for each subject to eliminate high-frequency noise. For
oxygenation calculations, a modified Beer–Lambert Law was applied to the data to
calculate oxy-hemoglobin and deoxy-hemoglobin concentration changes. Analysis
was run on all subjects. Nonparametric Mann–Whitney U tests with View as the
grouping variable (onboard and chase as the levels) were performed across the
flights to determine if there were median differences in mean and peak oxygenation
for the voxels (α=0.05). In addition, we conducted Monte Carlo permutation tests
(10,000 replications) with bootstrapped 95% confidence intervals to estimate the
measures and variability for future work.
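As a rough illustration of this preprocessing chain, the FIR low-pass filtering and the modified Beer–Lambert conversion might be sketched as below. The extinction coefficients, wavelengths, differential pathlength factor, and source-detector distance are illustrative placeholders, not the actual values for the fNIR device used here:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def lowpass_fnir(raw, fs, cutoff=0.2, ntaps=101):
    """Linear-phase FIR low-pass filter (0.2 Hz cut-off) to remove
    high-frequency noise from a raw fNIR time series.
    Applied forward and backward (filtfilt) for zero phase distortion."""
    taps = firwin(ntaps, cutoff, fs=fs)
    return filtfilt(taps, [1.0], raw)

# Illustrative extinction coefficients [oxy-Hb, deoxy-Hb] at two
# near-infrared wavelengths (placeholder numbers, not a device table).
EXT = np.array([[1.49, 3.84],   # ~730 nm
                [2.53, 1.80]])  # ~850 nm

def mbll_concentration_changes(I, I0, dpf=6.0, distance=2.5):
    """Modified Beer-Lambert law: convert detected light intensities at
    two wavelengths into oxy-/deoxy-hemoglobin concentration changes.

    I:  (2, N) intensities during the task, one row per wavelength.
    I0: (2,) baseline intensities at the same wavelengths.
    dpf, distance: differential pathlength factor and source-detector
    separation in cm (typical textbook values, not calibrated ones).
    """
    delta_od = np.log10(I0[:, None] / I)   # optical density change
    # Solve EXT @ [d_oxy, d_deoxy] = delta_od / (dpf * distance)
    # for every time sample at once.
    return np.linalg.solve(EXT, delta_od / (dpf * distance))
```

Oxygenation change can then be taken as the oxy-Hb minus deoxy-Hb rows of the result, consistent with the mean and peak features analyzed above.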
The cognitive workload hypothesis was that the onboard camera view would result
in a higher task workload and mental demand of the subject due to the increased
need to mentally map and predict the aircraft position using the onboard camera
perspective. The NASA-TLX results are shown in Figs. 15 and 16. When comparing
the overall task load score and the mental demand score between Chase view and
Onboard view, no statistically significant differences were found (p = 0.395 for the
overall score and p = 0.103 for the mental demand score). In Table 2, the task
weighted workload shows a significant difference for the permutation tests
(p = 0.004), along with a bootstrapped confidence interval whose width does not
contain zero. Consequently, additional testing with a sufficient sample size, as well
as tasks that are more cognitively demanding, may permit the NASA-TLX to
distinguish between the onboard camera and chase views.

Table 2 Parameters and results of Mann–Whitney U and Monte Carlo permutation tests for key
behavioral and cognitive measures

Dependent measure   Mean        Exact probabilities   Monte Carlo   Bootstrap   Bootstrap lower   Bootstrap upper
                    difference  Z (p-value)           p-value       mean        limit 95%         limit 95%
Marker error        0.551       2.315 (0.021)         0.019         0.554       0.077             1.104
Mental demand       72.359      0.854 (0.393)         0.004         72.524      17.387            122.903
Weighted rating     2.801       1.634 (0.102)         0.440         2.857       10.069            4.296
Mean oxygenation    0.386       2.001 (0.045)         0.020         0.387       0.387             0.059
Max oxygenation     0.386       2.367 (0.018)         0.029         0.385       0.725             0.046

Fig. 15 Task load index weighted rating across sessions; Onboard view subjects left, Chase view
subjects right
While the subjective tests showed no significant differences, the fNIR analysis did.
The difference in average oxygenation changes between the Chase and Onboard view
groups was significant (z = −2.001, p = 0.045), with the Onboard view group
significantly higher than the Chase view group. These results are shown in the top
portion of Fig. 17.
Fig. 16 Mental demand across sessions; Onboard view subjects left, Chase view
subjects right
Fig. 17 Oxygenation changes for the chase view and onboard view groups; for this
comparison, signal level is the key feature. Top: Average oxygenation changes for
the chase view and onboard view groups; the onboard view group's levels are higher.
Bottom: Maximum oxygenation changes for the chase view and onboard view groups;
again the onboard view group's levels are higher
The difference in maximum oxygenation changes between the Chase view and Onboard
view groups was also significant (z = −2.367, p = 0.018). Figure 17, bottom, shows
that the Onboard view group had a higher maximum oxygenation change than
the Chase view group.
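The nonparametric comparison used throughout these results (Mann–Whitney U test, Monte Carlo permutation test on the group mean difference, and a bootstrapped 95% confidence interval) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(chase, onboard, n_perm=10_000, n_boot=10_000, seed=0):
    """Two-group comparison: Mann-Whitney U test, Monte Carlo
    permutation test on the mean difference (onboard - chase), and a
    bootstrapped 95% CI for that difference."""
    rng = np.random.default_rng(seed)
    chase = np.asarray(chase, float)
    onboard = np.asarray(onboard, float)
    u, p_exact = mannwhitneyu(chase, onboard, alternative="two-sided")

    # Permutation test: shuffle group labels and count how often the
    # shuffled |mean difference| is at least as extreme as observed.
    observed = onboard.mean() - chase.mean()
    pooled = np.concatenate([chase, onboard])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[len(chase):].mean() - pooled[:len(chase)].mean()
        if abs(diff) >= abs(observed):
            count += 1
    p_perm = (count + 1) / (n_perm + 1)

    # Bootstrap: resample each group with replacement and collect the
    # distribution of mean differences.
    boots = np.empty(n_boot)
    for i in range(n_boot):
        boots[i] = (rng.choice(onboard, len(onboard)).mean()
                    - rng.choice(chase, len(chase)).mean())
    ci = np.percentile(boots, [2.5, 97.5])
    return {"U": u, "p_exact": p_exact, "p_perm": p_perm,
            "mean_diff": observed, "ci95": (float(ci[0]), float(ci[1]))}
```

A confidence interval that excludes zero corresponds to the "width not containing zero" criterion applied to the bootstrapped results above.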
These comparisons were performed on voxel four. As stated earlier in this
paper, activation in the brain area corresponding to voxel four has been found
to be sensitive during completion of standardized cognitive tasks dealing with
concentration, attention, and working memory [6,26,27]. Higher oxygenation in
this area is well correlated with higher cognitive workload. Chase subjects showed
considerably lower average oxygenation levels in voxel four than Onboard subjects
did, indicating that the onboard camera view produced higher cognitive workload.
This result is most likely attributable to the narrower viewable angle and rolling of
the environment in the onboard view, which require more cognitive processing by
the subject to construct an accurate working mental model of the environment and
the aircraft’s position in it.
5 Conclusion and Future Work
5.1 Conclusion
fNIR is a portable, safe and cost-effective neuroimaging technique that allows
measurement of brain activity of human subjects in real-life environments. This
makes fNIR a perfect candidate for measuring brain activity of UAV operators and
tracking their cognitive workload and situational awareness during simulated and
real-life tasks, which could provide a crucial objective assessment of those factors.
The experimental setup and procedures are in place for testing, and preliminary
results suggest that fNIR can be successfully deployed.
Preliminary trials in the Pilot Evaluation studies indicate that fNIR is sensitive to
high and low cognitive workload states of a UAV operator in a visual search/vigilance
task. Meanwhile, results from the Interface Development study indicate that the
fNIR signal is responsive to factors that are relevant to situational awareness
and affect UAV operator performance and safety. If these results are validated and
elaborated by future studies, fNIR could become a powerful tool in monitoring and
improving human factors in UAV operations.
5.2 Future Work
There is considerable evidence that task practice and the corresponding increase in
skill is correlated with changes in the extent or intensity of activations, particularly
in the attentional and control areas—prefrontal cortex (PFC), anterior cingulate
cortex (ACC) and posterior parietal cortex (PPC) [28]. This finding is true whether
the task is primarily motor (e.g., a golf swing [29]) or primarily cognitive in nature
(e.g., the Tower of London problem [30]). Both practice and the development
of expertise (the latter of which includes individual differences in performance)
typically involve decreased activation across attentional and control areas, freeing
these neural resources to attend to other incoming stimuli or task demands. As
such, measuring activation in these attentional and control areas relative to task
performance can provide an index of level of expertise. Thus, since fNIR can easily
monitor PFC areas, it may be able to monitor the progress of training and the
transition from novice to expert. As a first step toward investigating this idea, a new
protocol in the pilot evaluation study has recently been initiated to monitor subjects
with fNIR as they practice brief and demanding flight maneuvers and navigation
tasks.
Further, a measure of expertise based on brain imaging techniques monitoring
the attentional and control resources the individual must utilize to maintain that
level of performance could be expected to differentiate between relatively lesser and
greater expertise. That is, even at 98–100% performance levels, where performance
measures cannot differentiate between trainee capacities, some individuals will be
performing at close to their peak performance, whereas others will be well below
their performance capacity. An assessment of the cortical activity necessary to
perform at a given level would indicate the cognitive resources still available for
additional situational demands, consistent with greater expertise. Whether fNIR can effectively
measure this cognitive reserve and make such fine-grained distinctions among levels
of performance capacity will be another target of current and future studies.
Above all, it will be crucial to begin more ecologically valid work as soon as
possible. Studies with real UAV trainees and pilots at real ground stations are already
being planned pending the outcome of the current work.
Acknowledgements The authors would like to thank Adrian Curtin for programming and co-
designing the flight scenarios in the pilot evaluation experiments, and for developing the software
to facilitate data collection in those experiments. We would also like to thank Matthias Neusinger
for his FS Recorder add-on to Flight Simulator X, and for his work with Adrian on customizing it to
our needs.
References
1. Schmidt, J., Parker, R.: Development of a UAV mishap factors database. In: Association of
Unmanned Vehicle Systems, pp. 310–315. Washington, DC (1995)
2. Rash, C.E., Leduc, P.A., Manning, S.D.: Human factors in U.S. military unmanned aerial vehicle
accidents. Human Factors of Remotely Operated Vehicles 7, 117–131 (2006)
3. Manning, S., et al.: The role of human causal factors in US army unmanned aerial vehicle
accidents, USAARL Report No. 2004-11. U.A.A.R. Laboratory, Editor. Fort Rucker, AL (2004)
4. Shappell, S., Wiegmann, D.: The Human Factors Analysis and Classification System—HFACS.
Office of Aviation Medicine, Washington, DC (2000)
5. Hart, S.G., Staveland, L.E.: Development of the NASA-TLX (Task Load Index): results of
empirical and theoretical research. Human Mental Workload, pp. 139–183 (1988)
6. Izzetoglu, M., et al.: Functional brain imaging using near infrared technology for cognitive
activity assessment. In: IEEE Engineering in Medicine and Biology Magazine, Special issue
on the Role of Optical Imaging in Augmented Cognition, pp. 38–46 (2007)
7. Weibel, R.E., Hansman, R.J.: Safety Considerations for Operation of Unmanned Aerial Vehicles
in the National Airspace System. MIT International Center for Air Transportation, Cambridge
(2005)
8. Oh, P.Y., Valavanis, K., Woods, R.: UAV workshop on civilian applications and commercial
opportunities. In: IEEE International Conference on Robotics and Automation. Pasadena, CA
(2008)
9. Dixon, S.R., Wickens, C.D.: Control of Multiple-UAVs: a workload analysis. In: International
Symposium on Aviation Psychology. Dayton, OH (2003)
10. Kaber, D.B., Onal, E., Endsley, M.R.: Design of automation for telerobots and the effect on
performance, operator situation awareness, and subjective workload. Hum. Factors Ergon. Manuf.
10(4), 409–430 (2000)
11. Murphy, R.: Human–robot interaction in rescue robotics. IEEE Trans. Syst. Man Cybern. 34(2),
138–153 (2004)
12. Endsley, M.R.: Design and evaluation for situation awareness enhancements. In: Proceedings of
the Human Factors Society 32nd Annual Meeting. Santa Monica, CA (1988)
13. Izzetoglu, K.: Neural correlates of cognitive workload and anesthetic depth: fNIR spectroscopy
investigation in humans. In: Biomedical Engineering. Drexel University (2008)
14. Dinges, D.F., Powell, J.W.: Microcomputer analyses of performance on a portable, simple
visual RT task during sustained operations. Behav. Res. Meth. Instrum. Comput. 17, 652–655
(1985)
15. Oldfield, R.C.: The assessment and analysis of handedness: the Edinburgh inventory. Neuropsy-
chologia 9(1), 97–113 (1971)
16. Izzetoglu, M., et al.: Functional brain imaging using near-infrared technology. IEEE Eng. Med.
Biol. Mag. 26(4), 38–46 (2007)
17. Izzetoglu, K., et al.: Functional optical brain imaging using near-infrared during cognitive tasks.
Int. J. Hum.-Comput. Interact. 17(2), 211–231 (2004)
18. Ayaz, H., et al.: Registering fNIR data to brain surface image using MRI templates. Conf. Proc.
IEEE Eng. Med. Biol. Soc. 1, 2671–2674 (2006)
19. Green, W.E., Oh, P.Y.: An aerial robot prototype for situational awareness in closed quarters.
In: IEEE International Conference on Intelligent Robots and Systems. Las Vegas, NV (2003)
20. Fong, T., Thorpe, C.: Vehicle teleoperation interfaces. Auton. Robots 11(1), 9–18 (2001)
21. Chen, J.Y.C., Haas, E.C., Barnes, M.J.: Human performance issues and user interface design
for teleoperated robots. IEEE Trans. Syst. Man Cybern., Part C Appl. Rev. 37(6), 1231–1245
(2007)
22. Van Erp, J.B.F.: Controlling unmanned vehicles: the human factors solution. In: RTO Meeting
Proceedings 44 (RTO-MP-44) B8.1–B8.12 (2000)
23. Glumm, M., Kilduff, P., Masley, A.: A Study on the Effects of Lens Focal Length on Remote
Driver Performance. Tech. Rep. ARL-TR-25. Army Research Laboratory (1992)
24. Kim, W.S., Hannaford, B., Fejczy, A.K.: Force-reflection and shared compliant control in oper-
ating telemanipulators with time delay. IEEE Trans. Robot. Autom. 8(2), 176–185 (1992)
25. Hing, J.T., Oh, P.Y.: Development of an unmanned aerial vehicle piloting system with integrated
motion cueing for training and pilot evaluation. J. Intell. Robot. Syst. 54, 3–19 (2009)
26. Izzetoglu, K., et al.: Functional optical brain imaging using near-infrared during cognitive tasks.
Int. J. Hum.-Comput. Interact. 17(2), 211–227 (2004)
27. Ayaz, H., et al.: Assessment of Cognitive Neural Correlates for a Functional Near Infrared-Based
Brain Computer Interface System. Foundations of Augmented Cognition. Neuroergonomics and
Operational Neuroscience, pp. 699–708 (2009)
28. Kelly, A., Garavan, H.: Human functional neuroimaging of brain changes associated with prac-
tice. Cerebral Cortex 15, 1089–1102 (2005)
29. Milton, J., Small, S., Solodkin, A.: On the road to automatic: dynamic aspects in the development
of expertise. Clin. Neurophysiol. 21, 134–143 (2004)
30. Beauchamp, M.H., et al.: Dynamic functional changes associated with cognitive skill learning of
an adapted version of the Tower of London task. NeuroImage 20, 1649–1660 (2003)
31. Drury, J.L., et al.: Comparing situation awareness for two unmanned aerial vehicle human
interface approaches. In: Proceedings of the SSRR 2006 Conference, Gaithersburg, MD (2006)
32. Drury, J.L., Scott, S.D.: Awareness in unmanned aerial vehicle operations. The International C2
Journal 2(1), 1–10 (2008)
33. Cooper, J., Goodrich, M.A.: Towards combining UAV and sensor operator roles in UAV-
enabled visual search. Proceedings of ACM/IEEE International Conference on Human-Robot
Interaction, pp. 351–358. Amsterdam, The Netherlands (2008)
34. Ayaz, H., et al.: Cognitive workload assessment of air traffic controllers using optical brain imag-
ing sensors. In: Marek, T., Karwowski, W., Rice, V. (eds.) Advances in Understanding Human
Performance: Neuroergonomics, Human Factors Design, and Special Populations, pp. 21–31.
CRC Press Taylor & Francis Group (2010)
... The intricate cognitive load within dynamic UAS environments emerges from a synthesis of information processing, intricate decision-making, and precise motor coordination [3]. Cognitive load theory, a foundation of this exploration, delineates intrinsic, extraneous, and germane loads shaping the cognitive landscape [4]. Within UAS operations, intrinsic load encapsulates the inherent complexity of tasks such as simultaneously interpreting sensor data streams, navigating complex airspace regulations, and anticipating rapidly changing environmental dynamics [5]. ...
... Cracknell AP [32] suggested that legislation on drone activities must be enacted as soon as possible to ensure that the lives and property of residents are not damaged by the massive use of drones. Menda et al. [33] argued that the operators of large drones for all types of industry must receive strict training and education and be informed of the relevant laws to avoid legal disputes and safety accidents arising from the work of professional drone operators. Khan et al. [34] analyzed the acceptance of drone delivery in Pakistan and found that residents of developing countries are concerned about the exposure of personal information in drone delivery, and the team called for the issue of privacy exposure to be effectively addressed in future drone operations. ...
Article
Full-text available
The usage of drone delivery couriers has multiple benefits over conventional methods, and it is expected to play a big role in the development of urban intelligent logistics. Many courier companies are currently attempting to deliver express delivery using drones in the hopes that this new type of tool used for delivery tasks will become the norm as soon as possible. However, most urban residents are currently unwilling to accept the use of drones to deliver express delivery as normal. This study aims to find out the reasons for the low acceptance of the normalization of drone delivery by urban residents and formulate a more reasonable management plan for drone delivery so that the normalization of drone delivery can be realized as soon as possible. A research questionnaire was scientifically formulated which received effective feedback from 231 urban residents in Jinjiang District, Chengdu City. A binary logistic model was used to determine the factors that can significantly influence the acceptance of residents. In addition, the fuzzy interpretive structural model(Fuzzy-ISM) was used to find out the logical relationship between the subfactors inherent to these influencing factors. It was concluded that when the infrastructure is adequate, increasing public awareness and education, enhancing the emergency plan, lowering delivery costs, enhancing delivery efficiency and network coverage, and bolstering the level of safety management can significantly raise resident acceptance of unmanned aerial vehicle(UAV) delivery. 
Given the positional characteristics of the subfactors in the interpretive structural model(ISM) and matrices impacts croises-multiplication appliance classemen(MICMAC) in this study, we should first make sure that the drone delivery activities can be carried out in a safe and sustainable environment with all the necessary equipment, instead of focusing on increasing the residents’ acceptance right away, in the future work of regularized drone urban delivery has not yet started the construction phase. There should be more effort put into building the links that will enable acceptance to be improved with higher efficiency, which will be helpful to the early realization of the normalization of drone urban delivery if there is already a certain construction foundation in the case where the drone delivery environment is up to standard and hardware conditions are abundant.
... Adding multiple types of feedback modalities can also help reduce the cognitive workload of the operator and improve their situational awareness (Menda et al., 2011). Other works have also succeeded in improving task efficiency and general communication between the swarm and the operator by using augmented reality (Walker et al., 2018) or blinking lights (May et al., 2015). ...
Article
Full-text available
Many people are fascinated by biological swarms, but understanding the behavior and inherent task objectives of a bird flock or ant colony requires training. Whereas several swarm intelligence works focus on mimicking natural swarm behaviors, we argue that this may not be the most intuitive approach to facilitate communication with the operators. Instead, we focus on the legibility of swarm expressive motions to communicate mission-specific messages to the operator. To do so, we leverage swarm intelligence algorithms on chain formation for resilient exploration and mapping combined with acyclic graph formation (AGF) into a novel swarm-oriented programming strategy. We then explore how expressive motions of robot swarms could be designed and test the legibility of nine different expressive motions in an online user study with 98 participants. We found several differences between the motions in communicating messages to the users. These findings represent a promising starting point for the design of legible expressive motions for implementation in decentralized robot swarms.
... Our lab has been utilizing fNIRS to assess the aforementioned phenomenon in UAS operators and air traffic controllers [7][8][9][10][11][12][13]. These studies have indicated considerable activation in the right medial and left dorsolateral prefrontal cortices in both novice and expert operators as they engaged in sustained attention and working memory tasks. ...
Conference Paper
Full-text available
This study aims to utilize a non-invasive and portable neuroimaging modality – functional near infrared spectroscopy (fNIRS) to investigate training and transfer of skills in unmanned aerial system (UAS) sensor operators (SOs). To achieve this objective, we recruited 13 novice participants and exposed them to three similar training sessions followed by a testing session on a UAS simulator. The training sessions occurred at 11AM (high visibility), while the testing session occurred at 6AM or 8PM (low visibility). Regardless of the session, the participants were asked to scan pre-defined areas to best of their abilities and identify targets (red bus). Behavioral results from training sessions indicated that some participants improved their scan performance, while others did not. No significant changes in target find performance were observed within and between groups. Associated average oxyhemoglobin (HbO) changes significantly decreased in right prefrontal cortex (PFC) regions for high performers and in left PFC regions for low performers. During the transfer task, scan performance was maintained by both groups, while average HbO significantly increased in left dorsolateral PFC of high performers and left and right anterior medial PFC of low performers. In conclusion, we demonstrated intraindividual differences in expertise development during multi-session training.
... The basic prototype interface design was referenced from a commercially available UAV ground control station (GCS) software, the Mission Planner (MP) by the ArduPilot Development Team. The Mission Planner interface was used as a reference because it includes most UAV interface components described in the current interface research [29][30][31][32][33][34][35][36]. Other commercial UAV interfaces typically use much simpler interface designs and focus on the camera view. ...
Article
Full-text available
A basic notion of transparency in automated systems design is the need to support user tracking and understanding of system states. Many usability principles for complex systems design implicitly target the concept of transparency. In this study, we made comparison of a “baseline” control interface mimicking an existing available UAV ground control station with an “enhanced” interface designed with improved functional transparency and usability, and a “degraded” interface which removed important design features. Each participant was extensively trained in the use of one of the interfaces and all simulated UAV control tasks. Each participant was tested in four trials of a typical military concept of UAV operation with different mission maps and vehicle speeds. Results revealed participants using the enhanced interface to produce significantly faster task completion times and greater accuracy across all UAV control tasks. The enhanced features were also found to promote operator understanding of the system and mitigate workload. By defining and setting automation transparency as an overarching design objective and identifying specific transparency and usability issues within existing GCS designs, we weress able to design and prototype an enhanced interface that more effectively supported human-automation interaction. Automation transparency as a high-level design objective may be useful for expert designers; whereas, usability design guidelines, as “building blocks” to transparency, may be a useful tool for new system designers.
... Robots Operadores Interfaz (Menda et al., 2011) 1 RA (Sim) 1 Inmersiva (Haas et al., 2011) 40 RA (Sim) 1 Multimodal (Kolling et al., 2012) 200 RT (Sim) 1 Convencional (Flushing et al., 2012) 250 RA (Sim) N Adaptativa (Cummings et al., 2013) 4 RA (Sim) 1 Convencional (Frische et al., 2013) 3 RA (Sim) 1 Adaptativa (Fuchs et al., 2014) 4 RA (Sim) 1 Convencional (Martins et al., 2015) 1 RT (Real) 1 Inmersiva (Ruiz et al., 2015) 3 RA (Sim) 1 Inmersiva (García et al., 2015) 1 RS (Sim) 1 Multimodal + Inmersiva (Peppoloni et al., 2015) 1 RM (Real) 1 Multimodal + Inmersiva (Hagiwara, 2015) 1 RT-Man (Sim) 1 Multimodal + Inmersiva (Soares et al., 2015) 1 RT-Man (Real) 1 Inmersiva (Recchiuto et al., 2016) 10 RA (Sim) 1 Inmersiva (Moore et al., 2016) 2 RT y 1 RA (Real) 1 Convencional (Yew et al., 2017) 1 RM (Real) 1 Inmersiva (Ruano et al., 2017) 1 RA (Sim) 1 Inmersiva (Almeida et al., 2017) 1 RT (Real) 1 Multimodal + Inmersiva 2 RA (Real) 1 Mult. + Inmer. ...
Article
Full-text available
p class="icsmabstract">Los sistemas multi-robot están experimentando un gran desarrollo en los últimos tiempos, ya que mejoran el rendimiento de las misiones actuales y permiten realizar nuevos tipos de misiones. Este artículo analiza el estado del arte de los sistemas multi-robot, abordando un conjunto de temas relevantes: misiones, flotas, operadores, interacción humano-sistema e interfaces. La revisión se centra en los retos relacionados con factores humanos como la carga de trabajo o la conciencia de la situación, así como en las propuestas de interfaces adaptativas e inmersivas para solucionarlos.</p
Article
Unmanned Aerial Vehicle (UAV) control interfaces are critical channels for transferring information between the vehicle and an operator. Research on system performance has focused on enhancing vehicle automation and some work has evaluated cognitive workload for existing UAV interfaces. The potential for usable interface design to reduce cognitive workload during the early design phase has been largely overlooked. This study addresses these gaps by: (1) evaluating the effectiveness of a contemporary UAV interface design tool (the Modified GEDIS-UAV) to moderate user workload; (2) examining the effectiveness of various UAV interface designs for minimizing cognitive workload under different control task pacing; and (3) exploring the use of eye tracking measures, traditionally applied in other domains, as indicators of cognitive workload in UAV operations. We prototyped three different interface designs, classified as “baseline”, “enhanced” and “degraded” interfaces. Cognitive workload in UAV operation was manipulated in terms of levels of vehicle speed (“low” and “high”). Physiological and subjective measures of workload were collected for all combinations of interface design and task demand. Results revealed the “enhanced” interface to yield the lowest operator cognitive workload and supported operator resilience to increased control task demand, as compared to the “baseline” and “degraded” interfaces. In addition, task demand was found to elevate operator cognitive workload, particularly in terms of "mental" and "temporal" demands and operator perceptions of "performance". The study also demonstrated utility of eye-tracking technology for detecting cognitive workload in UAV operations. This research provides practical guidance for UAV control interface design to manage operator workload. The methods employed in the study are applicable to interface evaluation for various types of UAVs and other unmanned systems to enhance human-automation interaction.
Article
Full-text available
Over the past decade, unmanned aerial vehicles (UAVs) have received a significant attention due to their diverse capabilities for non-combatant and military applications. The primary aim of this study is to unveil a clear categorization overview for more than a decade worth of substantial progress in UAVs. The paper will begin with a general overview of the advancements, followed by an up-to-date explanation of the different mechanical structures and technical elements that have been included. The paper will then explore and examine various vertical take-off and landing (VTOL) configurations, followed by expressing the dynamics, applicable simulation tools and control strategies for a Quadrotor. In conclusion to this review, the dynamic system presented will always face limitations such as internal and/or external disturbances. Hence, this can be minimised by the choice of introducing appropriate control techniques or mechanical enhancements.
Article
Full-text available
With the rapid rise in unmanned aerial vehicles (UAVs) for military and civil first-person applications like infrastructure inspection, there is an increased need for skilled UAV operators. However, research on effective training of UAV pilots has not kept pace with the demand. How much autonomy should be onboard, how much training is needed, and how much control humans should have are still points of debate. To help fill this gap, this paper examines how different training programs and levels of control autonomy affect training outcomes for people operating a UAV in inspection tasks with high onboard autonomy. Results revealed a cost-benefit trade space: top performers with both lower-level teleoperation and higher-level supervisory control training could achieve the best performance, but with higher variability, as compared to those who received only supervisory control training. Another important finding was that trainees who were overconfident were more likely to spend too much time micro-controlling the UAV, and also 15 times more likely to crash. Given that commercial UAV licensing is expected to increase significantly in the next few years, these results suggest more work is needed to determine how to mitigate overconfidence bias through both training and design.
Article
Full-text available
Despite the name Unmanned Aerial Vehicle (UAV), humans are integral to UAV operations. Since the UAV's operator interface is the primary facilitator of human-vehicle communication and coordination, a carefully designed interface is critical for successful UAV operations. To design an effective interface, it is essential to first determine the information needs for both the human and UAV components of the UAV system. We present the Human-UAV Awareness Framework, which we developed to inform UAV system design by detailing what information components should be provided to the human through the operator interface and to the vehicles as part of their onboard systems. Since there are a variety of UAV system designs, including a number of different possible human-UAV control schemes, the paper outlines the particular types of information that would be needed for two possible UAV system contexts: a base case, which assumes one human controller and one UAV, and a general case, which assumes n human controllers and m UAVs. The paper discusses several practical considerations involved in applying the framework to UAV system design, including the level of automation of the UAVs, potential human-UAV control schemes, humans' roles, and interaction with UAV stakeholders.
Article
Full-text available
Fifty-four licensed pilots carried out multiple surveillance missions on two high-fidelity simulations representing unmanned aerial vehicles (UAVs). In Experiment 1, pilots were required to operate a single UAV through three different mission conditions: a baseline condition, one that offloaded relevant information to the auditory channel, and one that provided automation of flight path control. In Experiment 2, pilots operated two UAVs simultaneously through the same three mission conditions. Pilots were responsible for the following tasks: (1) mission completion, (2) target search, and (3) systems monitoring. Results of the experiment suggest that automation and auditory offloading can be beneficial to performance by reducing interference between tasks, and thus alleviating overall workload.
Article
Human-machine performance is analyzed in a dynamic control task under varying, human-centered levels of automation (LOA) related to operator and technological capabilities. Various monitoring, generating, and processing functions are allocated between a human operator and a computer system to establish an LOA taxonomy. Five function allocation schemes are examined for assessing telerobot system performance and operator situation awareness (SA) using the Situation Awareness Global Assessment Technique. Automation failures are attributed to simulated system deficiencies necessitating operator detection and correction.
Article
DoD accidents are classified according to the severity of injury, occupational illness, and vehicle and/or property damage costs (Department of Defense, 2000). All branches of the military have similar accident classification schemes, with Class A being the most severe. Table 1 shows the accident classes for the Army. The Air Force and Navy definitions of Class A–C accidents are very similar to the Army's definition. However, they do not have a Class D. As the total costs of some Army UAVs are below the Class A criteria ($325,000 per Shadow aircraft; Schaefer, 2003), reviewers have begun to add Class D data into their analyses (Manning, Rash, LeDuc, Noback, & McKeon, 2004; Williams, 2004).
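The cost-based classification described above can be sketched as a simple threshold lookup. The dollar thresholds below are illustrative assumptions (the official DoD criteria also account for injury severity and have changed over time); only the roughly $325,000 Shadow airframe figure comes from the text:

```python
# Illustrative accident-class thresholds (minimum damage cost in USD).
# These numbers are assumptions for the sketch, not official criteria.
CLASS_THRESHOLDS = [
    ("Class A", 1_000_000),
    ("Class B", 200_000),
    ("Class C", 20_000),
    ("Class D", 2_000),
]

def classify_accident(damage_cost_usd):
    """Return the most severe class whose cost threshold is met."""
    for label, minimum in CLASS_THRESHOLDS:
        if damage_cost_usd >= minimum:
            return label
    return "Below reporting threshold"

# Under these assumed thresholds, a ~$325,000 Shadow loss falls below
# Class A, which is why reviewers add lower-class data to their analyses.
print(classify_accident(325_000))  # -> Class B
```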
Article
For the period FY95–FY03, each Unmanned Aerial Vehicle (UAV) accident was reviewed and classified by a series of characteristics using two approaches. The first was a variant of a methodology referred to as the Human Factors Analysis and Classification System (HFACS). HFACS captures data for four levels of human-related failure: unsafe acts, preconditions for unsafe acts, unsafe supervision, and organizational influences. The second analysis approach was based on the accident methodology defined in Department of the Army Pamphlet 385-40, "Army accident investigation and reporting." Human causal factors are identified during this analysis and broken down into five types of failure: individual failure, leader failure, training failure, support failure, and standards failure. Where the assignment of cause included human error, the accident data, including narrative and findings, were analyzed to identify specific human causal factors (e.g., high/low workload, fatigue, poor crew coordination). No single human causal factor was responsible for all accidents. However, both methods of analysis identified individual unsafe acts or failures as the most common human-related causal factor category (present in approximately 61 percent of the 18 human-error-related accidents).
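The four HFACS failure levels named above can be tallied across a set of accident records with a short helper. The level names come from the abstract; the function name and sample records are invented for illustration:

```python
# The four HFACS human-related failure levels, as listed in the abstract.
HFACS_LEVELS = (
    "unsafe acts",
    "preconditions for unsafe acts",
    "unsafe supervision",
    "organizational influences",
)

def tally_causal_factors(records):
    """Count how often each HFACS level appears across accident records,
    where each record is the set of levels implicated in one accident."""
    counts = {level: 0 for level in HFACS_LEVELS}
    for factors in records:
        for level in factors:
            if level in counts:
                counts[level] += 1
    return counts

# Invented sample data: three accidents with their implicated levels.
sample = [
    {"unsafe acts"},
    {"unsafe acts", "unsafe supervision"},
    {"preconditions for unsafe acts"},
]
print(tally_causal_factors(sample))
```

A tally like this is how a reviewer would arrive at a statement such as "unsafe acts were present in approximately 61 percent of the human-error-related accidents".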
Article
The effects of three lens focal lengths on remote driving performance were measured. The three focal lengths and their corresponding horizontal fields of view (FOVs) were 12 mm (29 deg), 6 mm (55 deg), and 3.5 mm (94 deg). On-board driving performance (direct view) was also measured. The study was conducted on an indoor test course consisting of six segments: straightaways, right-hand turns, left-hand turns, serpentine, figure 8, and obstacle avoidance. The findings indicate that for the first five segments of the course, driving speed and accuracy were significantly greater (p < .05) with the 6-mm lens than with either the 12-mm or the 3.5-mm lens. In the last course segment (obstacle avoidance), speed and accuracy were significantly less (p < .05) with the 12-mm lens than with either the 6-mm or the 3.5-mm lens. Differences between the latter two lenses in speed and accuracy were not statistically significant. For the first five segments of the course, significantly greater speed and accuracy (p < .05) were achieved during on-board operations than during operations in the remote mode using the 6-mm lens. In the obstacle avoidance segment, speed was also significantly greater in the on-board mode (p < .05), but there was no significant difference in speed between the 6-mm and the 3.5-mm lens. In this analysis, the 6-mm lens was found to be less accurate (p < .05) than the 3.5-mm lens or the on-board driving mode, but the significance of this difference was considered marginal. No significant difference in accuracy was found between on-board driving and remote operations using the 3.5-mm lens. Keywords: field of view, remote control, teleoperation, focal length, remote driving, unmanned ground vehicle, indirect vision, robotics.
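The reported focal-length/FOV pairs roughly follow the standard pinhole-camera relation. The sketch below is illustrative: the sensor width is an assumed fitting value (not given in the study), chosen so the 12-mm and 6-mm results match the reported 29 and 55 deg; the 3.5-mm wide-angle lens deviates from this simple model, as real wide-angle optics distort:

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=6.2):
    """Pinhole-camera relation: FOV = 2 * atan(w / (2 f)).
    The default sensor width of 6.2 mm is an assumed fitting value."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f_mm in (12.0, 6.0, 3.5):
    print(f"{f_mm:>4.1f} mm lens -> {horizontal_fov_deg(f_mm):.0f} deg horizontal FOV")
```

This relation makes the trade-off in the study concrete: halving the focal length roughly doubles the field of view, widening coverage at the cost of angular resolution.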
Article
Recent developments and experiences have proven the usefulness and potential of Unmanned Vehicles (UVs). Emerging technologies enable new missions, broadening the applicability of UVs from simple remote spies towards unmanned combat vehicles carrying lethal weapons. However, despite the emerging technology, unmanned does not imply that no operator is involved. Humans still excel in certain tasks, e.g., tasks requiring high flexibility or tasks that involve pattern perception and decision making. An important subsystem in which the technology-driven aspects and the human-factors-driven aspects of UVs meet is the data-link between the remote vehicle and the operator. The human factors engineer wants to optimize operator performance, which may require a data-link with an extremely large capacity, while other design criteria typically limit the bandwidth (e.g., to lower costs, or because no more bandwidth is available in certain situations). This field of tension is the subject of the present paper. The paper describes two human factors approaches that may help resolve this tension. The first approach is to reduce data-link requirements (without affecting operator performance) by presenting task-critical information only; omitting information that the operator does not need to perform the task frees capacity. The second approach is to optimize performance by developing advanced interface designs that present task-critical information without additional claims on the data-link. An example of each approach is given.