Surface Telerobotics: Development and Testing of a
Crew Controlled Planetary Rover System
Maria G. Bualat1, Terrence Fong, Mark Allan, Xavier Bouyssounouse, Tamar Cohen,
Lorenzo Flückiger, Ravi Gogna, Linda Kobayashi, Yeon Jin Lee, Susan Y. Lee,
Chris Provencher, Ernest Smith, Vinh To, Hans Utz, and DW Wheeler
NASA Ames Research Center, Moffett Field, CA 94035
Estrellina Pacis
Space and Naval Warfare Systems Center, San Diego, CA 92152
Debra Schreckenghost and Tod Milam
TRACLabs, Inc., Webster, TX 77598
and
David Mittman and R. Jay Torres
Jet Propulsion Laboratory, Pasadena, CA 91109
During Summer 2013, we conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover. The tests simulated portions of a proposed mission, in which an astronaut in lunar orbit remotely operates a planetary rover to deploy a radio telescope on the lunar farside. In this paper, we present the design, implementation, and preliminary test results.
I. Introduction
In planning for future exploration missions, architecture and study teams have made numerous assumptions about how crew can be telepresent on a planetary surface by remotely operating surface robots from space (i.e., from a flight vehicle or deep space habitat).1,2,3,4,5 These assumptions include estimates of technology maturity, existing technology gaps, and operational risks. These assumptions, however, have not been grounded by experimental data. Moreover, to date, no crew-controlled surface telerobot has been fully tested in a high-fidelity manner.
To address these issues, we developed the “Surface Telerobotics” tests to do three things:
1) Demonstrate interactive crew control of a mobile surface telerobot in the presence of short communications delay.
2) Characterize a concept of operations for a single astronaut remotely operating a planetary rover with limited support from ground control.
3) Characterize system utilization and operator workload for a single astronaut remotely operating a planetary rover with limited support from ground control.
II. Proposed Lunar Waypoint Mission
Surface Telerobotics focused on simulating a possible future human-robotic “Lunar Waypoint” mission. Exploration of the farside of the Moon is currently seen as a possible early goal for missions beyond low Earth orbit using the Orion Multi-Purpose Crew Vehicle (MPCV).
One leading concept, the “Orion MPCV L2-Farside” mission, proposes to send a crewed MPCV to the L2 Earth-Moon Lagrange point, where the combined gravity of the Earth and Moon allows a spacecraft to easily maintain a stationary orbit over the lunar farside.6 From L2, an astronaut would remotely operate a robot to perform high-priority surface science work, such as deploying a polyimide film-based radio telescope.
1 Deputy Director, Intelligent Robotics Group, NASA Ames Research Center, M/S 269-3, Moffett Field, CA.
Figure 1. Lunar analog test site at the NASA Ames Research Center.
Observations of the Universe’s first stars/galaxies at low radio frequencies is a key science objective of the 2010 Astronomy and Astrophysics Decadal Survey.7 Such a mission would also help prepare for subsequent deep-space human exploration missions. For example, a similar strategy might be employed by humans to explore the surface of Mars from orbit.5
To study this human-robot exploration approach, Surface Telerobotics simulated four phases of the Orion MPCV
L2-Farside mission concept: pre-mission planning, site survey, simulated telescope deployment, and inspection of
deployed telescope. We performed these four phases in sequence. After pre-mission planning, we performed the
other three phases during three test sessions with ISS crew. Each crew session included an hour of on-board crew
training for the robot user interface, and two hours of mission operations.
A. Pre-Mission Planning
We performed the pre-mission planning phase in
Spring 2013. A mission planning team at NASA
Ames Research Center (ARC) and the University of
Colorado/Boulder used satellite imagery of a lunar
analog test site (Figure 1) at a resolution comparable
to what is currently available for the Moon
(approximately 0.5 m/pixel) and a derived digital
elevation map to select a nominal site for the telescope
deployment. In addition, the planning team created a
set of rover task sequences to scout and survey the
site, looking for potential hazards and obstacles to
deployment.
B. Phase 1: Site survey
On June 17, 2013, Chris Cassidy (Figure 2, top),
an astronaut on the ISS, remotely operated the NASA
Ames K10 planetary rover for two hours to survey the
test site from surface level. The survey data collected with K10 enabled identification of surface characteristics, such as terrain obstacles, slopes, and undulations, that are either below the resolution of orbital instruments or ambiguous due to their nadir-pointing orientation. The mission planning team then analyzed the data and developed the final rover task sequences for telescope deployment.
C. Phase 2: Payload deployment
During the second test session on July 26, 2013,
European Space Agency Astronaut Luca Parmitano
(Figure 2, middle) remotely operated K10 for just over
two hours to deploy three polyimide film-based
antenna arms of a simulated telescope array.
Parmitano first executed each deployment task sequence with the deployment device disabled, to verify that the sequence was feasible. A high-resolution, downward-pointing camera focused on the film documented the deployment and enabled the astronaut to observe it. Then, during actual deployment, Parmitano monitored both rover driving and antenna arm deployment. After the session finished, the mission planning team reviewed deployment imagery and developed a set of rover inspection plans.
D. Phase 3: Payload inspection
During the final test session on August 20, 2013, NASA Astronaut Karen Nyberg (Figure 2, bottom) remotely
operated K10 to perform detailed visual inspection of the deployed telescope. The primary objective for this phase
was to obtain oblique, high-resolution camera views to document the deployed polyimide film antenna arms. A secondary objective was to search for possible flaws (e.g., folds and tears) in the material. Based on the inspection data, the mission planning team was then able to determine whether it would be necessary to repair or replace sections of the telescope array.
Figure 2. Astronauts Chris Cassidy (top), Luca Parmitano (middle), and Karen Nyberg (bottom) remotely operate the K10 rover from the ISS.
III. System Description
A. K10 Planetary Rover
The NASA Ames K10 planetary rover is shown in
Figure 3. K10 has four-wheel drive, all-wheel
steering and a passive averaging suspension. The
suspension design helps balance wheel/soil forces and
reduces the transmission of motion induced by travel
over uneven ground. K10 is capable of fully
autonomous operation on moderately rough natural
terrain at human walking speeds (up to 90 cm/s).
K10’s standard sensors include a Novatel
differential GPS system and inertial measurement
unit, a Honeywell digital compass, Point Grey
Research IEEE 1394 stereo cameras, a Velodyne 3D
scanning lidar, an Xsens inertial measurement unit, a
suntracker, and wheel encoders.
K10’s avionics design is based on commercial
components. The robot is powered by twenty-four
hot-swappable Inspired Energy 14.4V, 6.6 AH Li-Ion
smart battery packs. K10’s controller runs on a
Linux-based laptop and communicates via a Tropos
802.11g mesh wireless system.
The K10 controller is based on our Service-Oriented Robotic Architecture (SORA).8 Major services include locomotion, localization, navigation, and instrument control. SORA uses high-performance middleware to connect services. Dependencies between services are resolved at service start. This approach allows us to group services into dynamic libraries that can be loaded and configured at run-time.
B. Science Instruments
To perform survey and inspection, we equipped the K10 rover with a panoramic camera and an inspection camera. Both instruments can provide contextual and targeted high-resolution color imaging of sunlit areas. These instruments are used for both science observations and situation awareness during operations.
The panoramic camera is a consumer-grade, 12 megapixel digital camera (Canon PowerShot G9) on a pan-tilt unit. We operate the camera at 350 µrad/pixel, which is comparable to the Mars Exploration Rover Pancam (280 µrad/pixel). K10’s panoramic camera, however, can be reconfigured for different resolutions by changing zoom. Images are mosaicked in software to create wide-field panoramic views.
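As a quick illustration of what this angular resolution means (the standoff distance here is our own, for intuition only): a pixel’s ground footprint is roughly the range times the instantaneous field of view, so at a 10 m standoff,

$$ s \approx r\,\theta_{\mathrm{IFOV}} = 10\,\mathrm{m} \times 350\times10^{-6}\,\mathrm{rad} = 3.5\,\mathrm{mm\ per\ pixel}. $$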
The inspection camera uses the same camera model as the panoramic camera. However, the inspection camera is attached to K10 with a fixed, rear-pointing mount. The inspection camera is used to observe telescope film deployment as well as to perform inspection.
C. Film Deployer
Together with the University of Idaho, we developed and integrated a rear-mounted polyimide film deployer for the K10 rover (Figure 4). The deployer spools out 60 cm-wide polyimide film, as a proxy for a lunar radio antenna, while the rover traverses planned deployment paths. On-board software controls deployment: starting, stopping, and adjusting the tension on the film. For purposes of these tests, the film does not contain antenna or transmission line traces.
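A minimal sketch of the kind of control step such software might perform (the function names and the proportional-control scheme are assumptions for illustration; the paper does not describe the actual control law):

```python
def regulate_tension(read_tension_n, set_spool_torque, setpoint_n, gain=0.5):
    """One proportional control step toward the film tension setpoint.

    read_tension_n: callable returning measured film tension (N)
    set_spool_torque: callable commanding spool motor torque
    """
    error = setpoint_n - read_tension_n()   # positive -> film slacker than desired
    set_spool_torque(gain * error)          # tighten (or loosen) the spool
    return error

def deploy(drive_step, regulate, n_steps):
    """Alternate driving and tension regulation along the deployment path."""
    for _ in range(n_steps):
        drive_step()   # rover advances along the planned deployment path
        regulate()     # keep film tension near the setpoint while spooling out
```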
D. User Interface
Figure 3. K10 planetary rover “Red” is equipped with a variety of sensors and instruments.
Figure 4. K10 deploys polyimide film to simulate deployment of a polyimide-based lunar radio telescope.
The “Surface Telerobotics Workbench” (Figure 5) is used by ISS crew to remotely operate K10. The Workbench runs on a Space Station Computer and is based on the “Visual Environment for Robotic Virtual Exploration” (VERVE), an interactive 3D user interface for visualizing high-fidelity 3D views of rover state, position, and task sequence status on a terrain map in real time.9 VERVE also provides status displays of rover systems, renders 3D sensor data, and can monitor robot cameras. VERVE runs within the NASA Ensemble framework (based on the Eclipse RCP) and supports a variety of robot middleware, including the NASA Robot Application Programming Interface Delegate (RAPID).10
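For a sense of the data flow, the sketch below shows the kind of rover-state message a VERVE-style display might consume and render (the field names and the `view` object are hypothetical illustrations, not the actual RAPID message schema or VERVE API):

```python
from dataclasses import dataclass

@dataclass
class RoverStateMsg:
    """One telemetry sample of the kind a 3D rover display consumes."""
    timestamp: float   # seconds since epoch
    x: float           # position on the terrain map (m)
    y: float
    heading: float     # radians
    active_task: str   # current task in the sequence
    task_status: str   # e.g. "RUNNING", "COMPLETED", "ABORTED"

def on_rover_state(msg: RoverStateMsg, view):
    """Update a 3D view (hypothetical interface) from one message."""
    view.move_rover_marker(msg.x, msg.y, msg.heading)
    view.set_task_banner(f"{msg.active_task}: {msg.task_status}")
```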
E. Communications
Figure 6 shows a simplified communications diagram for the Surface Telerobotics tests. Voice communications were carried over standard ISS and Payload Operations voice loops. The Rover Operations Lead at ARC communicates with the Payload Operations Director (POD) and Payload Communications Manager (PAYCOM) over the POD loop. Information for crew is relayed through PAYCOM over Space-to-Ground. Internal communications among the Rover Team Operations, Science, Engineering, Logistics, Proxy, and Plug-in Port Utilization Officer (PLUTO) Support occurs over the SS Coord voice loop.
Rover telemetry and commanding between the K10 rover and a laptop on the ISS is carried through a secure network data connection between the rover and the ISS Mission Control Center (MCC), which uses a proxy server machine on the MCC network. From there, traffic between the ISS laptop and the proxy server uses standard ISS Ku-band data communications through TDRSS (Tracking and Data Relay Satellite System).
Figure 6. Voice and telemetry communications links. Voice loops are shown in blue; telemetry (data) channels in green.
Figure 5. The “Visual Environment for Robotic Virtual Exploration” (VERVE) is an interactive 3D user interface for robot operations.
IV. Assessment Approach
A. Characterize the Concept of Operations
A key objective of the Surface Telerobotics tests was to characterize a concept of operations (or “conops”) for a single crew member supervising a remote planetary rover with limited support from ground control on Earth. The concept of operations involves the following:
The primary operations mode is supervisory control: task sequencing with interactive monitoring.
The secondary operations mode is manual control (teleoperation): discrete commanding (relative position motions, direct instrument commands).
Ground control handles major contingencies and strategic planning.
The crew is responsible for tactical execution and modifications (minor deviations from the strategic plan) to handle minor contingencies and to achieve secondary mission objectives.
By characterizing this conops, we expected to better understand mission design and architectures, including mission timelining, mission duration and tempo, and how to best interleave different mission phases.
1. Background
To characterize the conops, we assessed the crew’s situation awareness (SA) at all three stages of Endsley’s model of SA formation11:
Level 1 SA (Perception): What are the status, attributes, and dynamics of the elements relating to the environment, system, people, etc.?
Level 2 SA (Comprehension): What is the impact of the perceptions?
Level 3 SA (Projection): How are future states affected?
We also considered the five awareness categories (LASSO)12 used in the Urban Search and Rescue domain to help assess the operator’s understanding of the robot. These categories aid in understanding what types of information the crew uses at each level of SA:
Location awareness
Activity awareness
Surroundings awareness
Status awareness
Overall mission awareness
To collect information about the crew’s SA, we employed SAGAT13 questionnaires. We also used the Bedford Workload Scale14 to obtain a subjective assessment of crew workload.
2. SAGAT
SAGAT queries were presented to crew at random times throughout each test session on a secondary laptop. Crew was required to look away from the primary laptop that hosted the rover user interface while answering SAGAT questions.
Development of the SAGAT queries was based on a high-level, goal-directed task analysis to understand what aspects of the situation contribute to the crew’s SA. The analysis was cross-referenced with the LASSO categories to understand what aspects of the robot the crew must understand.
Questions were then formulated based on the decisions we expected crew to make and referenced against the types of information provided to crew at all times through the user interface. The terminology used was the same as that used in the training materials provided to crew. The list of questions was cross-referenced with the SA levels and LASSO categories.
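The sketch below illustrates this cross-referencing in code form (the query wording and field names are invented for illustration; the actual SAGAT instrument is a validated questionnaire, not this script):

```python
import random

# SA levels from Endsley's model and the LASSO categories from the text.
SA_LEVELS = ("perception", "comprehension", "projection")
LASSO = ("location", "activity", "surroundings", "status", "mission")

# Each query is tagged with the SA level and LASSO category it probes.
queries = [
    {"text": "Where is the rover on the map?",
     "sa_level": "perception", "lasso": "location"},
    {"text": "What caused the rover to pause?",
     "sa_level": "comprehension", "lasso": "status"},
    {"text": "Will the current task finish before the next LOS period?",
     "sa_level": "projection", "lasso": "mission"},
]

def next_query(pending):
    """Pop one query at random, mimicking random presentation times."""
    return pending.pop(random.randrange(len(pending)))
```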
B. Characterize the System
Another objective of these tests was to characterize system utilization and performance for a single crewmember supervising a remote planetary rover with limited backup from ground control. We collected a variety of engineering data (rover position, power levels, health, data messages, etc.) to assess what is needed to make such a system work.
Our system is defined by the following:
Robot: ARC K10 rover with the following instruments:
o panoramic color camera system (used for survey and inspection)
o forward-facing monochromatic stereo cameras (used for driving)
o single rear-facing camera (used for antenna film deployment monitoring and inspection)
User interfaces:
o Surface Telerobotics Workbench rover monitoring and task sequence editing tool, for use by both the ground team and crew
Middleware:
o RAPID
1. Metrics
The rover system, including remote supervisory tools, can be characterized by computing metrics for mission success, robot asset utilization, task sequence success, system problems, and robot performance. For this objective, we rely on metrics based on rover telemetry monitoring, supplemented by human observations made during the evaluation. We can compute metrics using logs of rover telemetry recorded during each test session. Data from these logs can also be played back through performance monitoring software that encodes the metrics algorithms and saves metrics values to file.
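A minimal sketch of such a playback pipeline (the log format, field names, and metric interface are assumptions for illustration; the actual performance monitoring software is described in ref. 16):

```python
import json

class DistanceDriven:
    """Example metric: total distance, assuming records carry an odometry delta."""
    name = "distance_driven_m"
    def __init__(self):
        self._total = 0.0
    def update(self, record):
        self._total += record.get("odom_delta_m", 0.0)
    def value(self):
        return self._total

def replay(log_path, metrics):
    """Feed every logged telemetry record to every metric, then report values."""
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)  # assumed: one JSON telemetry sample per line
            for m in metrics:
                m.update(record)
    return {m.name: m.value() for m in metrics}
```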
The Surface Telerobotics sessions simulated operations that include some anomalies for the astronaut to handle.
These anomalies included: (1) obstacles in the rover’s path that required the astronaut to teleoperate the rover around
them, (2) poor quality images that required the astronaut to retake the image, and (3) low power levels that required
the astronaut to abandon nominal rover activities and discuss the issue with ground control. This knowledge of the
planned activities and expected anomalies can be used to define the expected values for the mission “as planned”.
The metrics that characterize the system based on rover telemetry are described below.
2. Mission Success Metrics
Mission Success metrics indicate whether rover task sequences complete successfully and have the intended effect. Examples of questions answered by metrics for mission success include:
What percentage of the task sequences: (a) completed normally, (b) ended abnormally (failed tasks,
aborted tasks, tasks not attempted), or (c) were not attempted?
What percentage of the task sequences was scheduled and what percentage was unscheduled (i.e., in reaction to anomalies)?
Using these metrics we can determine whether task sequences ended normally or not, the percentage of task se-
quences that were scheduled and unscheduled, if scheduled or unscheduled activities ended abnormally, whether all
data were collected, and if the telescope arms were deployed as planned. To identify if certain types of task se-
quences are causing the astronaut more difficulty, abnormal task sequence completion and the number of task se-
quence repeats is tracked by type of task sequence.
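As an illustration, tracking completion status by type of task sequence can be as simple as a tagged counter (the outcome labels below are our own, chosen to match the categories in the questions above):

```python
from collections import Counter

outcomes = Counter()

def record_sequence(seq_type, outcome, scheduled):
    """Tally one task sequence result.
    outcome: "completed", "failed", "aborted", or "not_attempted"."""
    outcomes[(seq_type, outcome, "scheduled" if scheduled else "unscheduled")] += 1

def percent_with_outcome(outcome):
    """Percentage of all recorded task sequences with the given outcome."""
    total = sum(outcomes.values())
    n = sum(v for k, v in outcomes.items() if k[1] == outcome)
    return 100.0 * n / total if total else 0.0
```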
3. Robot Asset Utilization Metrics
Robot Asset Utilization metrics characterize how the robot system was used and whether this utilization contributed to mission success. Examples of questions answered by metrics for robot asset utilization include:
What percentage of the time did the robot spend on different types of tasks (traverse, panoramic imaging, inspection imaging)? How did actual time in task compare to the expected time?
Did the rover drive the expected distance?
Using these metrics, we can determine how rover time was spent during the session, including how much time the rover was waiting or idle. We also can compare how much time was spent doing different types of tasks relative to the expected time on these tasks. Assuming we can make reasonable estimates of time for different tasks, this should help identify types of tasks where problems occurred.
4. Task Success Metrics
Task Success metrics characterize the individual tasks performed by the rover, such as driving to a waypoint or taking an image. Metrics for task success are similar to the mission success metrics, except they are computed for individual tasks in the task sequence instead of for the task sequence as a whole. Example questions answered by metrics for task success include:
What percentage of the tasks: (a) ended normally, (b) ended abnormally, or (c) were not attempted? Show this per session and per task sequence.
What percentage of the tasks that ended abnormally were in scheduled task sequences, and what percentage were in unscheduled task sequences?
Using these metrics, we can determine whether tasks ended normally or not, the percentage of tasks that were scheduled and unscheduled, and whether scheduled or unscheduled tasks ended abnormally. To identify whether certain types of tasks caused the astronaut difficulty, we could track the number of task re-tries and whether retried tasks were successful.
5. System Problem Metrics
System Problem metrics identify what system anomalies were observed and how much time was spent handling them. One observable indicator of a system problem is the detection of a Caution and Warning (C&W) state. For K10, this includes joint errors, subsystem errors, and planner failures. Given that the robot was remotely operated in the controlled environment of the ARC test facility, not many of these errors occurred. Consequently, our approach is to measure the number of C&W states that occur for each session and the time spent handling them, but we do not consider the time spent handling these problems as human intervention (within the simulation). The following rover C&W states are considered:
Emergency stop
Position error
Steering warning and error
Over current
Navigator or locomotor subsystem failure
Panoramic camera subsystem failure
Inspection camera subsystem failure
Mission subsystem failure
For these sessions, unscheduled human intervention occurred in response to contingencies. Contingencies that the astronaut was trained to handle include re-taking an image, moving the rover around an obstacle in the path, and redirecting the rover activities if power levels dropped too low. We can measure the Mean Time To Intervene (MTTI) as the mean time spent executing unscheduled task sequences, and the Mean Time Between Interventions (MTBI) as the mean time between unscheduled task sequences.15 Together, MTTI and MTBI indicate the portion of the session spent by the astronaut handling rover contingencies.
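One reasonable formalization of these definitions (the paper gives the definitions above but not explicit formulas, so the notation here is ours): with $n$ unscheduled task sequences, $t_i$ the time spent executing the $i$-th one, and $g_i$ the elapsed time between the end of one unscheduled sequence and the start of the next,

$$ \mathrm{MTTI} = \frac{1}{n}\sum_{i=1}^{n} t_i, \qquad \mathrm{MTBI} = \frac{1}{n-1}\sum_{i=1}^{n-1} g_i. $$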
6. Robot Performance Metrics
Robot Performance metrics characterize how well the robot performs assigned tasks. We measure the timeliness of robot task performance as the ratio of the actual time to execute a task sequence or task to the expected time to execute it. The usefulness of these measures depends on the ability to obtain good task sequence duration estimates. Metrics for robot performance answer the following questions:
Did the rover take the expected time to execute task sequences? How many task sequences took longer than expected? How much longer?
Did the rover take the expected time to execute tasks? How many tasks took longer than expected? How much longer?
Measures of mission success and task success indicating abnormal performance also can help characterize robot task performance. We could measure counterproductive motion of the robot (i.e., motion away from the target) as an indicator of traverse effectiveness. For these sessions, however, counterproductive motion was minimal, so we do not believe such measures would be particularly diagnostic.
V. Preliminary Results
Our initial analysis focused on characterizing robot utilization based on the K10 rover telemetry that we recorded during the Surface Telerobotics tests. We used a variety of metrics based on earlier work in computing human-robot performance in real time.16 In this section, we describe these performance metrics and report results for Session 1. We expect to perform similar analyses for Sessions 2 and 3 using telemetry recorded from the K10 rover. During Session 1, Astronaut Chris Cassidy completed all the task sequences of Phase 1 and continued on to five of the seven sequences of Phase 2.
Performance metrics are computed by (1) partitioning each phase into meaningful categories of work and rest (called wait periods), (2) detecting events that indicate transitions between these categories, and (3) aggregating the time spent in each category (a sketch of steps 2 and 3 follows the definitions below). The work and wait periods are defined such that only one category applies at any time.
Work Periods
Execute: corresponds to all work done when a planned autonomous rover task is active. The astronaut may perform supervisory tasks in parallel with the rover, depending upon the type of rover task.
Teleops: corresponds to the work done when the astronaut manually tele-operates the rover.
Idle_in_Plan: corresponds to work done by the astronaut in support of the rover’s planned tasks. For example, the rover is paused while the astronaut inspects images taken during antenna deployment.
Questionnaire: corresponds to work done by the astronaut answering questions assessing situation awareness and workload. During this work period the rover is paused.
Wait Periods
Time_before_Start: corresponds to the time period after a task sequence is selected for execution but before the first task in the task sequence is executed.
Wait_between_Plans: corresponds to the time period when the rover has no task sequence to perform.
LOS: corresponds to the time period when all work is paused due to loss of communication signal.
Time_in_Problem: corresponds to time when the rover is paused due to a problem.
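The sketch below illustrates steps (2) and (3), assuming the telemetry has already been reduced to a time-ordered list of (timestamp, category) transition events; this event representation is our own, not the recorded telemetry format:

```python
# Categories from the definitions above; exactly one applies at any time.
CATEGORIES = ("Execute", "Teleops", "Idle_in_Plan", "Questionnaire",
              "Time_before_Start", "Wait_between_Plans", "LOS", "Time_in_Problem")

def aggregate(events):
    """events: time-ordered (timestamp_s, category) transitions; the final
    event should be a session-end marker so the last interval is closed."""
    totals = {c: 0.0 for c in CATEGORIES}
    for (t0, cat), (t1, _next_cat) in zip(events, events[1:]):
        totals[cat] += t1 - t0   # attribute each interval to its category
    return totals

# Example: 120 s executing, then 30 s answering a questionnaire.
print(aggregate([(0.0, "Execute"), (120.0, "Questionnaire"), (150.0, "Execute")]))
```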
All six Phase 1 task sequences were performed with no significant problems. Five panoramas were taken as expected, two of them with pointing contingencies where the image was taken in the wrong direction. In one of the two panorama contingencies, the astronaut re-took the image from the correct direction. 16% of Phase 1 was spent with the astronaut tele-operating the robot out of planned rover trap contingencies.
Five of seven Phase 2 task sequences were completed normally. The sixth task sequence was paused partway through because the USB bus on the rover went down. Because of this problem, Session 1 was ended approximately 10 minutes early, without completing task sequences 2.06 and 2.07. There were no tele-operations during Phase 2 because there were no rover trap contingencies in this phase. Phase 2 had a larger amount of time idle in the task sequence because the rover was paused while the crew reviewed the inspection images during antenna deployment.
The astronaut filled out four questionnaires in each phase. More time was spent answering questions in Phase 1 because problems with the questionnaire form required the astronaut to fill it out differently than originally instructed. Phase 1 had almost no time lost to LOS, while 24% of Phase 2 was spent in LOS. Figure 7 (a) and (b) summarize the work breakout for Phases 1 and 2, respectively.
Figure 7. Work and Wait periods for Phase 1 (a) and Phase 2 (b) of Session 1, respectively.
Productivity Measures: Productivity refers to the time during a phase when the astronaut and rover are performing tasks that contribute toward the mission objectives. For this experiment, the four work periods described earlier are considered productive. Consequently, productive time (PT) for each phase is the sum of all work periods for that phase. Overhead time (OT) refers to the time during a phase when the astronaut and rover are not performing productive work, and is the sum of all wait periods for the phase. %PT is the percentage of the phase in productive time; %OT is the percentage of the phase in overhead time. The ratio of PT to OT is called the Work Efficiency Index (WEI).17 For this analysis, we remove Loss-of-Signal (LOS) time from the time in phase. Table 1 shows the productivity measures for Session 1. In summary, the productivity (%PT) of the human-robot team averaged ~65% for Session 1.
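Spelled out, with LOS time excluded from the phase duration, the measures are

$$ \%PT = 100\,\frac{PT}{PT+OT}, \qquad \mathrm{WEI} = \frac{PT}{OT}. $$

As a consistency check against Table 1: for Phase 1, PT = 0:34:58 = 2098 s and OT = 0:15:03 = 903 s, giving %PT = 100 × 2098/3001 ≈ 69.9 and WEI = 2098/903 ≈ 2.32, matching the tabulated values.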
Distance Traveled Measures: Distance traveled is the total distance driven by the K10 rover during each task sequence, whether remotely driven with supervisory or manual control. In Session 1, the rover performed eleven task sequences, which covered a total distance of 221 m.
Figure 8 shows the distance traveled in each task sequence. The rover covered an average distance of 20 m per task sequence. The longest task sequence covered a distance of 49 m. When K10 was operated using supervisory control, the rover drove at an average speed of 40 cm/s. The average speed over the total duration of Session 1, which lasted 96 min, was approximately 3.8 cm/s.
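The reported averages follow directly from the session totals:

$$ \bar{d} = \frac{221\,\mathrm{m}}{11\ \text{sequences}} \approx 20\,\mathrm{m}, \qquad \bar{v} = \frac{221\,\mathrm{m}}{96 \times 60\,\mathrm{s}} \approx 3.8\,\mathrm{cm/s}. $$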
VI. Conclusion
We developed the Surface Telerobotics tests to study how astronauts in a flight vehicle can remotely operate a surface robot across a short time delay. We carried out three test sessions in Summer 2013 during ISS Increment 36 and collected a wide range of engineering data to improve our understanding of how to: (1) deploy a crew-controlled telerobotics system for performing surface activities and (2) conduct joint human-robot exploration operations.
Three ISS astronauts (Chris Cassidy, Luca Parmitano, and Karen Nyberg) remotely operated the K10 rover for a combined total of approximately 10.5 hr to simulate a proposed future lunar exploration mission. The astronauts used a combination of supervisory control (task sequencing) and teleoperation (discrete commanding) to remotely operate K10 in an outdoor test area at the NASA Ames Research Center. The astronauts monitored the rover interactively, with only minimal (500-750 msec) communications latency and intermittent LOS periods.
Preliminary data analysis suggests that the technologies developed for analog ground simulations of remote supervisory control of a planetary rover remain highly effective when used on-orbit. In addition, it appears that (1) rover autonomy, particularly hazard detection and safeguarding, greatly enhanced operational efficiency and robot utilization; (2) interactive 3-D visualization of robot state and activity reduced operator workload and increased situation awareness; and (3) command sequencing with interactive monitoring was a highly effective strategy for crew-centric surface telerobotics.
We plan to perform detailed data analysis during Fall 2013, with emphasis on characterizing the use and performance of rover software, crew user interfaces, and operations protocols. The results and lessons learned from Surface Telerobotics will be used to further mature technologies required for future deep-space human missions, including robot planning and commanding interfaces, automated summarization and notification systems for situation awareness, on-board robot autonomy software, data messaging, and short time-delay mitigation tools.
Figure 8. Distance traversed by the K10 rover during each task sequence of Session 1.
Table 1. Productivity measures for Session 1.
Productivity   Total Time   PT        OT        %PT     %OT     WEI
Phase 1        0:50:01      0:34:58   0:15:03   69.90   30.10   2.32
Phase 2        0:46:19      0:28:00   0:18:19   60.45   39.55   1.53
Acknowledgments
First and foremost, we would like to thank Jack Burns, Laura Kruger, and the Lunar University Network for Astrophysics Research (LUNAR) for developing the Orion MPCV L2-Farside mission concept and for their support of Surface Telerobotics. We also thank Josh Hopkins, William Pratt, and Chris Norman of Lockheed Martin Corporation for insightful discussions on the Orion MPCV. Sophie Milam and George Korbel of the University of Idaho developed the polyimide film deployer for K10. Industrial design students from the Academy of Art University in San Francisco collaborated to create the Surface Telerobotics Workbench.
We would like to acknowledge the dedication and tireless effort of the crew office (particularly Chris Cassidy, Luca Parmitano, and Karen Nyberg), the JSC Mission Operations Directorate (particularly Mike Halverson), the NASA Lunar Science Institute, the ISS Tech Demonstration office, ISS Avionics and Software, and NASA public affairs (particularly Rachel Hoover, Dave Steitz, and Maria Quon). Brett Beutter, Don Kalar, Josh Hopkins, William Pratt, and Chris Norman all served as simulated crew during operational readiness tests.
We especially thank Bonnie James, Randy Lillard, the NASA Technology Demonstration Missions Program office, Jason Crusan, and Chris Moore for their continued advocacy and support. The NASA Technology Demonstration Missions Program (NASA Space Technology Mission Directorate) provided funding for this work.
This paper is dedicated to the memory of Astronaut Janice Voss, who helped plan Surface Telerobotics and who served as the initial crew office liaison for the project.
References
1 Augustine, N., et al., “Seeking a Human Spaceflight Program Worthy of a Great Nation,” Review of U.S. Human Spaceflight Plans Committee, Doc. No. PREX 23.2:SP 1/2, 2009.
2 Hopkins, J., “Stepping Stones: Exploring Increasingly Challenging Destinations on the Way to Mars,” briefing to the Human Spaceflight Architecture Team, Lockheed Martin.
3 Korsmeyer, D., Landis, R., et al., “A Flexible Path for Human and Robotic Space Exploration,” 2010 AIAA Space Operations Conference, AIAA, Washington, DC, 2010.
4 NASA, “Consolidated Destinations Cycle B Briefing,” Human Spaceflight Architecture Team, July 12, 2011.
5 Nergaard, K., de Frescheville, F. B., et al., “METERON CDF Study Report: CDF-96(A),” European Space Agency, 2009.
6 Burns, J. O., et al., “A Lunar L2-Farside Exploration and Science Mission Concept with the Orion Multi-Purpose Crew Vehicle and a Teleoperated Lander/Rover,” Advances in Space Research, Vol. 52, 2013.
7 Committee for a Decadal Survey of Astronomy and Astrophysics, National Research Council, New Worlds, New Horizons in Astronomy and Astrophysics, The National Academies Press, Washington, DC, 2010.
8 Flückiger, L., and Utz, H., “Field tested service oriented robotic architecture: Case study,” International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS), 2012.
9 Lee, S. Y., et al., “Reusable science tools for analog exploration missions: xGDS Web Tools, VERVE, and Gigapan Voyage,” Acta Astronautica, Vol. 90, No. 2, October 2013, pp. 268-288.
10 Torres, R. J., Allan, M., Hirsh, R., and Wallick, M. N., “RAPID: Collaboration results from three NASA centers in commanding/monitoring lunar assets,” IEEE Aerospace Conference, IEEE, 2009.
11 Endsley, M. R., “Measurement of Situation Awareness in Dynamic Systems,” Human Factors: The Journal of the Human Factors and Ergonomics Society, Vol. 37, No. 1, March 1995, pp. 65-84.
12 Drury, J. L., Keyes, B., and Yanco, H. A., “LASSOing HRI: analyzing situation awareness in map-centric and video-centric interfaces,” 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, 2007.
13 Endsley, M. R., and Garland, D. J., eds., Situation Awareness Analysis and Measurement, CRC Press, Boca Raton, FL, 2000.
14 Roscoe, A. H., and Ellis, G. A., “A Subjective Rating Scale for Assessing Pilot Workload in Flight: A Decade of Practical Use,” Technical Report 90019, Royal Aerospace Establishment, Farnborough, UK, 1990.
15 Arnold, J., “Towards a Framework for Architecting Heterogeneous Teams of Humans and Robots for Space Exploration,” M.S. Thesis, Dept. of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, 2006.
16 Schreckenghost, D., Fong, T., and Milam, T., “Measuring Performance in Real-time during Remote Human-Robot Operations with Adjustable Autonomy,” IEEE Intelligent Systems, Vol. 25, No. 5, Sept./Oct. 2010, pp. 36-45.
17 Gernhardt, M., “Work Efficiency Indices,” Presentation at Johnson Space Center, November 15, 2005.