Characteristics of Indoor Disaster Environments for
Small UASs
Siddharth Agarwal, Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, Email: siddharth@tamu.edu
Robin R. Murphy, Center for Robot-Assisted Search and Rescue, Texas A&M University, College Station, TX 77843, Email: murphy@cse.tamu.edu
Julie A. Adams, Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37067, Email: julie.a.adams@vanderbilt.edu
Abstract—This paper provides a formal analysis of indoor
disaster environments that impact the design of small unmanned
aerial systems (SUASs), their navigational algorithms, and their
sensors. Four characteristics of a region of space: scale, degree
of deconstruction, location of obstacles, and tortuosity are de-
scribed. The analysis compares the value of these characteristics
for Prop 133 at Disaster City® with twelve SUASs that have
flown inside a disaster damaged building or other physical or
computer simulated indoor space; the analysis normalizes the
platform size. Eleven of the twelve systems were tested in more
spacious regions (habitable) than the regions typified by Prop
133 (restricted maneuverability). Only one of the twelve systems
was tested in a deconstructed environment; likewise only one
testbed placed obstacles in equivalent configurations to those in
Prop 133. The tortuosity of the testbeds was at best half of the
tortuosity of Prop 133. The paper concludes that current obstacle
avoidance and simultaneous localization and mapping algorithms
vetted with those testbeds may not perform well in actual disaster
environments.
I. INTRODUCTION
The possibility of using small multi-rotor unmanned aerial
systems (SUASs) for surveying damage inside buildings and
structures is increasing. The successful flights by a University
of Pennsylvania/Tohoku University team [1] in a multi-story
building damaged by the 2011 Great Eastern Japan Earthquake
and the multi-university NIFTi team’s inspection of cathedrals
in Mirandola collapsed by the Finale Emilia Earthquake [2]
demonstrate the potential utility of SUAS for multi-story
buildings and processing facilities, such as Fukushima Daiichi.
However, the inability of SUAS to make progress in indoor
flights inspecting buildings in Biloxi damaged by Hurricane
Katrina in 2005 [3] and the Christchurch Catholic Basilica
damaged by the 2011 Christchurch, New Zealand, Earthquake
[4] act as reminders of remaining challenges.
Flying SUASs indoors is fortunately an active area of
investigation [1], [5], [6], [7], [8], [9], [10], [11], [12], [13],
[14], [15]; however, advances in flying for normal, undam-
aged indoor environments may not be directly transferrable
to disaster response. Kinetic events, such as earthquakes,
tornadoes, hurricanes, industrial accidents, or explosions, often
deconstruct interiors while leaving the building compromised.
A mild earthquake may rearrange office furniture, knock over
bookcases, and cause ceiling fixtures to hang loose, while
leaving the structural elements, such as walls, ceilings, floors,
and pillars intact. A more severe event will have a higher
degree of deconstruction, collapsing walls and ceilings, de-
positing debris, and changing the overall layout of the building.
Indoor SUAS navigational algorithms for obstacle avoidance
and simultaneous localization and mapping (SLAM) depend
upon assumptions about the environment that influence the res-
olution of maps, the choice of sensors and sensor processing,
and the update frequency of control and sensing. Therefore,
having an accurate characterization of deconstructed indoor
environments is essential to developing indoor SUAS that can
fly in realistic disaster conditions.
The approach taken in this paper is to first define a
characterization of indoor deconstructed environments for a
SUAS in terms of scale, degree of deconstruction, severity of
obstacles, and tortuosity [4]. Prop 133 at Disaster City®
is presented as a pictorial example of these characteristics for
an average case. While Prop 133 stages only one possible
scenario, the partial collapse of a multi-story office building,
it is a realistic representation used for training responders
and thus is helpful in visualizing the definitions. The paper
summarizes twelve studies of indoor SUASs and compares
the environmental characteristics explored by these systems,
versus the average disaster case. The comparison produces
conclusions as to remaining gaps in the state of the art in
indoor SUAS for disasters.
A. Definitions
This paper uses the definitions from Disaster Robotics
[4]. While these definitions were originally developed for
unmanned ground vehicles, they can be extended to the
three dimensional environments SUAS operate within. The
operational envelope for a SUAS is defined as a collection
of one or more regions. For example, in a multi-story office
building, a hallway is a distinct region from an office and a
stairway. The navigability of a specific region for a SUAS can
be described in terms of scale and traversability.
The scale of a region reflects the relationship of the size of
the agent A to the size of the environment E [4]. An agent can
be a person or a robot. A large environment, such as a high bay,
provides more space for a SUAS than a narrow hallway. To
quantify this, scale is given as the relative size of the characteristic
dimensions (CD) of the agent and the environment. The Acd is
the largest single dimension affecting SUAS navigation. For
example, a multi-rotor SUAS may have an overall platform
size with a diameter of 0.5m in the horizontal plane with
cameras and payloads protruding 0.2m, and a constraint that
the SUAS is never allowed closer than 0.3m to an obstacle.
Therefore, the maximum dimension is 0.5m + 0.3m = 0.8m;
thus, Acd = 0.8m. Note that the Acd is the equivalent
of reducing a SUAS to a sphere. The Ecd is the nominal
minimum dimension of the environment affecting navigation.
For a hallway, it is the average width, as obstacles intruding
into the hallway are rare. An office may have a smaller Ecd,
where the furniture is arranged to allow a human to walk
through, but with less free space than in a hallway and a lower
ceiling.
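To make this bookkeeping concrete, a minimal Python sketch is given below using the worked numbers above; the function name, arguments, and the handling of the payload protrusion are illustrative assumptions, not taken from the paper.

def agent_characteristic_dimension(platform_diameter_m: float,
                                   payload_protrusion_m: float,
                                   standoff_m: float) -> float:
    """Collapse the SUAS to a sphere: the largest single dimension that
    affects navigation plus the required standoff from obstacles."""
    # Assumption: the governing dimension is the larger of the platform
    # diameter and the payload protrusion; in the worked example in the text
    # the 0.5m platform diameter governs and the 0.2m protrusion does not add to it.
    governing = max(platform_diameter_m, payload_protrusion_m)
    return governing + standoff_m

# Worked example from the text: 0.5m platform, 0.2m protruding payload,
# 0.3m minimum standoff -> Acd = 0.5m + 0.3m = 0.8m
print(agent_characteristic_dimension(0.5, 0.2, 0.3))  # 0.8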
The intrinsic navigability of a region based on scale can
be categorized as one of three indoor regimes in [4]. In
the habitable regime, Ecd > 2Acd, the agent can move
freely through the environment. For a human, this regime
represents “normal” interior spaces designed for people to
work and live in that have not been altered by a kinetic
disaster. For a SUAS with an Acd about the width of a
person, a human habitable space will be the same as a SUAS
habitable space. A SUAS may be deployed into a habitable
environment if there was a chemical, radiological, or biological
incident where human movement was restricted by safety
procedures or personal protection gear, such as the use of
UGVs at the Fukushima Daiichi nuclear emergency. In the
restricted maneuverability regime, Ecd < 2Acd, the agent
can still move in the environment, but that movement is
restricted by the much narrower spaces. The environment may
be naturally small, such as a sewer pipe; however, the more
interesting case for disasters is human habitable environments
that have become deconstructed from normal dimensions. A
partially collapsed building from a kinetic event, such as an
earthquake or explosion, that a responder can walk through,
though perhaps having to bend over or squeeze through, is an
example of a restricted maneuverability regime. Robots for
surface entry into mine disasters or parking garage collapses
function in this regime. In the third indoor regime, the agent
is burrowing into the environment and working at a granular
level, Ecd < Acd. It is not possible for an SUAS to displace
material and create space for itself, so only the habitable and
restricted maneuverability regimes are discussed.
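A minimal sketch of the regime thresholds defined above, using the Prop 133 figures that appear later in Section III as a usage example; the function and its handling of boundary cases are illustrative assumptions.

def scale_regime(e_cd: float, a_cd: float) -> str:
    """Classify the intrinsic navigability of a region from the environment's
    characteristic dimension (Ecd) and the agent's (Acd)."""
    if e_cd > 2 * a_cd:
        return "habitable"                    # agent moves freely
    if e_cd > a_cd:
        return "restricted maneuverability"   # movement possible but constrained
    return "granular"                         # agent would have to displace material

# Prop 133 example (Section III): a 6m room and a 0.6m SUAS look habitable
# on the floor plan, but damaged fixtures reduce the free space to about 1.5*Acd.
print(scale_regime(6.0, 0.6))        # habitable, since 6 > 2(0.6)
print(scale_regime(1.5 * 0.6, 0.6))  # restricted maneuverability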
The traversability of a specific region by a SUAS can be
decomposed into four primary environmental characteristics.
The verticality of the region refers to the nominal nature
of flight. A stairwell region has high verticality, while a
hallway may have a high ceiling, but in a building the SUAS
will be primarily flying horizontally through a region, not
vertically. The degree of deconstruction of an indoor region
reflects the condition of the structural elements, essentially
are the walls and ceilings still orthogonal and in place. The
severity of obstacles captures the number and size of obstacles
that temporarily reduce the Ecd and may require obstacle
avoidance.

Fig. 1: Prop 133 at Disaster City®. a) View of the federal building component and b) the floor plan for the first and second floors (labeled Room 1, Room 2, and Room 3 with ceiling lean-to collapse).

If the environment is essentially a path through
nearly continuous obstacles, the free space between obstacles
becomes Ecd. A normal habitable space will have very few
navigational obstacles, as human spaces are designed for
people to move and work in. A kinetic event may deconstruct
the habitable space by creating debris and hanging obstacles.
The deconstruction and severity of obstacles in turn leads to
the tortuosity of a region. Tortuosity represents the meters
between turns, including changes in altitude, in the region for
navigation; it does not include yawing to provide sensor views.
In addition, the performance of an indoor SUAS will also
be influenced by other secondary components. The lighting
of a region is likely to be variable, which can influence
sensing. The surface properties of the regions refer to the
materials present in the environment. These materials can
impact sensing, for example, acoustic ceiling tiles absorb
ultrasound signals thereby distorting the response.
B. Disaster City®
Since SUASs have been used only four times for surveying
the interior of damaged buildings [4], those data sets are
too limited to project the broad set of regions for indoor
flight. However, Prop 133 at Disaster City® was designed
to represent an average expected state of a damaged multi-
story commercial building. Disaster City® is a complex of
props designed by professional trainers, who are themselves
responders, to accurately represent physical conditions that
urban search and rescue (US&R) teams will experience for a
range of disasters. It is owned by the Texas A&M Engineering
Extension Service and is used to train over 80,000 humans
and canines annually, including Federal Emergency Manage-
ment Agency (FEMA) US&R teams. None of the spaces are
specifically designed for robots. Prop 133, shown in Fig. 1,
is an exemplar of a realistically deconstructed human habitable or
human restricted maneuverability indoor office building and
thus is a projection of what a SUAS will encounter. Portions
of the prop follow the floor plan and room size of a multi-
story government office building, such as the standing portions
of the Alfred P. Murrah Federal Building destroyed at the
Oklahoma City bombing in 1995. The prop consists of six
office-sized rooms on two floors, with four of the rooms
structurally intact and two rooms part of a lean-to collapse.
Fig. 2: Interiors of Prop 133 at Disaster City®. a) Furniture up to heights of 1m to 2.5m scattered around the floor, b) Wires, open ventilators, and metal frames hanging from the ceiling at 2m to 3m, c) Collapsed ceiling and wall, and d) Accumulated debris due to breaking of loose material.
It should be noted that Prop 133 does not have the carpets,
wallpaper, acoustic tiles, or other organic materials normally
found in an office building, as those furnishings would mold in
the outdoors; therefore, Prop 133 may be less challenging for
robotic navigation and sensing than the partial collapse of an
actual office building.
II. RELATED WORK
Twelve systems were identified that have experimented with
indoor SUASs and four systems were intended for application
to search and rescue [6], [12], [1], [9]. The twelve systems
were evaluated in one or more of three testbeds: computer
simulation [12], [13], [14], [15], [11], physical - general indoor
environments [6], [9], [5], [7], [8], [10], [13], [14], [15], or
in a building that had experienced an actual disaster [1]. A
summary of the specifications of the twelve robots, the scale of
the testbed, the type of testbed, the type of regions represented
in the testbed, and the nominal flight altitude is provided in
Table I.
A. Computer Simulation
Five SUASs were evaluated using computer simulated
testbeds. Two of the five simulated environments emulated
habitable scale open spaces. Jongho and Youdan simulated
open spaces containing parallelepipeds with sides of 1m and
3m [11]. Stowers et al. used a large number of blocks with
heights up to 3m [15], while Al Redwan Newaz et al. sim-
ulated an office space containing multiple objects positioned
on the floor with varying heights [12]. Two others simulated
a habitable scale office space. Fossel et al. simulated an office
space containing orthogonal walls, an open space containing
vertical pillars, and a collection of laboratory and office
space with a tilted wall [14]. The fifth system simulated a
restricted maneuverability office space [13]. The environments
represented randomly generated areas ranging in size from (25
x 25 x 3)m to (50 x 50 x 3)m. 20% of the environmental area
contained floor to ceiling walls and randomly placed obstacles,
such as boxes projecting from the floor up to a random height,
and fixed width beams mounted at random heights.
B. Physical - General Indoor Environment
Nine systems were evaluated in a physical - general indoor
testbed, with eight systems flying in habitable scale spaces and
only one in a restricted maneuverability scale space.
Eight of the nine systems were evaluated in habitable scale
environments [6], [9], [5], [7], [8], [10], [14], [15], while the
remaining system was evaluated in a restricted maneuverability
scale environment. Four of the environments are classified as
open space [9], [5], [7], [10]. Masanori et al.’s open space
environment contained three 1m cylinders and a horizontal
cross section that protruded from one cylinder at a height of
0.8m from the ground [5]. Torantani et al. operated in a testbed
with a rectangular obstacle 1m wide and 0.8m high, right at
the nominal flight altitude [7]. Suzuki et al.’s SUAS flew at
a nominal altitude of 1.5m, while avoiding a whiteboard of
similar height [9]. Ahrens et al.’s open space testbed contained
a 1m long cylindrical pole and a cuboid, similar to a bar
stool [10]. The SUAS flew at a nominal altitude of 0.5m,
as inferred from the paper, but flew up to 1.5m to avoid the
obstacles. The second most common habitable scale evaluation
testbed was an office space [6], [8], [14]. Li et al.’s SUAS
planned paths to allow the vehicle to avoid tables by flying
underneath them from one side to the other [6], while Fossel
et al.’s environment contained “low lying” tables, cabinets,
and benches [14]. Grzonka et al.’s SUAS flew at a maximum
of 1.5m, while avoiding 48cm high chairs and 77cm high
tables with other obstacles [8]. Stowers et al.’s laboratory
environment contained two large benches with instruments on
them, for a total height of 3m above the floor [15]. Their
SUAS flew at a nominal altitude of 1.5m, as inferred from
the paper. A 41m hallway provided a second environment in
which Grzonka et al.’s SUAS flew at a nominal altitude of
0.5m, as inferred from the paper.
MacAllister et al.’s [13] SUAS was the only system evalu-
ated for a restricted maneuverability scale environment. Their
environment was a collection of hallways, offices and open
spaces containing obstacles of random heights that were placed
on the floor and a horizontal bar placed at 0.7m above the
ground.
C. Search and Rescue
Four systems explicitly discussed the search and rescue
applications. Michael et al. [1] conducted evaluations in a
building on Tohoku University’s campus containing hallways
and offices. The building had been damaged by an earthquake,
but was still accessible to humans and was at the habitable
scale. Al Redwan Newaz et al. [12] considered a surveillance
and recovery mission after nuclear disasters or severe accidents
in industrial areas as an application, and tested a SUAS in
computer simulated habitable office space. Two systems [6],
[9] were evaluated in physical - general indoor staged office
and open spaces, respectively.
III. ENVIRONMENTAL CHARACTERISTICS
The environmental characteristics influencing the navigabil-
ity of a region can be divided into three groups: the scale
and degree of deconstruction, which captures the state of
the structure; the severity of obstacles and tortuosity, which
captures the impact of the deconstructed structure and dam-
aged furnishings; and other characteristics that affect sensing.
Table I provides an overview of the twelve surveyed SUASs
and the environmental characteristics for which they were
evaluated.
A. Scale and Degree of Deconstruction
The scale of the average size of the twelve SUASs, with
respect to the interior of Prop 133, is in the restricted maneuverability
range. If the floor plan is used to compute the Ecd, the
scale represents the habitable range, as Ecd > 2Acd, where the
room is 6m and the SUAS is 0.6m, thus 6 > 2(0.6). However,
Figs. 2a and 2b show that the damage to fixtures and furnishings
reduces the actual free space to Ecd ≈ 1.5Acd, which falls
into the restricted maneuverability range of Ecd < 2Acd.
As seen in the scale column of Table I, only two of the
twelve systems were deployed in a restricted maneuverability
(2Acd > Ecd > 1.5Acd) environment, comparable to Prop
133. The remaining systems were evaluated or deployed in
environments within the habitable scale.
Prop 133 also illustrates different degrees of deconstruction
to the structural elements. Rooms 1 and 2 on both floors, as
seen in Figs. 2b and 2d have relatively minor deconstruction,
given that the walls, ceiling, and floor are still orthogonal,
though they exhibit holes or damage. Fig. 2c shows major de-
construction, where a ceiling has collapsed and the supporting
pillars are clearly damaged and no longer uniform.
Only one of the twelve SUASs was evaluated in a decon-
structed environment. Michael et al. deployed in a damaged
building [1], but with only a minor degree of deconstruction
compared to Prop 133. The other three systems proposed
for search and rescue missions [6], [12], [9] flew in regions
with no visible deconstruction. The remaining eight general
indoor SUASs operated in regions with no damage.
B. Severity of Obstacles and Tortuosity
Severity refers to the content of the environment, i.e.,
the number and types of obstacles an agent may encounter.
However, what is an obstacle for a human or a ground robot
may not be an obstacle for a SUAS. Therefore, this paper rates
severity based on location (see the sketch after the list below):
1) Obstacles on the ground, below the nominal flying zone
(see Fig. 2a).
2) Obstacles on the ground, up to the nominal flying zone
(see Fig. 2a).
3) Obstacles hanging from the ceiling, in the nominal flying
zone (see Figs. 2a and 2b).
4) Obstacles hanging from the ceiling, above the nominal
flying zone (see Figs. 2a and 2b).
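A small illustrative sketch of how an obstacle's vertical extent could be mapped onto these four categories, given an assumed nominal flying zone; the function, thresholds, and example values are assumptions for illustration, not from the paper.

def obstacle_location_category(bottom_m: float, top_m: float,
                               zone_low_m: float, zone_high_m: float) -> int:
    """Map an obstacle's vertical extent [bottom_m, top_m] to the four
    severity/location categories, given the nominal flying zone
    [zone_low_m, zone_high_m]."""
    if bottom_m <= 0.0:                       # obstacle grows up from the floor
        return 1 if top_m < zone_low_m else 2
    # otherwise assume it hangs down from the ceiling
    return 3 if bottom_m <= zone_high_m else 4

# Hypothetical example: wires hanging down to 2m in a room where the SUAS
# nominally flies between 1.0m and 1.5m -> category 4 (ceiling, above the zone).
print(obstacle_location_category(2.0, 3.0, 1.0, 1.5))  # 4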
All four categories of obstacle severity/locations exist in the
environment at Prop 133. Altitude does not necessarily reduce
the obstacles. If a SUAS were to fly in Prop 133 at an altitude
of 1.17m (the average for flying in offices and hallways), it
would encounter the same categories of obstacles as it would at
2.08m (the average for open spaces).
Only one surveyed system, MacAllister et al. [13], was
evaluated in a testbed encompassing all four categories of
obstacle severity/location as found at Prop 133, but only in
computer simulation. The actual disaster deployment, Michael
et al. [1], encountered three types of obstacles, but not those
hanging from the ceiling into nominal flying zone. Al Redwan
Newaz et al. [12] simulated two categories of obstacles, while
the other two systems [6], [9] only tested with obstacles on
the ground up to the nominal flying zone. This observation
suggests that the obstacle placement in testbeds is not a good
predictor of whether a SUAS will be able to fly indoors during
a disaster.
Tortuosity is calculated as the number of turns taken by
the SUAS per unit distance. For example, if the SUAS takes
5 turns to avoid obstacles over a linear distance of 10m,
then the tortuosity is 5/10 = 0.5. The tortuosity at Prop
133 is estimated to be 1.0, i.e., 1 turn per meter. A low
tortuosity indicates that the frequency of obstacle avoidance is
low and the environment is comparatively easier to navigate,
as opposed to one with higher tortuosity.
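To make the metric concrete, the sketch below estimates tortuosity from a sequence of 3D waypoints by counting direction changes (including altitude changes) per meter flown; the 30-degree turn threshold is an illustrative assumption, as the paper does not specify how a turn is detected.

import math

def tortuosity(waypoints, turn_threshold_deg: float = 30.0) -> float:
    """Turns per meter along a path given as a list of (x, y, z) waypoints."""
    raw = [tuple(b - a for a, b in zip(p, q))
           for p, q in zip(waypoints, waypoints[1:])]
    segments = [s for s in raw if any(c != 0 for c in s)]   # drop zero-length hops
    length = sum(math.sqrt(sum(c * c for c in s)) for s in segments)
    if length == 0:
        return 0.0
    turns = 0
    for s1, s2 in zip(segments, segments[1:]):
        dot = sum(a * b for a, b in zip(s1, s2))
        norm = (math.sqrt(sum(a * a for a in s1)) *
                math.sqrt(sum(b * b for b in s2)))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        if angle > turn_threshold_deg:                       # count as a turn
            turns += 1
    return turns / length

# A right-angle path of 5m + 5m has one turn over 10m: tortuosity 0.1;
# the 5-turns-over-10m example in the text would give 0.5.
print(tortuosity([(0, 0, 1), (5, 0, 1), (5, 5, 1)]))  # 0.1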
All three types of spaces in Table II have a tortuosity much
lower than the tortuosity of Prop 133. The maximum tortuosity
in computer simulation (0.5), physical - general indoor staged
testbeds (0.31), physical - general indoor natural testbeds (0.1),
and actual disasters (0.6) suggests that the evaluation testbeds
are not sufficiently representative of actual disasters. A SUAS
that performs well in these testbeds may not have the agility
to make a higher frequency of turns and altitude changes.
C. Other Environmental Characteristics
Scale, degree of deconstruction, severity of obstacles, and
tortuosity are not the only environmental characteristics that
impact robot operations. Lighting conditions and surface
properties are two characteristics that have been observed at
disasters [4]. The conditions are not adequately captured in
the photographs of Prop 133, but merit further discussion.
Prop 133 exhibited natural, non-uniform, and low light condi-
tions, due to collapsed ceilings and holes in walls, shadows
due to obstacles, and lack of windows.

No. | Author | SUAS Diameter (m) | Scale | Testbed | Region | Nominal Altitude (m)
1 | Masanori et al., 2013 | – | Habitable | Physical - General Indoor (Staged) | Open Space | 0.7
2 | Li et al., 2013 | 0.57 | Habitable | Physical - General Indoor (Staged) | Office | –
3 | MacAllister et al., 2013 | – | Restricted Man. | Physical - General Indoor (Staged) | Collection of hallways, offices and open space | 0.7
  |  |  |  | Computer | Office | –
4 | Jongho et al., 2013 | – | Habitable | Computer | Open Space | 7.0; 2.0
5 | Al Redwan Newaz et al., 2013 | – | Habitable | Computer | Office | –
6 | Fossel et al., 2013 | 0.73 | Habitable | Physical - General Indoor (Natural) | Office | –
  |  |  |  | Computer | Collection of lab and office; Open Space; Lab | –
7 | Toratani et al., 2013 | 0.54 | Habitable | Physical - General Indoor (Staged) | Open Space | 0.8
8 | Grzonka et al., 2012 | – | Habitable | Physical - General Indoor (Natural) | Hallway | 0.5
  |  |  |  |  | Office | –
9 | Michael et al., 2012 | 0.65 | Restricted Man. | Actual Disaster Environment | Collection of Offices and Hallways | 2.0
10 | Stowers et al., 2011 | – | Habitable | Physical - General Indoor (Staged) | Lab | 1.5
   |  |  |  | Computer | Open Space | –
11 | Suzuki et al., 2010 | 1.0 | Habitable | Physical - General Indoor (Staged) | Open Space | 1.5
12 | Ahrens et al., 2009 | 0.54 | Habitable | Physical - General Indoor (Staged) | Open Space | 0.5

TABLE I: A summary of the reviewed systems, including SUAS size, environmental scale, space classification and nominal altitude. (– means “unable to be inferred”.)

Cameras do not work
well in dim lighting and may need an artificial light source,
while the Kinect does not work in high luminance conditions,
as shown in [16]. The building materials for Prop 133 include
metal, glass, and sharp edges that may scatter active sensors,
such as LIDAR and ultrasound. Carpets, cloth, soundproof
tiles and partitions typically found in office buildings may
absorb sound signals. Furthermore, suspended dust due to
debris and loose building materials may affect the visibility
and make sensors less effective or even non-functional.
IV. CONCLUSIONS
The interest in using SUAS technology by urban search
and rescue teams continues to grow; however, the unique
situations in which SUASs will be considered a valuable
tool place constraints on the system and algorithm design.
This paper focuses on providing formal definitions of the
environmental constraints based on extending definitions for
disaster response unmanned ground systems. These definitions
are critical for purposes of developing and evaluating SUAS
technology for and within representative environments that
will lead to transferring the technology to disaster response
personnel. Kinetic disasters modify building structures and
contents, thus requiring a SUAS to meet very different con-
straints for successful deployment.
The formal definitions characterize the environment’s scale,
space type, obstacle severity and tortuosity as well as the
SUAS’ nominal flight altitude. These definitions were used
to analyze and classify twelve existing SUASs evaluated for
deployment in simulation or in actual indoor environments.
These results were compared to a representative environment
used to train urban search and rescue teams, Prop 133 at
Disaster City®. While one of the twelve systems surveyed
was deployed in a building damaged by an earthquake, and
three other systems claimed applicability to search and rescue
domains, none of the systems were evaluated in environments
representative of Prop 133. While it is not possible to say
with certainty that the analyzed SUASs cannot fly in the
Prop 133 region, it is clear that their obstacle avoidance and
simultaneous localization and mapping algorithms have not
been evaluated for such environments. It is also likely, although
this paper does not analyze this aspect, that the sensors
supporting these algorithms will produce inaccuracies
for environments represented by Prop 133. As such, it is
necessary to consider the environmental characteristics defined
in this paper when designing the hardware and software
specifications for new SUASs intended for indoor urban search
and rescue environments.
V. ACKNOWLEDGEMENTS
Portions of this work were supported by a SEC Travel
grant and NSF Grant IIS-1143713 EAGER: Shared Visual
Common Ground in Human-Robot Interaction for Small Un-
manned Aerial Systems. The authors thank the Texas A&M
Engineering Extension Service for access to their facilities.
No. | Author from Table I | Testbed | Severity: Obstacle Location | Tortuosity
1 | Masanori et al., 2013 | Physical - General Indoor (Staged) | X | 0.5
2 | Li et al., 2013 | Physical - General Indoor (Staged) | X | –
3 | MacAllister et al., 2013 | Physical - General Indoor (Staged) | X X X | 0.18
  |  | Computer | X X X X | 0.4
4 | Jongho et al., 2013 | Computer | X | 0.14
  |  | Computer | X | 0.2
5 | Al Redwan Newaz et al., 2013 | Computer | X X | –
  |  | Computer | X X | –
  |  | Computer | X X | –
6 | Fossel et al., 2013 | Physical - General Indoor (Natural) | X X | –
  |  | Computer | X X | –
  |  | Computer | X | –
  |  | Computer | X X | –
7 | Toratani et al., 2013 | Physical - General Indoor (Staged) | X | 0.3
8 | Grzonka et al., 2012 | Physical - General Indoor (Natural) | X X | 0.1
  |  | Physical - General Indoor (Natural) | X X | –
9 | Michael et al., 2012 | Actual Disaster Environment | X X X | 0.6
10 | Stowers et al., 2011 | Physical - General Indoor (Staged) | X X | –
   |  | Computer | X X | 0.57
11 | Suzuki et al., 2010 | Physical - General Indoor (Staged) | X | –
12 | Ahrens et al., 2009 | Physical - General Indoor (Staged) | X | 0.5

TABLE II: Summary of Severity of Obstacles and Tortuosity; the obstacle location categories are ground to below nominal, ground up to nominal, ceiling into nominal, and ceiling to above nominal. (– means “unable to be inferred”.)
REFERENCES
[1] N. Michael, S. Shen, K. Mohta, Y. Mulgaonkar, V. Kumar,
K. Nagatani, Y. Okada, S. Kiribayashi, K. Otake, K. Yoshida,
K. Ohno, E. Takeuchi, and S. Tadokoro, “Collaborative mapping of an
earthquake-damaged building via ground and aerial robots,” Journal of
Field Robotics, vol. 29, no. 5, pp. 832–841, 2012. [Online]. Available:
http://dx.doi.org/10.1002/rob.21436
[2] G.-J. Kruijff, V. Tretyakov, T. Linder, F. Pirri, M. Gianni, P. Papadakis,
M. Pizzoli, A. Sinha, E. Pianese, S. Corrao, F. Priori, S. Febrini, and
S. Angeletti, “Rescue robots at earthquake-hit Mirandola, Italy: a field
report,” in IEEE International Symposium on Safety, Security and Rescue
Robotics, 2012, pp. 1–8.
[3] K. Pratt, R. Murphy, S. Stover, and C. Griffin, “CONOPS and autonomy
recommendations for VTOL SUASs based on Hurricane Katrina operations,”
Journal of Field Robotics, vol. 26, no. 8, pp. 636–650, 2009.
[4] R. R. Murphy, Disaster Robotics. MIT Press, 2014.
[5] H. Masanori, N. Hideyuki, S. Johan, and B. Kevin, Optimal
Trajectory Generation and Tracking Control of a Single Coaxial
Rotor UAV, ser. Guidance, Navigation, and Control and Co-
located Conferences. American Institute of Aeronautics and
Astronautics, 2013, doi:10.2514/6.2013-4531. [Online]. Available:
http://dx.doi.org/10.2514/6.2013-4531
[6] Q. Li, D.-C. Li, Q.-f. Wu, L.-w. Tang, Y. Huo, Y.-x. Zhang,
and N. Cheng, “Autonomous navigation and environment modeling
for MAVs in 3-D enclosed industrial environments,” Computers in
Industry, vol. 64, no. 9, pp. 1161–1177, 2013. [Online]. Available:
http://www.sciencedirect.com/science/article/pii/S0166361513001243
[7] D. Toratani, T. Higuchi, and S. Ueno, “Terrain following flight of
UAV using information amount feedback,” in SICE Annual Conference
(SICE), 2013 Proceedings of, Conference Proceedings, pp. 1503–1508.
[8] S. Grzonka, G. Grisetti, and W. Burgard, “A fully autonomous indoor
quadrotor,” Robotics, IEEE Transactions on, vol. 28, no. 1, pp. 90–100,
2012.
[9] R. Suzuki, T. Matsumoto, A. Konno, Y. Hoshino, K. Go, A. Oosedo,
and M. Uchiyama, “Teleoperation of a tail-sitter VTOL UAV,” in Intelligent
Robots and Systems (IROS), 2010 IEEE/RSJ International Conference
on, Conference Proceedings, pp. 1618–1623.
[10] S. Ahrens, D. Levine, G. Andrews, and J. P. How, “Vision-based
guidance and control of a hovering vehicle in unknown, GPS-denied
environments,” in Robotics and Automation, 2009. ICRA ’09. IEEE
International Conference on, Conference Proceedings, pp. 2643–2648.
[11] P. Jongho and K. Youdan, Obstacle Detection and Collision Avoidance
of Quadrotor UAV Using Depth Map of Stereo Vision, ser. Guidance,
Navigation, and Control and Co-located Conferences. American
Institute of Aeronautics and Astronautics, 2013, doi:10.2514/6.2013-
4994. [Online]. Available: http://dx.doi.org/10.2514/6.2013-4994
[12] A. Al Redwan Newaz, F. A. Pratama, and C. Nak Young, “Exploration
priority based heuristic approach to UAV path planning,” in RO-MAN,
2013 IEEE, Conference Proceedings, pp. 521–526.
[13] B. MacAllister, J. Butzke, A. Kushleyev, H. Pandey, and M. Likhachev,
“Path planning for non-circular micro aerial vehicles in constrained envi-
ronments,” in Robotics and Automation (ICRA), 2013 IEEE International
Conference on, Conference Proceedings, pp. 3933–3940.
[14] J. Fossel, D. Hennes, D. Claes, S. Alers, and K. Tuyls, “OctoSLAM: A 3D
mapping approach to situational awareness of unmanned aerial vehicles,”
in Unmanned Aircraft Systems (ICUAS), 2013 International Conference
on, Conference Proceedings, pp. 179–188.
[15] J. Stowers, M. Hayes, and A. Bainbridge-Smith, “Beyond optical flow
- biomimetic UAV altitude control using horizontal edge information,” in
Automation, Robotics and Applications (ICARA), 2011 5th International
Conference on, Conference Proceedings, pp. 372–377.
[16] J. Suarez and R. Murphy, “Using the Kinect for search and rescue
robotics,” in Safety, Security, and Rescue Robotics (SSRR), 2012 IEEE
International Symposium on, Nov 2012, pp. 1–2.