Motion planning for humanoid robots in
environments modeled by vision
Alireza Nakhaei, Florent Lamiraux
CNRS-LAAS, University of Toulouse, France. anakhaei@laas.fr, florent@laas.fr
Abstract—The context of this work is vision-based motion planning for humanoid robots in an unknown environment. We present an efficient combination of on-line 3D environment modeling and motion planning methods for humanoid robots (e.g., whole-body motion and walking) in non-static environments. To construct the model of the environment, we rely on a 3D occupancy grid that is updated incrementally by stereo vision. As the dimension of the configuration space is high for humanoid robots, a roadmap-based method is used for motion planning. However, as the environment is not static, the roadmap must be updated after receiving new visual information: the nodes and edges which are in collision with the updated environment must be erased.
Moreover, preliminary steps are necessary to treat the environment as a non-static model. As we construct the model incrementally by vision, several thousand points may appear at each update of the model, so updating the roadmap raises algorithmic complexity issues. Our approach is an extension of a recent idea to cope with this problem.
After presenting the approach, we implement our method by planning a collision-free motion in a 3D occupancy grid model generated by HRP2 from stereo vision. As the robot navigates in the environment, it receives updated information through its on-board cameras and refreshes the 3D model of the environment incrementally. The 3D model can be composed of up to millions of voxels. If the states of some voxels change, our method uses these changes to update the existing roadmap locally. This updated roadmap is then reused for further motion planning.
We evaluate our algorithm by measuring processing time and memory usage in each step, and compare them with those of the method we extend.
I. INTRODUCTION
In this paper, we present a vision-based motion planning
strategy for a humanoid robot in an unknown environment.
We have integrated stereo-vision environment modeling with our new method for motion planning in non-static environments.
The goal is to enable a humanoid robot to integrate on-line
environment modeling and motion planning in order to explore
and interact with a changing environment.
A. Motivation
To broaden the application of robotics in research and industry, we need to develop more autonomous robotic systems that are able to sense and plan in real environments. For that, the robot must be able to interact with the world, both by modeling the environment and by planning collision-free motions in that model. Working in a real environment makes it essential to consider all changes in the workspace.
However, motion planning in such changing environments
becomes a much more challenging task. This gives rise to
the need for an efficient algorithm capable of coping with
changing environments.
B. Related works
a) Environment Modeling: Robotic mapping has been an active area in AI for decades. It addresses the problem of acquiring a spatial model of the workspace through the sensors available on a robot. We can distinguish two main approaches addressing slightly different problems:
1) sparse 3D mapping aims at building a geometrically coherent sparse map of characteristic features and at localizing the robot in this map. Recent work in this area includes [11], [3].
2) dense SLAM aims at building a dense map of the environment integrating obstacles. These techniques are usually based on occupancy grids. Pioneering work is described in [14]. Later work defines occupancy grids in a probabilistic framework [16], [2].
For more references and details about SLAM, we refer the reader to [17]. As we need to plan motions in the model of the environment perceived by the robot, occupancy grid techniques better fit our requirements.
b) Motion planning in changing environments: A common task in robotics is to plan a collision-free motion between an initial and a final configuration in an environment. Several approaches exist for solving this motion planning problem. One popular technique is to plan an optimal trajectory using Dijkstra's algorithm or A* [15] through a uniformly discretized configuration space. However, this approach is impractical for more complex configuration spaces, such as those of humanoid robots, in terms of both memory and computation time. Therefore, approaches such as Rapidly-exploring Random Trees (RRTs) and Probabilistic Roadmaps (PRMs) have been widely used in these domains [10], [8]. These approaches work by randomly sampling points in the continuous configuration space; the initial and final configurations are then connected via these samples. Many variants have been proposed in the literature, and the best strategy usually depends on the type of problem and the type of environment considered. [9] provides a comprehensive description of many methods for planning paths in a static environment.
To deal with changing environments, some approaches optimize the process of updating precomputed roadmaps at run-time [18], [13], [12]. The approach we propose in this paper to plan motions in changing environments is an extension of an existing approach using a mapping between the workspace and the configuration space [12].
In [12], Leven and Hutchinson begin by constructing a graph (roadmap) that represents the configuration space. In this first step, obstacles are not taken into account; only self-collisions are considered. In a second step, they generate a representation that encodes a mapping from cells in the discretized workspace to nodes and edges of the roadmap. In the on-line planning stage, the planner first identifies the cells in the workspace that correspond to obstacles and uses the encoded mapping to delete the corresponding nodes and edges from the roadmap. The remaining roadmap is used for the next motion planning step. This approach is a good step towards real-time motion planning in changing environments and copes efficiently with the associated problems. It was successfully applied to a robotic arm with 6 degrees of freedom.
However, applying this method to a humanoid robot in a much bigger workspace is not reasonable, since the required memory grows quickly with the size of the workspace. In the approach we describe in this paper, we use the same basic idea of workspace cell decomposition, but we use bigger cells and we re-test for collision the roadmap features lying in cells where obstacles have appeared.
C. Contribution
In this paper, we are interested in motion planning in environments where the position of obstacles is unknown. To construct the 3D model of the workspace, we create a 3D occupancy grid map which is updated from stereo-vision information. As the robot navigates in the environment, it successively takes stereo images and updates the 3D model. Creating a 3D model instead of a 2D one is essential for a humanoid robot to plan whole-body motions.
The contribution of our approach is a new outlook on cell-decomposed workspaces. In Leven and Hutchinson's approach, a cell is the smallest volume of the environment which can be considered free or occupied; to obtain a good resolution of the workspace, these cells must be small enough. In our approach, however, we define a cell as a larger volume of the environment composed of many small voxels. These voxels are the smallest identifiable volumes of the environment. When the environment changes, stereo vision provides information about the modified voxels. The cells which include modified voxels are then detected, and all nodes and edges associated with these cells are validated again. If validation fails, the relevant nodes and edges are erased from the roadmap. As the environment is explored and modified, the roadmap grows incrementally.
To our knowledge, this work is one of the first to integrate 3D environment modeling and roadmap-based motion planning.
The paper is organized as follows. In the following section we describe the method used to model the environment. Section III presents our motion planning algorithm and our contribution. Finally, Section IV describes our experimental setup and the results of the tests that were conducted in a model of a real environment.
II. ENVIRONMENT MODELING
A 3D occupancy grid map represents the environment in our approach. The occupancy grid representation employs a multidimensional tessellation of the workspace into voxels, where each voxel stores a probabilistic estimate of its state. In fact, the environment is discretized into uniform cubes which are the smallest volumes of our 3D model. The size of the cubes is selected based on the desired resolution of the model. Depending on the volume of the 3D model of the environment, the model can be composed of up to millions of voxels:
$$W = \bigcup_{i=1}^{n} V_i, \qquad \forall i, j \in [1, n],\ i \neq j,\quad V_i \cap V_j = \emptyset \qquad (1)$$
where $W$ is the workspace and $V_i$ denotes voxel $i$ in the 3D model. The state variable associated with a voxel $V$ is defined as a discrete random variable with three states: unknown, occupied and free. Moreover, to determine the state of each voxel, three probabilities are assigned to it: $P[s(V_i) = O]$, $P[s(V_i) = F]$ and $P[s(V_i) = U]$ are the probabilities of voxel $i$ being occupied, free and unknown respectively. The robot obtains information about its environment through its cameras and uses a stereo vision method to get a range value $r$ for a 3D point in the environment. A stochastic sensor model defined by a probability density function $p(r|z)$ is used to relate the reading $r$ to a true parameter-space range value $z$.
Based on this probability density, $P[s(V_i) = O]$, $P[s(V_i) = F]$ and $P[s(V_i) = U]$ are modeled for each voxel. These probabilities are stored in $G_i$ and are updated as the robot gets new information about the environment.
For applications such as motion planning, it is necessary to assign a specific state to each voxel of the occupancy map. An optimal estimate of the state of a voxel is given by an extension of the maximum a posteriori decision rule in [1]:
Voxel $i$ is considered:
- Occupied if $P[s(V_i) = O] > P[s(V_i) = U]$ and $P[s(V_i) = O] > P[s(V_i) = F]$,
- Free if $P[s(V_i) = F] > P[s(V_i) = U]$ and $P[s(V_i) = F] > P[s(V_i) = O]$,
- Unknown if $P[s(V_i) = U] > P[s(V_i) = O]$ and $P[s(V_i) = U] > P[s(V_i) = F]$.
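As a rough sketch, and assuming the three probabilities are simply stored per voxel (the names below are illustrative, not the paper's implementation), this decision rule reduces to picking the most probable of the three states:

```python
# Hypothetical sketch of the maximum a posteriori decision rule above.
OCCUPIED, FREE, UNKNOWN = "O", "F", "U"

def voxel_state(p):
    """Return the state whose probability dominates the other two.

    p maps each state to its probability, e.g. {"O": 0.7, "F": 0.2, "U": 0.1}.
    The rule above asks for strict dominance; for distinct probabilities
    this is exactly the argmax over the three states.
    """
    return max((OCCUPIED, FREE, UNKNOWN), key=lambda s: p[s])

# Example: a voxel initialized as in (2) below is classified as unknown.
print(voxel_state({"O": 0.01, "F": 0.01, "U": 0.98}))  # -> "U"
```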
For initializing the environment, as no information is available for the model, all voxels are considered unknown and the probabilities are set as follows:

$$\forall i \in \{1, \dots, n\}: \quad P[s(G_i) = O] = 1\%, \quad P[s(G_i) = F] = 1\%, \quad P[s(G_i) = U] = 98\% \qquad (2)$$
Fig. 1. Estimating the probabilities for each voxel in the occupancy model from sensor data.
A. Updating 3D model

As explained above, a sensor model is used to relate the reading $r$ to a true parameter-space range value $z$ and to obtain the corresponding probabilities for each voxel. In this approach, the camera is modeled with a simple Gaussian uncertainty in depth, as follows:

$$p(r_j \mid z) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(r_j - z)^2}{2\sigma^2}\right) \qquad (3)$$

where $j$ is the index of a 3D point of the stereo image. Figure 1 illustrates the simple model which is used to obtain the probabilities.
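A minimal sketch of this sensor model, assuming a fixed depth standard deviation sigma (the value below is illustrative):

```python
import math

def range_likelihood(r_j, z, sigma=0.05):
    """Gaussian likelihood p(r_j | z) of a stereo range reading r_j given a
    true range z, as in (3). sigma is an assumed depth standard deviation
    in meters, not a value from the paper."""
    return math.exp(-(r_j - z) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
```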
To allow the incremental composition of sensory information, the sequential updating formulation of Bayes' theorem is used to determine the voxel probabilities [4]. Moreover, this formulation keeps the model of the environment dynamic: if an obstacle enters or leaves the model, the probabilities of the concerned voxels change rapidly and their states are modified. Extending the approach of [5] to the model, and considering the current probabilities of $G_i$ as $P[s(G_i) = O \mid \{r\}_t]$, $P[s(G_i) = U \mid \{r\}_t]$, $P[s(G_i) = F \mid \{r\}_t]$, based on the observations $\{r\}_t = \{r_1, r_2, \dots, r_t\}$ and given a new observation $r_{t+1}$, the updated estimate is given as follows:

$$P[s(G_i) = O \mid \{r\}_{t+1}] = \frac{P[r_{t+1} \mid s(G_i) = O]\; P[s(G_i) = O \mid \{r\}_t]}{\sum_{s(G_i)} P[r_{t+1} \mid s(G_i)]\; P[s(G_i) \mid \{r\}_t]} \qquad (4)$$
At each update of the model, the current probabilities are read from the occupancy grid map and, at the end of the calculations, the results are stored back in the model. These stored probabilities are used as the initial probabilities of the next update. So, as the robot navigates in the workspace, it receives new information and generates the 3D model of the environment incrementally. This approach produces a non-static 3D model: obstacles may appear or disappear in the model.
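A minimal sketch of this sequential update for a single voxel, assuming the likelihood term $P[r_{t+1} \mid s(G_i)]$ comes from the sensor model (3); names and values are illustrative:

```python
def bayes_update(prior, likelihood):
    """Sequential Bayes update (4) of a voxel's three-state distribution.

    prior:      dict {"O": p, "F": p, "U": p}, the current P[s(G_i) | {r}_t]
    likelihood: dict {"O": p, "F": p, "U": p}, the sensor term P[r_{t+1} | s(G_i)]
    Returns the posterior P[s(G_i) | {r}_{t+1}].
    """
    unnormalized = {s: likelihood[s] * prior[s] for s in prior}
    norm = sum(unnormalized.values())  # denominator of (4)
    return {s: p / norm for s, p in unnormalized.items()}

# Example: starting from the priors in (2), an observation that strongly
# supports "occupied" raises P[s = O]; repeated observations drive it
# past the other two states so the MAP rule flips the voxel to occupied.
posterior = bayes_update({"O": 0.01, "F": 0.01, "U": 0.98},
                         {"O": 0.80, "F": 0.05, "U": 0.15})
```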
Figure 2 illustrates an environment modeled by stereo vision. It should be mentioned that localization plays an important role in environment modeling: as the data obtained from the sensors are read in the sensor coordinate frame, localizing the sensors in the model is important for obtaining a precise model.

Fig. 2. The robotics lab at LAAS-CNRS, modeled with this approach. Red and green voxels (cubes) represent occupied and unknown volumes respectively. The workspace is modeled with 3D voxels of 5cm×5cm×5cm. Note that this model is the result of several stereo images taken from different viewpoints.

Fig. 3. The HRP2 humanoid robot.

In the following step, the generated model of the environment is used for motion planning. The robot is allowed to move in the free workspace, incrementally improving the occupancy grid map and exploring the unknown area.
III. PATH PLANNING
Most real-life environments are non-static, and for path planning in such environments the key issue is how to deal with changes in the model. As mentioned earlier, our approach to motion planning in changing environments is an extension of an existing method that uses a mapping between a cell-decomposed workspace and the configuration space [12].
A. Constructing roadmap
Given the current model of the environment, we begin by creating a roadmap. Our planner uses all the internal constraints, such as the degrees of freedom and kinematic limits, to create a roadmap in the free zone of the 3D model. Nodes are generated by sampling random configurations in the C-space, and acceptable nodes are connected to the roadmap by acceptable edges. We denote the roadmap as R(N, E), where N and E are the lists of nodes and edges respectively. However, as the environment is not static, the free zone is modified as the robot receives new information, so some nodes in N and some edges in E become invalid in the next instance of the environment model. Validating all nodes and edges against the new data would be time-consuming; our approach avoids this full re-validation.
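A minimal sketch of this construction, with the planner's collision and kinematic tests abstracted behind hypothetical callbacks:

```python
def build_roadmap(sample, valid_node, valid_edge, distance,
                  n_attempts=2000, k=10):
    """Basic PRM-style construction of R(N, E), as described above.

    sample()            draws a random configuration in the C-space
    valid_node(q)       collision/kinematics test for a configuration
    valid_edge(q1, q2)  collision test for the local path between q1 and q2
    distance(q1, q2)    metric used to pick the k nearest neighbors
    """
    N, E = [], []
    for _ in range(n_attempts):
        q = sample()
        if not valid_node(q):  # reject configurations in collision
            continue
        # Try to connect the new node to its k nearest existing nodes.
        for q_near in sorted(N, key=lambda p: distance(p, q))[:k]:
            if valid_edge(q, q_near):
                E.append((q, q_near))
        N.append(q)
    return N, E
```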
B. Updating roadmap
As mentioned in the previous section, to avoid erasing and regenerating the roadmap, it is necessary to update it after receiving new information from the environment. The preliminary solution is to validate all nodes and edges of the roadmap against the updated model: the nodes and edges which are in collision with the new model are erased, and the remaining roadmap is used for planning a motion in the updated environment. However, since validation has a cost, it is inefficient to validate all the nodes and edges in a large environment because of a small modification of the workspace. For example, consider a humanoid robot in a large environment composed of millions of voxels, and suppose that the status of one voxel changes from free to occupied. It is inefficient, in terms of time and cost, to re-validate all nodes and edges of the existing roadmap. To avoid this, Leven and Hutchinson [12] proposed a mapping between the workspace and the configuration space. In their approach, the workspace is decomposed into uniform small cells, and each node and edge is associated with one or several cells. The cells are the smallest volumes of the environment that can be occupied or free. When the status of a cell changes from free to occupied, the associated nodes and edges are erased and the planner uses the rest of the roadmap for future planning.
Our approach is an extension of this idea. The important difference is that in our approach, cells are not the smallest elements of the environment, which saves a large amount of memory. Also, instead of erasing nodes and edges directly, our planner re-validates the nodes and edges concerned by the modified cells.
C. Workspace cell decomposition

A workspace cell decomposition method is used in our approach. However, as opposed to [12], cells are not the smallest parts of the environment. In fact, each cell is composed of up to several thousand voxels, and these voxels are the smallest elements of the occupancy model of the environment. The concept of a voxel is explained in Section II.
$$W = \bigcup_{i=1}^{m} C_i, \qquad \forall i, j \in [1, m],\ i \neq j,\quad C_i \cap C_j = \emptyset \qquad (5)$$
where $C_i$ denotes a cell in the model and $W$ is the workspace. On the other hand,

$$C_i = \bigcup_{j=p}^{q} V_j, \qquad \forall j, k \in [p, q],\ j \neq k,\quad V_j \cap V_k = \emptyset \qquad (6)$$
Fig. 4. Motion planning in the 3D model of the robotics lab at LAAS-CNRS. The size of the workspace is 17m×12.5m×1.7m. The workspace is modeled with 10cm×10cm×10cm 3D voxels and decomposed into 1.5m×1.5m×1.5m cells to implement the approach.
It should be mentioned that in this approach, the cell decomposition is not necessarily uniform over the workspace. Based on any preliminary information about the environment, the shape and size of the cells may differ in a specific region in order to optimize the number of calculations.

Moreover, as in [12], a mapping between the configuration space C and the workspace W is built to associate a list of cells to any configuration:

$$\varphi(q) = \{C_i \mid A(q) \cap C_i \neq \emptyset\} \qquad (7)$$

where $q$ is a configuration and $A(q)$ is the volume of the environment occupied by the robot at $q$. Thanks to this mapping, each $q$ is associated with the list of cells which are in collision with the robot at this configuration. By using (7), a list of cells can be associated with any node or edge.
On the other hand, based on this approach, a list of nodes and edges is assigned to each cell. Therefore:

$$C_i = \{L^N_i, L^E_i\} \qquad (8)$$

where $L^N_i$ is the list of nodes whose configurations put the robot in collision with $C_i$, and $L^E_i$ is the list of edges along which the robot collides with $C_i$ when it tracks them.
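A minimal sketch of the mapping (7) and of the per-cell lists (8), assuming axis-aligned cubic cells and a point sampling of the robot volume $A(q)$; the data structure below is illustrative, not the authors' implementation:

```python
from collections import defaultdict

def cells_of(points, cell_size):
    """phi: map a set of 3D points covering A(q) (or a swept edge volume)
    to the integer indices of the axis-aligned cells they touch, as in (7)."""
    return {tuple(int(c // cell_size) for c in p) for p in points}

class CellMap:
    """Per-cell lists L^N_i and L^E_i of (8)."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.nodes = defaultdict(set)   # cell index -> ids of nodes in L^N_i
        self.edges = defaultdict(set)   # cell index -> ids of edges in L^E_i

    def add_node(self, node_id, robot_points):
        # robot_points: sampled points of A(q) at the node's configuration.
        for c in cells_of(robot_points, self.cell_size):
            self.nodes[c].add(node_id)

    def add_edge(self, edge_id, swept_points):
        # swept_points: sampled points of the volume swept along the edge.
        for c in cells_of(swept_points, self.cell_size):
            self.edges[c].add(edge_id)
```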
D. Updating cells

$L^N_i$ and $L^E_i$ are updated as the planner solves the problem and finds a path between the initial and final configurations. To optimize the calculation time, the cell lists are updated during execution of the generated path: using the updated roadmap, the concerned nodes and edges are added to the associated lists of each cell $C_i$.

To update $L^N_i$ with a new node, the robot is considered at the concerned configuration $q$ and the node is added to the $L^N$ of the cells which are in collision with the robot. Updating $L^E$ is more complicated and time consuming: it relies on a mapping between an edge and the set of cells which are in collision with the robot as it sweeps the concerned path. To update $L^E_i$, the same algorithm as in [12] is used.

Fig. 5. Illustration of roadmap management during updates of the workspace. Red and green voxels represent occupied and unknown zones respectively: a) A roadmap shown in the top view of a 3D model of an environment generated by vision. b) The model is updated based on new information from the camera. c) The roadmap is modified based on the updates in the 3D model using our cell decomposition algorithm; some nodes and edges are erased from the roadmap during the modifications. d) The modified roadmap is used for a motion planning process; some nodes and edges are added to the roadmap.

Figure 4 illustrates a 3D model which is decomposed into 1.5m cells to implement the approach.
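Combining the pieces, a minimal sketch of the update step described in this section, reusing `cells_of` and `CellMap` from the sketch above and assuming a hypothetical roadmap object with `remove_node` and `remove_edge`:

```python
def update_roadmap(changed_voxels, cell_map, roadmap, valid_node, valid_edge):
    """Re-validate only the roadmap features associated with cells that
    contain modified voxels, erasing those that fail the new checks."""
    # 1. Cells containing modified voxels reported by stereo vision.
    dirty = cells_of(changed_voxels, cell_map.cell_size)
    # 2. Nodes and edges associated with those cells only (L^N_i, L^E_i).
    nodes = set().union(*(cell_map.nodes[c] for c in dirty))
    edges = set().union(*(cell_map.edges[c] for c in dirty))
    # 3. Re-validate against the updated model; erase the failures.
    for n in nodes:
        if not valid_node(n):
            roadmap.remove_node(n)
    for e in edges:
        if not valid_edge(e):
            roadmap.remove_edge(e)
```

The remaining roadmap is then reused for the next planning query, which is what keeps the per-update cost proportional to the modified region rather than to the whole roadmap.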
IV. EXPERIMENTAL RESULTS
A. Exploring unknown environment
We tested our approach in a real environment with HRP2. The robot is supposed to navigate in a 3D occupancy grid model of the environment generated by vision. As mentioned in Section II, localization is important for generating a precise model of the environment, and localization errors become a critical issue when modeling a large environment. Therefore, a small volume of the environment was allocated for exploration by HRP2.

The model of the environment is generated incrementally. At each update of the model, HRP2 is required to look around and take several photos. To generate a whole-body motion for turning the head around and taking photos, a generalized inverse kinematics (IK) method is used, as proposed in [19], [20]. Based on this method, tasks such as gaze control are handled and the whole-body joint angles are computed to execute the motion, while several criteria such as manipulability, stability and joint limits are monitored. This method enables the robot to look around and take several photos, which it then uses to update the model of the environment.
After updating the model, the robot is supposed to plan a path towards a predetermined position in the environment. As explained in Section II, the model is composed of three types of voxels: free, occupied and unknown. Our planner considers the occupied voxels as obstacles and plans a path between the initial configuration and the predetermined goal. Part of the computed path may intersect unknown voxels, so the next step is to compute the last configuration of the path which is still in the free zone of the model. To compute this configuration, the unknown voxels are considered as obstacles as well: the path is discretized into successive configurations and the last valid configuration is selected. The robot follows the path to this configuration and again takes several photos to update the map and explore the remaining unknown zone, until it reaches the predetermined goal.
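A minimal sketch of this last-valid-configuration computation, with the path parameterization and validity test abstracted as callbacks:

```python
def last_free_configuration(path, valid, n_steps=100):
    """Discretize a path into successive configurations and return the last
    one that is still valid when unknown voxels are treated as obstacles.

    path(t)  returns the configuration at parameter t in [0, 1]
    valid(q) is True if q collides with neither occupied nor unknown voxels
    """
    last = None
    for i in range(n_steps + 1):
        q = path(i / n_steps)
        if not valid(q):
            break  # the rest of the path enters the unknown/occupied zone
        last = q
    return last  # None if even the start configuration is invalid
```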
Planning a motion and navigating in the model involves two steps. In the first step, our approach is applied to a motion planner for the bounding box of the humanoid robot, as proposed in [20], which generates a path for the bounding box; in fact, this planner generates a path for a car-like robot. In the second step, the planned collision-free path is transformed into dynamic humanoid locomotion: after the path is converted into footsteps, a walking pattern generator is applied to produce a dynamically stable walking motion, using the method proposed by Kajita et al. [7] based on a preview controller of the zero moment point (ZMP).

Fig. 6. Changes in the environment. HRP2 uses the cell decomposition method to re-validate some of the nodes and edges of the existing roadmap for planning the next motions.
Based on this algorithm, HRP2 explores an unknown environment to reach a predetermined location. This location can be the position of an object with a specific color. Figure 7 illustrates the evolution of a 3D model of an unknown environment along with the motions planned in the model.
B. Motion planning in changing environments
In order to validate our approach to motion planning in changing environments, we needed to build a large model of the environment. However, we do not yet have a precise localization module on the HRP2 robot. We therefore built the map using another mobile robot, Jido¹, equipped with an accurate localization module; HRP2 then planned motions in this model.

Jido is a mobile manipulator equipped with a stereo camera and a laser-scanner-based localization module. Jido was used in our experiments to take several photos, which were used to generate the 3D occupancy model of the environment. The robotics lab of LAAS-CNRS was selected as a large environment for conducting the tests.

¹http://www.laas.fr/robots/jido/data/en/jido.php
Two models of the lab were generated. The main difference between these two models is a table in the middle of the lab which was displaced. The displacement of this table caused a part of the free environment to become occupied and an occupied volume of the environment to become free. Although the only difference between the two states of the environment is the displacement of the table, there are many more differences between the two 3D models: due to localization errors, some parts of the model were modified. There are 7848 occupied voxels in the first model, and 1810 voxels which were free or unknown in the first model became occupied in the second model.

Figure 6 illustrates the two generated models which were used to test our approach in changing environments. Although the environment modeling was not done on-line with HRP2, the approach remains valid and its efficiency is not affected.
Moreover, instead of whole-body motion planning in the changing environment, we used a path planning algorithm for the HRP2 bounding box, as explained in the previous section. Although we simplified the problem of motion planning from a robot with 40 degrees of freedom to a box with 3 degrees of freedom, the approach and the results are valid for any roadmap-based planner.

The allocated size of the environment in these models is 17m × 12.5m × 1.7m, and the size of each voxel in these 3D occupancy grid maps is 10cm×10cm×10cm. It follows that the model is composed of 170 × 125 × 17 = 361,250 voxels.
Fig. 7. Exploring an unknown environment incrementally. Red and green voxels represent occupied and unknown volumes respectively. The robot plans a path in the non-occupied zone and then follows the generated path to the boundary of the unknown zone.
TABLE I
TIME PERFORMANCE

Test | Solving problem 1 [ms] | Updating roadmap [ms] | Solving problem 2 [ms]
A    | 399563                 | 0                     | 314452
B    | 399563                 | 5447                  | 18716
C    | 399563                 | 1936                  | 18716
TABLE II
MEMORY PERFORMANCE

Cell size [cm] | Voxels per cell | Memory [kb]
10             | 1               | 264
50             | 125             | 132
100            | 1000            | 44
170            | 4913            | 32
We conducted two kinds of tests in the generated models to illustrate the efficiency of our planner in terms of time and memory.

1) Time efficiency: In this scenario, we implemented three tests to evaluate the efficiency of our algorithm in terms of time. The goal is to measure the time needed to plan a motion between an initial and a goal configuration in the two models. In test A, we did not use our algorithm: the roadmap was erased after planning a motion in the first environment, and the time to plan a motion between the two configurations was measured separately in each model. In test B, we kept the roadmap after solving the motion planning problem in the first model; however, after loading the second model, we validated all the nodes and edges of the roadmap against the new obstacles in the second model. Finally, in test C, we used our approach: the planner locally re-validated only some of the nodes and edges of the roadmap. The cell size in this last test was set to 1.7m × 1.7m × 1.7m, so each cell is composed of 4913 voxels. Table I compares the average calculation time of each stage of the conducted tests.
2) Memory efficiency: To illustrate the effect of cell size on memory usage in our approach, we conducted four tests solving the same problem in the generated models. Exactly as in the previous section, the first model of the environment was loaded in the planner; the planner then generated a roadmap and found a path between the initial and final configurations. The generated roadmap was kept and modified after loading the second model of the environment, based on our approach. In the first test, we used the smallest possible cell size: each cell measured 0.1m × 0.1m × 0.1m and was composed of just one voxel. We increased the cell size in the second, third and fourth tests; ultimately, in the fourth test, each cell measured 1.7m × 1.7m × 1.7m and was composed of 4913 voxels. Table II compares the effect of cell size on memory usage in our approach.

Based on the results, it is clear that bigger cells save memory. Especially when working with a robot with many degrees of freedom, cell size plays an important role in memory usage. On the other hand, as the size of the cells increases, the number of nodes and edges in each cell increases, so the planner spends more time validating the existing components (nodes and edges) of a modified cell.

Fig. 8. HRP2 plans a whole-body collision-free motion to grasp an object in a generated 3D model. An occupancy model of a table, generated by vision, is used in motion planning.
V. CONCLUSIONS
We have presented an approach to deal with motion planning in unknown environments. In the environment modeling stage, we generate a 3D occupancy grid model using stereo vision data. For motion planning in the changing environment, we extended an existing method and improved its memory requirements. Furthermore, combining motion planning with a standard occupancy grid model of the environment built by stereo vision enabled us to conduct experiments on a model of a real environment.

Moreover, the results illustrate the robustness of the approach when interacting with a large environment containing thousands of voxels, both for modeling and for updating the roadmap based on our cell decomposition.

We believe that integrating modeling and motion planning in this way is a step toward whole-body path planning in a real environment for a humanoid robot.

As a perspective, we are going to couple our technique with path planning algorithms that produce whole-body motions, such as grasping objects in a 3D environment. As a first step, the generated 3D model has been coupled with a recent algorithm for local collision avoidance [6]. This integration enables HRP2 to plan a whole-body motion in the generated 3D model of the environment. Figure 8 illustrates HRP2 planning a collision-free motion in the generated 3D model of the environment to grasp an object on a table.

Furthermore, we are going to combine our algorithms with visual SLAM in order to enable HRP2 to explore large environments by itself.
REFERENCES
[1] A. Bryson and Y. Ho. Applied Optimal Control. Waltham, 1969.
[2] D. Cole and P. Newman. Using laser range data for 3D SLAM in outdoor environments. In IEEE International Conference on Robotics and Automation, pages 1556–1563, Florida, May 2006.
[3] A. Davison, I. Reid, N. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), June 2007.
[4] A. Elfes. Occupancy Grids: A Probabilistic Framework for Robot
Perception and Navigation. PhD thesis, Carnegie Mellon University,
1989.
[5] A. Elfes. Using occupancy grids for mobile robot perception and
navigation. Computer, 22(6):46–57, 1989.
[6] F. Kanehiro, F. Lamiraux, O. Kanoun, E. Yoshida, and J.-P. Laumond. A local collision avoidance method for non-strictly convex objects. In Proceedings of 2008 Robotics: Science and Systems (submitted), 2008.
[7] S. Kajita, F. Kanehiro, K. Kaneko, K. Fujiwara, K. Harada, K. Yokoi, and
H. Hirukawa. Biped walking pattern generation by using preview control
of zero-moment point. Robotics and Automation, 2003. Proceedings.
ICRA’03. IEEE International Conference on, 2, 2003.
[8] L. Kavraki, P. Svestka, J. Latombe, and M. Overmars. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. Robotics and Automation, IEEE Transactions on, 12(4):566–580, 1996.
[9] S. LaValle. Planning Algorithms. Cambridge University Press, 2006.
[10] S. LaValle and J. Kuffner Jr. Randomized Kinodynamic Planning. The
International Journal of Robotics Research, 20(5):378, 2001.
[11] T. Lemaire, S. Lacroix, and J. Solá. A practical 3D bearing-only SLAM algorithm. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada, Aug. 2005.
[12] P. Leven and S. Hutchinson. Toward Real-Time Path Planning in
Changing Environments. Algorithmic and Computational Robotics: New
Directions: the Fourth Workshop on the Algorithmic Foundations of
Robotics, 2001.
[13] H. Liu, X. Deng, H. Zha, and D. Ding. A Path Planner in Changing
Environments by Using WC Nodes Mapping Coupled with Lazy Edges
Evaluation. Intelligent Robots and Systems, 2006 IEEE/RSJ Interna-
tional Conference on, pages 4078–4083, 2006.
[14] H. Moravec and A. Elfes. High resolution maps from wide angle
sonar. Robotics and Automation. Proceedings. 1985 IEEE International
Conference on, 2, 1985.
[15] N. Nilsson. Principles of Artificial Intelligence. Springer, 1982.
[16] S. Thrun, M. Beetz, M. Bennewitz, W. Burgard, A. Cremers, F. Dellaert, D. Fox, D. Hahnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. Probabilistic algorithms and the interactive museum tour-guide robot Minerva. IJRR, 19(11):972–999, 2000.
[17] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press,
2005.
[18] J. van den Berg, D. Ferguson, and J. Kuffner. Anytime path planning and
replanning in dynamic environments. Robotics and Automation, 2006.
ICRA 2006. Proceedings 2006 IEEE International Conference on, pages
2366–2371, 2006.
[19] E. Yoshida, O. Kanoun, C. Esteves, and J. Laumond. Task-driven
Support Polygon Reshaping for Humanoids. Humanoid Robots, 2006
6th IEEE-RAS International Conference on, pages 208–213, 2006.
[20] E. Yoshida, A. Mallet, F. Lamiraux, O. Kanoun, O. Stasse, M. Poirier, P. F. Dominey, J.-P. Laumond, and K. Yokoi. "Give me the purple ball" - he said to HRP-2 N.14. In IEEE-RAS/RSJ Conference on Humanoid Robots, Pittsburgh, USA, Nov.-Dec. 2007.