Grid Based Navigation for Autonomous Robots*
An Algorithm Based on the Integration of Vision and Sonar Data

A. Silva, P. Menezes, J. Dias
Departamento de Engenharia Electrotécnica
Instituto de Sistemas e Robótica
Pinhal de Marrocos, 3030 Coimbra, Portugal
{arlindo, paulo, jorge}@isr.uc.pt
Abstract: In this article, visual data obtained by a binocular active vision system is integrated, together with ultrasonic range measurements, in the development of an obstacle detection and avoidance system based on a connectionist grid. This grid based framework allows the integration of two different sensing modalities (vision and sonar) in a way that incoming sensorial data can be mutually enhanced and validated. Each grid node maps a configuration in a discrete subset of the robot's configuration space. Detected obstacles result in sets of restricted configurations. Using the grid data, a potential field is iteratively computed for each mapped configuration. This computation is done in real time, during the robot's motion, so the potential field changes as new data is integrated into the grid. We also propose the implementation of some innovative sensing and control strategies, using the integrated sensorial information.
I. SENSING MODALITIES
Besides odometry, two main sensing modalities are used in the process of environment mapping. These are ultrasonic range data and information acquired through binocular active vision.
A. Ultrasonic Sensor Range Measurements
The range measurements are obtained from a ring of 24 ultrasonic sensors present on our mobile platform. This kind of sensor, being cheap and easy to use, frequently presents erroneous data, caused by multiple reflections and sonar scattering. However, this information can be validated, on the one hand, through the accumulation of readings and, on the other hand, by including another modality of sensorial information [4].
For the ultrasonic sensors a simple model is used: a reading is represented by a cone whose arc points all have the same probability of belonging to an obstacle, that probability being inversely proportional to the distance measurement. A point inside the cone is considered to have a probability of not belonging to an obstacle, and this probability is proportional to the distance between the point and the cone's arc. An illustration of this principle is summarized in Fig. 2.

* This work was partially financed by the NATO Scientific Affairs Division in the framework of the NATO Science for Stability Program, PO-ROBOT project.
† Working under grant PRAXIS XXI BM/7255/95 from JNICT - Junta Nacional de Investigação Científica e Tecnológica.

Fig. 1: Mobile platform with stereoscopic setup.
Using this technique we can speed up the update of this grid-based representation, with very good experimental results, while avoiding the usual Gaussian representations (see Fig. 6).

Fig. 2: Model for an ultrasonic sensor reading (v = reading value; r = sonar opening).
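To make the cone model concrete, the following is a minimal Python sketch of how one reading could be folded into the grid: arc cells accumulate obstacle evidence inversely proportional to the measured range, interior cells accumulate emptiness evidence proportional to their distance from the arc. The cell size, cone opening, and array names are illustrative assumptions, not values from our implementation.

import numpy as np

CELL = 0.1                      # cell side S in metres (assumed)
HALF_ANGLE = np.deg2rad(12.5)   # half opening of the sonar cone (assumed)

def update_from_sonar(P, K, sx, sy, heading, r):
    """Accumulate obstacle evidence P and emptiness evidence K
    from one sonar reading r taken at pose (sx, sy, heading)."""
    rows, cols = P.shape
    for i in range(rows):
        for j in range(cols):
            # vector from the sensor to the cell centre, in world coordinates
            x, y = j * CELL - sx, i * CELL - sy
            d = np.hypot(x, y)
            bearing = np.arctan2(y, x) - heading
            bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
            if d > r or abs(bearing) > HALF_ANGLE:
                continue  # outside the cone
            if r - d < CELL:
                # on the cone's arc: obstacle probability inversely
                # proportional to the measured distance
                P[i, j] += 1.0 / max(r, CELL)
            else:
                # inside the cone: emptiness evidence grows with the
                # distance between the point and the arc
                K[i, j] += (r - d) / r
    return P, K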
B. Obstacle Detection by Binocular Disparity
Vision (Fig. 3) is a powerful sensing modality which can be used not only in low level tasks, like the obstacle detection and avoidance presented here, but also in higher level problems, e.g. object identification and pursuit. In this work we use stereoscopic vision as a means of complementing and validating the data obtained by the ultrasonic sensors. To do this, we developed a simple algorithm for obstacle detection using stereoscopic information. Its simplicity makes it useful for mobile robotics, where the need for real time implementation asks for algorithms with moderate computational demands.
The solution found allows us to detect obstacles in the field of view of the stereoscopic setup, both above and below the robot's motion plane. The algorithm follows this simple principle: if we have a stereo pair of images of the ground plane in front of the robot, where Z is constant, we can establish an affine transformation which relates each point in one image with the corresponding point in the other image; for any image pair with points not in the ground plane, the transformation will fail. This means that any obstacle in the field of view, above or below the ground plane, will result in a disparity region when the two images are matched using the transformation mentioned above. When the robot is moving, any relevant disparities between image pairs are identified as an obstacle.
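The following sketch illustrates this principle: the right image is warped by the estimated ground-plane transformation and compared with the left one, so that off-plane points show up as disparity regions. The OpenCV calls, threshold value, and morphological cleanup are illustrative assumptions, not the exact implementation used here.

import cv2
import numpy as np

def disparity_regions(left, right, A, thresh=30):
    """left, right: grayscale images; A: 3x3 ground-plane transform."""
    h, w = left.shape
    # map right-image points onto their ground-plane positions in the left image
    warped = cv2.warpPerspective(right, A, (w, h))
    diff = cv2.absdiff(left, warped)
    # where the ground-plane transformation fails, the difference is large
    mask = (diff > thresh).astype(np.uint8) * 255
    # remove small speckles caused by noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask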
The mathematics underlying this principle is very simple. A real world point with coordinates (X, Y, Z) is related to its image representation, with coordinates (u, v), by a 3 x 4 transformation matrix, as expressed in (1):

\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A_{3 \times 4} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}   (1)
From our particular setup we have Z = γ (a constant), since we want all points to lie in the ground plane. This way expression (1) can be simplified into (2):

\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A_{3 \times 3} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}   (2)

Since we have two images of the same scene, one from the right camera and another from the left camera, we can write:

\lambda_l \begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} = A_l \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}, \qquad \lambda_r \begin{bmatrix} u_r \\ v_r \\ 1 \end{bmatrix} = A_r \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}   (3)
From (3) we easily obtain the relation between matching points in the left and right images:

\lambda \begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} = A_l A_r^{-1} \begin{bmatrix} u_r \\ v_r \\ 1 \end{bmatrix} = A \begin{bmatrix} u_r \\ v_r \\ 1 \end{bmatrix}   (4)
Fig. 3: Detailed view of the stereoscopic setup.
Manipulating these equations and making a_{3,3} = 1, we can easily obtain (5):

u_l \, (a_{3,1} u_r + a_{3,2} v_r + 1) = a_{1,1} u_r + a_{1,2} v_r + a_{1,3}
v_l \, (a_{3,1} u_r + a_{3,2} v_r + 1) = a_{2,1} u_r + a_{2,2} v_r + a_{2,3}   (5)
This relation is a set of linear equations relating the parameters a_{i,j} with the coordinates of corresponding points in the left and right images. Since we have more unknowns than equations, we need a set of corresponding point pairs, at least 8, to make the computation of all a_{i,j} possible.
If we admit that there is some amount of noise in the point coordinates, to obtain robust results it is advisable to use a larger set of points and a minimization criterion, such as the least-squares criterion. Equation (5) can then be solved using the Singular Value Decomposition (SVD) method.
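A minimal sketch of this estimation step follows, assuming the pairwise constraints of equation (5): each correspondence contributes two linear equations, and numpy's SVD-based least-squares solver recovers the eight parameters. Function and variable names are illustrative.

import numpy as np

def estimate_ground_transform(pts_r, pts_l):
    """pts_r, pts_l: (N, 2) arrays of matched (u, v) points."""
    M, b = [], []
    for (ur, vr), (ul, vl) in zip(pts_r, pts_l):
        # u_l * (a31*ur + a32*vr + 1) = a11*ur + a12*vr + a13
        M.append([ur, vr, 1, 0, 0, 0, -ul * ur, -ul * vr]); b.append(ul)
        # v_l * (a31*ur + a32*vr + 1) = a21*ur + a22*vr + a23
        M.append([0, 0, 0, ur, vr, 1, -vl * ur, -vl * vr]); b.append(vl)
    # least-squares solution via SVD, robust to noisy correspondences
    x, *_ = np.linalg.lstsq(np.asarray(M, float), np.asarray(b, float), rcond=None)
    return np.append(x, 1.0).reshape(3, 3)  # A with a_33 fixed to 1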
A hard problem to solve in the approach described here is the noise introduced in several stages of the process (e.g. image acquisition, correspondence errors, etc.). Some mechanisms were implemented to remove most of the noise while trying to avoid making the process too computationally intensive for real time implementation.
Fig. 4: First pair: image pair with obstacle present. Second pair: disparity image and detected obstacles.
Sample results obtained using the described method are presented in Fig. 4.
To make the obtained results useful for integration with the ultrasonic readings, a further step must be taken: a simple calibration is done on the Z = γ plane, relating points with image coordinates (u, v) to points with coordinates (X, Y, γ) in the real world. Again SVD is used to guarantee the robustness of the calibration. After this calibration step, we can compute the distance from the robot to a detected obstacle by finding the coordinates on the Z = γ plane of the image points where disparity starts to be significant. This range information can then be easily integrated with the data coming from the ultrasonic sensors in a uniform representation.
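As a sketch of how such a calibration could be used, assume a 3 x 3 transform H from image coordinates to ground-plane coordinates has been estimated by an analogous least-squares step; the obstacle range then follows from the planar coordinates of the first significantly disparate image point. Names and signatures are illustrative.

import numpy as np

def image_to_ground(H, u, v):
    """Project an image point onto the Z = gamma ground plane."""
    X, Y, w = H @ np.array([u, v, 1.0])
    return X / w, Y / w

def obstacle_range(H, u, v, robot_xy):
    """Distance from the robot to the ground-plane point where
    disparity starts to be significant."""
    X, Y = image_to_ground(H, u, v)
    return np.hypot(X - robot_xy[0], Y - robot_xy[1])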
II. VISUAL AND ULTRASONIC SENSOR DATA INTEGRATION
The use of several sensing modalities leads us to the problem of integrating the various data flows into a unified world representation. In our approach the integration is supported by a cellular automaton. This can be pictured as a cellular lattice, where each cell is connected to its nearest neighbors (Fig. 5). The cells' values are updated iteratively and depend only on the values of each cell and its neighbors, and on the connections between them, according to the automaton's dynamic rules.
In our representation, each cell maps a square of side S in the environment, and its value is proportional to the certainty of the presence of an obstacle there. This certainty value results from the data incoming from the sensing devices and from the dynamics of the cellular automaton itself. This leads to the following dynamic rules for our cellular automaton:
\Phi_{i,j}(t = 0) = \text{any value};

\text{if } O_{i,j} = 1 \text{ and } K_{i,j} > P_{i,j} \text{ then } O_{i,j} = 0;

\Phi_{i,j}(t+1) = \Phi_{MAX}, \quad \text{if } O_{i,j} = 1;

\Phi_{i,j}(t+1) = \alpha \cdot \max_{(p,q)} \left[ W_{p,q} \, \Phi_{i+p,j+q}(t) \right], \quad \text{if } O_{i,j} \neq 1;

Fig. 5: Connections between cells in our cellular automaton (A_{i,j}: cell at position (i,j); W_{p,q}: weight of the connection from cell A_{i,j} to cell A_{i+p,j+q}).
with α being a parameter chosen to control the speed at which the cells' values decrease away from obstacles, Φ_{i,j}(t) the value of cell A_{i,j} at time instant t, O_{i,j} a boolean value indicating whether cell A_{i,j} is being considered occupied, K_{i,j} a certainty value of the emptiness of cell A_{i,j}, and P_{i,j} the certainty of the presence of an obstacle in cell A_{i,j}. P_{i,j}, K_{i,j} and O_{i,j} are modified by incoming sensorial information; this way Φ_{i,j}(t) depends not only on its neighbors' values but also on the information about the environment provided by the sensing devices.
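A minimal sketch of one synchronous iteration of these rules follows, under the reconstruction given above; the weight mask, the value of α and Φ_MAX are illustrative assumptions.

import numpy as np

PHI_MAX = 1.0
ALPHA = 0.9  # decay of the potential away from obstacles (assumed value)
W = np.array([[0.7, 1.0, 0.7],   # weaker diagonal connections (assumed)
              [1.0, 0.0, 1.0],
              [0.7, 1.0, 0.7]])

def step(phi, occupied, K, P):
    """One iteration: phi holds cell values, occupied is the boolean grid O,
    K the emptiness certainties and P the obstacle certainties."""
    # a cell stops being considered occupied when the emptiness
    # evidence outweighs the obstacle evidence
    occupied &= ~(K > P)
    new_phi = np.zeros_like(phi)
    rows, cols = phi.shape
    for i in range(rows):
        for j in range(cols):
            if occupied[i, j]:
                new_phi[i, j] = PHI_MAX  # obstacle cells clamp to the maximum
            else:
                best = 0.0
                for p in (-1, 0, 1):
                    for q in (-1, 0, 1):
                        if 0 <= i + p < rows and 0 <= j + q < cols:
                            best = max(best, W[p + 1, q + 1] * phi[i + p, j + q])
                new_phi[i, j] = ALPHA * best  # potential decays away from obstacles
    return new_phi, occupied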
Taken across all the mapped area, this is equivalent to the on-line computation of a potential field [6, 9], rising in the vicinity of obstacles and decreasing away from them. By avoiding locations where the representation presents high certainty values, the robot avoids colliding with obstacles. The result is a very flexible structure which enables us to integrate, in the same 2-D low level representation, information obtained from various sensing devices. The dynamics of the structure itself performs data validation between different sources and accumulated readings. Results for a room used during the experiments, with two boxes in the middle, are presented in Fig. 6.
III. CONTROLLING THE ROBOT
A three layer feed-forward artificial neural network has been trained to navigate the robot as information about the environment is being gathered and processed in the cellular automaton, using that information to safely avoid obstacles and arrive at preset goals [1].

Fig. 6: Mapping an experimental room.
This type of neural network has often been used in mobile robotics [8, 9, 7] to enable robots to reproduce simple, but intelligent, behaviors. We used it to teach the robot simple heuristics, enabling it to navigate through its environment while avoiding the obstacles in its way. To do this, the network inputs were fed with information collected from the environmental model being created by the integration of the incoming sensorial flows, and its output was a free direction that would allow the robot to get nearer to its goal. The information fed to the network inputs emulated a ring of virtual sensors around the robot. When the controller asks for sensorial information, these virtual sensors obtain their range readings from the world model being constructed, thus being much more reliable than real sensors (e.g. than using the ultrasonic sensors alone).
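As a sketch of such a ring of virtual sensors, each one can cast a ray through the grid model and report the range at which the cell values first indicate an obstacle. The number of sensors, threshold and cell size are illustrative assumptions.

import numpy as np

N_SENSORS = 16
THRESH = 0.8      # potential value treated as an obstacle (assumed)
MAX_RANGE = 5.0   # metres (assumed)
CELL = 0.1        # grid cell side in metres (assumed)

def virtual_ring(phi, rx, ry, heading):
    """Range readings taken from the grid model phi instead of real sensors."""
    readings = []
    for k in range(N_SENSORS):
        ang = heading + 2 * np.pi * k / N_SENSORS
        r = MAX_RANGE
        for d in np.arange(0.0, MAX_RANGE, CELL):
            i = int((ry + d * np.sin(ang)) / CELL)
            j = int((rx + d * np.cos(ang)) / CELL)
            if not (0 <= i < phi.shape[0] and 0 <= j < phi.shape[1]):
                break  # ray left the mapped area
            if phi[i, j] > THRESH:
                r = d  # first cell considered occupied along this ray
                break
        readings.append(r)
    return readings  # fed to the neural controller's inputs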
This simple controller is being used with success, in cooperation with the model building system, to navigate the robot in rooms with unknown structure. In Fig. 7 we can see the environmental model under construction while the robot is being navigated by the neural controller.
IV. CONCLUSIONS AND FUTURE WORK
The main conclusion of our work is that different sources of sensorial information can be integrated in a common structure to allow the robust modeling of the environment in autonomous robot navigation. Experiments with an isolated source of information showed us that most of the encountered problems can be overcome by the inclusion of more sensing devices.

Fig. 7: Navigating the robot in a real room.
In our particular setup, it was easy to conclude that the use of ultrasonic ranging information alone was not reliable (e.g. at very short distances) as a result of the sensors' performance. We believe that by including vision, robustness and flexibility were added, while validation between different sources of information became available. On the other hand, ultrasonic information is very fast to acquire, while image processing is always problematic to implement in real time. This way, by using ultrasonic sensors together with binocular active vision, we can be less demanding in the processing power needed to deal with stereo image pairs.
Another aspect of our work worth mentioning is the implementation of both the integration of sensorial information and the control mechanism using connectionist structures. Neural networks and cellular automata both rely on the same principle of many simple processing units densely interconnected. The implementation of this kind of technique can allow the fast processing of large maps and the development of higher level behaviors.
At this stage our prospects of future work are mainly focused on exploring the richness of possibilities given by the use of binocular active vision in mobile robotics [2, 3, 5]. Our initial setup only gives low level obstacle detection information, but we hope to expand the range of information obtained from the flow of stereo image pairs (e.g. 3D information and optical flow). In the near future we propose to extend the 2-D model to a 3-D one, again using the possibilities of stereo vision. Acquiring higher level information than simple obstacle presence from the model (e.g. obstacle structure) is another issue being considered.
References

[1] A. Silva, P. Menezes, J. Dias, H. Araújo. "A Connectionist Approach to Obstacle Detection and Avoidance", in Proceedings of the 27th Symposium on Industrial Robots, Milan, October 1996.

[2] C. Paredes, J. Dias, A. de Almeida. "Detecting Movements Using Fixation", in Proceedings of the 2nd Portuguese Conference on Automation and Control, Oporto, September 1996.

[3] I. Sousa, J. Dias, H. Araújo. "Normal Optical Flow Based on Log-Polar Images", in Proceedings of the International Symposium on Intelligent Robotic Systems, Lisbon, July 1996.

[4] P. Menezes, J. Dias, H. Araújo. "Low Cost Obstacle Detection and Description", in Fourth International Symposium on Experimental Robotics, Stanford, June 1995.

[5] J. Dias, C. Paredes, I. Fonseca et al. "Simulating Pursuit with Machines", in Proceedings of the 1995 IEEE Conference on Robotics and Automation, Japan, 1995.

[6] Christopher I. Connolly and Roderic A. Grupen. "Nonholonomic Path Planning Using Harmonic Functions", UMass Amherst Computer Science Department Technical Report, 1994.

[7] R. Glasius, A. Komoda and S. Gielen. "Neural Network Dynamics for Path Planning and Obstacle Avoidance", in Neural Networks, March 1994.

[8] D. Kontoravdis, A. Likas and A. Stafylopatis. "Collision-Free Movement of an Autonomous Vehicle Using Reinforcement Learning", in 10th European Conference on Artificial Intelligence, 1992.

[9] E. Plumer. "Neural Network Structure for Navigation Using Potential Fields", in Proceedings of the IEEE Conference on Robotics and Automation, 1992.

[10] J. Borenstein. "Real-Time Obstacle Avoidance for Fast Mobile Robots", in IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 5, September/October 1989.

[11] A. Elfes. "Sonar-Based Real-World Mapping and Navigation", in IEEE Journal of Robotics and Automation, Vol. RA-3, No. 3, June 1987.

[12] O. Khatib. "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots", in The International Journal of Robotics Research, Vol. 5, No. 1, Spring 1986.