Grid Based Navigation for Autonomous Robots*
An Algorithm Based on the Integration of Vision and Sonar Data

A. Silva, P. Menezes, J. Dias
Departamento de Engenharia Electrotécnica
Instituto de Sistemas e Robótica
Pinhal de Marrocos, 3030 Coimbra, Portugal
{arlindo, paulo, jorge}@isr.uc.pt
Abstract: In this article, visual data obtained by a binocular active vision system is integrated, together with ultrasonic range measurements, in the development of an obstacle detection and avoidance system based on a connectionist grid. This grid-based framework allows the integration of two different sensing modalities (vision and sonar) in a way that incoming sensorial data can be mutually enhanced and validated. Each grid node maps a configuration in a discrete subset of the robot's configuration space. Detected obstacles result in sets of restricted configurations. Using the grid data, a potential field is iteratively computed for each mapped configuration. This computation is done in real time, during the robot's motion, so the potential field changes as new data is integrated into the grid. We also propose the implementation of some innovative sensing and control strategies, using the integrated sensorial information.
I. SENSING MODALITIES

Besides odometry, two main sensing modalities are used in the process of environment mapping: ultrasonic range data and information acquired through binocular active vision.
A. Ultrasonic Sensor Range Measurements

The range measurements are obtained from a ring of 24 ultrasonic sensors present in our mobile platform. This kind of sensor, being cheap and easy to use, frequently presents erroneous data, caused by multiple reflections and sonar scattering. However, this information can be validated, on the one hand, through the accumulation of readings and, on the other hand, by including another modality of sensorial information [4].
For the ultrasonic sensors a simple model is used: a reading is represented by a cone whose arc points have the same probability of belonging to an obstacle, that probability being inversely proportional to the distance measurement. A point inside the cone is considered to have a probability of not belonging to an obstacle, and this probability is proportional to the distance between the point and the cone's arc. An illustration of this principle is summarized in Fig. 2.

*This work was partially financed by the NATO Scientific Affairs Division in the framework of the NATO Science for Stability Program, PO-ROBOT project. †Working under grant PRAXIS XXI BM/7255/95 from JNICT - Junta Nacional de Investigação Científica e Tecnológica.

Fig. 1: Mobile platform with stereoscopic setup.
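The cone model above can be sketched as a grid-update routine. The following Python sketch is illustrative only: the function name, the exact probability scaling, and the update rule are assumptions, not the authors' implementation.

```python
import numpy as np

def sonar_cone_update(grid, sensor_xy, heading, reading, opening, cell_size):
    """Update an occupancy grid from a single sonar reading (hypothetical
    sketch of the cone model): cells on the arc at the measured range get
    occupancy evidence inversely proportional to the range; cells inside
    the cone get emptiness evidence proportional to their distance from
    the arc."""
    p_arc = min(1.0, 1.0 / max(reading, 1e-6))   # occupied evidence at the arc
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            # cell centre in world coordinates
            x = (j + 0.5) * cell_size
            y = (i + 0.5) * cell_size
            dx, dy = x - sensor_xy[0], y - sensor_xy[1]
            d = np.hypot(dx, dy)
            bearing = np.arctan2(dy, dx) - heading
            bearing = (bearing + np.pi) % (2 * np.pi) - np.pi
            if abs(bearing) > opening / 2 or d > reading:
                continue                         # outside the cone
            if d > reading - cell_size:          # on the arc: obstacle evidence
                grid[i, j] = min(1.0, grid[i, j] + p_arc)
            else:                                # inside: emptiness evidence
                grid[i, j] = max(0.0, grid[i, j] - (reading - d) / reading * p_arc)
    return grid
```

Accumulating several such readings over time is what allows the erroneous measurements mentioned above to be averaged out.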
Using this technique we can speed up the update of this grid-based representation, with very good experimental results, while avoiding the usual Gaussian representations (see Fig. 6).
B. Obstacle Detection by Binocular Disparity

Vision (Fig. 3) is a powerful sensing modality which can be used not only in low-level tasks, like the obstacle detection and avoidance presented here, but also in higher-level problems, e.g. object identification and pursuit. In this work we are using stereoscopic vision as a means of complementing and validating the data obtained by the ultrasonic sensors. To do this, we developed a simple algorithm for obstacle detection using stereoscopic information. Its simplicity makes it useful for mobile robotics, where the need for real-time implementation asks for algorithms with moderate computational demands.
IEEE Catalog Number: 97TH8280 - ISIE'97 - Guimarães, Portugal

Fig. 2: Model for an ultrasonic sensor reading (v = reading value, r = sonar opening).

The solution found allows us to detect obstacles in the field of view of the stereoscopic setup, both above and below the robot's motion plane. The algorithm follows a simple principle: if we have a stereo pair of images of the ground plane in front of the robot, where Z is constant, and we can establish an affine transformation which relates each point in one image with the corresponding point in the other image, then for any image pair containing points not in the ground plane the transformation will fail. This means that any obstacle in the field of view, above or below the ground plane, will result in a disparity region when the two images are matched using the transformation mentioned above. When the robot is moving, any relevant disparities between image pairs are identified as an obstacle.
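The matching principle can be sketched as follows: warp one image onto the other with the ground-plane transformation and threshold the residual; off-plane points violate the transformation and appear as disparity regions. In this numpy sketch the function name, the nearest-neighbour warp, and the threshold value are assumptions, not the authors' implementation.

```python
import numpy as np

def disparity_mask(left, right, A, thresh=30):
    """Detect obstacle pixels by warping the right image onto the left
    with the ground-plane transformation A (3x3 homogeneous matrix) and
    thresholding the absolute difference. Assumes 2-D grayscale images."""
    h, w = left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous left coords
    mapped = A @ pts                                          # left -> right mapping
    mapped /= mapped[2]
    xr = np.clip(np.round(mapped[0]).astype(int), 0, w - 1)
    yr = np.clip(np.round(mapped[1]).astype(int), 0, h - 1)
    warped = right[yr, xr].reshape(h, w)                      # right image seen from left
    return np.abs(left.astype(int) - warped.astype(int)) > thresh
```

With a correct ground-plane transformation, pixels belonging to the floor cancel out and only off-plane structure survives the threshold.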
The mathematics underlying this principle is very simple. A real-world point with coordinates (X, Y, Z) is related to its image representation, with coordinates (u, v), by a 3 x 4 transformation matrix, as expressed in (1):

s [u v 1]^T = P_{3x4} [X Y Z 1]^T    (1)
From our particular setup we have that Z = y, since we want all points to lie in the ground plane. This way expression (1) can be simplified into (2):

s [u v 1]^T = A_{3x3} [X Y 1]^T    (2)
Since we have two images of the same scene, one from the right camera and another from the left camera, we can write:

s_l [u_l v_l 1]^T = A_l [X Y 1]^T,    s_r [u_r v_r 1]^T = A_r [X Y 1]^T    (3)

From (3) we easily obtain the relation between matching points in the left and right images:

s [u_l v_l 1]^T = A [u_r v_r 1]^T,  with  A = A_l A_r^{-1}    (4)

Fig. 3: Detailed view of the stereoscopic setup.

Manipulating these equations and making a_{3,3} = 1, we can easily obtain (5):

u_l (a_{3,1} u_r + a_{3,2} v_r + 1) = a_{1,1} u_r + a_{1,2} v_r + a_{1,3}
v_l (a_{3,1} u_r + a_{3,2} v_r + 1) = a_{2,1} u_r + a_{2,2} v_r + a_{2,3}    (5)
This relation is a set of linear equations relating the parameters a_{i,j} with the coordinates of corresponding points in the left and right images. Since we have more unknowns than equations, we need a set of corresponding point pairs, at least 8, in order to make the computation of all a_{i,j} possible.
If we admit that there is some amount of noise in the point coordinates, then to obtain robust results it is advisable to use a larger set of points and a minimization criterion, such as the least-squares criterion. Equation (5) can then be solved by using the Singular Value Decomposition (SVD) method.
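The least-squares solution of (5) can be sketched with the standard direct linear transformation (DLT) solved by SVD. The function name and this exact formulation are assumptions, shown as a standard technique rather than the authors' code.

```python
import numpy as np

def estimate_ground_homography(pts_r, pts_l):
    """Least-squares estimate of the 3x3 transformation A relating
    ground-plane points between the right and left images, as in (5).
    pts_r, pts_l: (N, 2) arrays of corresponding pixel coordinates;
    the paper quotes at least 8 pairs. Illustrative DLT sketch."""
    rows = []
    for (ur, vr), (ul, vl) in zip(pts_r, pts_l):
        # two linear equations per correspondence, with a_{3,3} fixed to 1
        rows.append([ur, vr, 1, 0, 0, 0, -ul * ur, -ul * vr, -ul])
        rows.append([0, 0, 0, ur, vr, 1, -vl * ur, -vl * vr, -vl])
    M = np.asarray(rows, float)
    # the solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(M)
    A = Vt[-1].reshape(3, 3)
    return A / A[2, 2]           # enforce a_{3,3} = 1
```

With noisy correspondences the same call returns the least-squares minimizer, which is why an overdetermined point set is preferred.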
A hard problem to solve in the approach described here is the noise introduced in several stages along the process (e.g. image acquisition, correspondence errors, etc.). Some mechanisms were implemented to remove most of the noise while trying to avoid making the process too computationally intensive for real-time implementation.
Fig. 4: First pair: image pair with obstacle present. Second pair: disparity image and detected obstacles.
Sample results obtained using the described method are presented in Fig. 4.

To make the obtained results useful for integration with the ultrasonic readings, a further step must be taken: a simple calibration is done in the Z = y plane, relating points with image coordinates (u, v) to points with (X, Y, y) coordinates in the real world. Again SVD is used to guarantee the calibration's robustness. After this calibration step, we can compute the distance from the robot to a detected obstacle by finding the coordinates in the Z = y plane of the image points where disparity starts to be significant. This range information can then be easily integrated with the data coming from the ultrasonic sensors in a uniform representation.
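Once the calibration is available, the range computation reduces to mapping the first significantly disparate image point to ground coordinates. A minimal sketch, assuming a hypothetical 3x3 image-to-ground matrix H_img2ground produced by the SVD calibration step:

```python
import numpy as np

def obstacle_range(H_img2ground, px, robot_xy=(0.0, 0.0)):
    """Map an image point (u, v) where disparity becomes significant to
    ground-plane coordinates (X, Y) using the calibration matrix, then
    return the Euclidean distance from the robot. Hypothetical names."""
    u, v = px
    X, Y, W = H_img2ground @ np.array([u, v, 1.0])
    X, Y = X / W, Y / W                      # back from homogeneous coords
    return np.hypot(X - robot_xy[0], Y - robot_xy[1]), (X, Y)
```

The returned range can then be treated exactly like an ultrasonic reading when updating the grid.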
II. VISUAL AND ULTRASONIC SENSOR DATA INTEGRATION

The use of several sensing modalities takes us to the problem of integrating the various data flows into a unified world representation. In our approach the integration is supported by a cellular automaton. This can be pictured as a cellular lattice, where each cell is connected to its nearest neighbors (Fig. 5). The cells' values are updated iteratively, and depend only on the values of each cell and its neighbors, and the connections between them, according to the automaton's dynamic rules.
In our representation, each cell maps an S-sided square of the environment, and its value is proportional to the certainty of the presence of an obstacle there. This certainty value is the result of data incoming from the sensing devices and of the dynamics of the cellular automaton itself. This leads to the following dynamic rules for our cellular automaton:
Phi_{i,j}(t = 0) = any value;
if O_{i,j} = 1 and K_{i,j} > P_{i,j} then O_{i,j} = 0;
Phi_{i,j}(t + 1) = MAX, if O_{i,j} = 1;
Phi_{i,j}(t + 1) = alpha * max_{(p,q)} W_{p,q} Phi_{i+p,j+q}(t), if O_{i,j} != 1;

A_{i,j} - cell at position (i, j) in the cellular automaton. W_{p,q} - weight of the connection from cell A_{i,j} to cell A_{i+p,j+q}.

Fig. 5: Connections between cells in our cellular automaton.
with alpha being a parameter chosen to control the speed at which the cells' values decrease away from obstacles, Phi_{i,j}(t) the value of cell A_{i,j} at time instant t, O_{i,j} a boolean value indicating if cell A_{i,j} is being considered occupied, K_{i,j} a certainty value of the emptiness of cell A_{i,j}, and P_{i,j} the certainty of the presence of an obstacle in cell A_{i,j}. P_{i,j}, K_{i,j} and O_{i,j} are modified by incoming sensorial information, and this way Phi_{i,j}(t) will depend not only on its neighbors' values but also on the information about the environment provided by the sensing devices.
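A synchronous implementation of these rules can be sketched as follows. The uniform weight matrix W and the exact free-cell update are reconstructions from the surrounding text, not the published equations.

```python
import numpy as np

def automaton_step(phi, occupied, K, P, alpha=0.8, W=None, MAX=1.0):
    """One synchronous update of the grid: occupied cells are clamped to
    MAX, free cells take alpha times the strongest weighted neighbour
    value, so the field decays away from obstacles. Emptiness evidence
    (K > P) clears the occupancy flag, as in the sensor rule above."""
    if W is None:
        W = np.ones((3, 3))          # uniform connections to the 8 neighbours
    occupied = occupied & ~(K > P)   # sensor rule: emptiness wins
    rows, cols = phi.shape
    new = np.zeros_like(phi)
    for i in range(rows):
        for j in range(cols):
            if occupied[i, j]:
                new[i, j] = MAX
            else:
                best = 0.0
                for p in (-1, 0, 1):
                    for q in (-1, 0, 1):
                        ii, jj = i + p, j + q
                        if 0 <= ii < rows and 0 <= jj < cols:
                            best = max(best, W[p + 1, q + 1] * phi[ii, jj])
                new[i, j] = alpha * best
    return new, occupied
```

Iterating this step while the grid is fed with sensor data yields the on-line potential field described below.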
Taken across all the mapped area, this is equivalent to the result of an on-line computation of a potential field [6, 9], rising in the vicinity of obstacles and decreasing away from them. By avoiding locations where the representation presents high certainty values, the robot will avoid colliding with obstacles. The result is a very flexible structure which enables us to integrate, in the same 2-D low-level representation, information obtained from various sensing devices. The dynamics of the structure itself performs data validation between different sources and accumulated readings. Results for a room used during experiments, with two boxes in the middle, are presented in Fig. 6.
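As a minimal illustration of how the on-line field can steer the robot (this is not the neural controller of the next section), a greedy step simply moves to the neighbouring cell with the lowest certainty value; the function name is hypothetical.

```python
import numpy as np

def best_free_neighbour(phi, cell):
    """Pick the 8-connected neighbouring cell with the lowest potential
    value, i.e. the locally safest direction away from obstacles."""
    i, j = cell
    best, best_cell = np.inf, cell
    for p in (-1, 0, 1):
        for q in (-1, 0, 1):
            if (p, q) == (0, 0):
                continue
            ii, jj = i + p, j + q
            if 0 <= ii < phi.shape[0] and 0 <= jj < phi.shape[1] and phi[ii, jj] < best:
                best, best_cell = phi[ii, jj], (ii, jj)
    return best_cell
```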
III. CONTROLLING THE ROBOT
A three-layer feed-forward artificial neural network has been trained to navigate the robot, as information about the environment is being gathered and processed in the cellular automaton, using that information to safely avoid obstacles and arrive at preset goals [1]. This neural network type has often been used in mobile robotics [8, 9, 7] to enable robots to reproduce simple, but intelligent, behaviors. We used it to teach the robot simple heuristics, enabling it to navigate through its environment while avoiding the obstacles in its way.

Fig. 6: Mapping an experimental room.
To do this, the network inputs were fed with information collected from the environmental model being created by the integration of the incoming sensorial flows, and its output was a free direction that would allow the robot to get nearer to its goal. The information fed to the network inputs emulated a ring of virtual sensors around the robot. When the controller asks for sensorial information, these virtual sensors obtain their range readings from the world model being constructed, thus being much more reliable than real sensors (e.g. than using the ultrasonic sensors alone).
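The virtual-sensor idea can be sketched as ray casting through the fused occupancy grid; the function name, the ranges, and the thresholds below are assumptions for illustration.

```python
import numpy as np

def virtual_sensor_ring(grid, pose, n_sensors=16, max_range=3.0,
                        cell_size=0.1, occ_thresh=0.5):
    """Emulate a ring of range sensors around the robot by casting rays
    through the occupancy grid built from the fused data. pose is
    (x, y, heading); returns one range per virtual sensor, to be fed
    to the neural controller."""
    x0, y0, heading = pose
    readings = np.full(n_sensors, max_range)
    for k in range(n_sensors):
        ang = heading + 2 * np.pi * k / n_sensors
        dx, dy = np.cos(ang), np.sin(ang)
        d = 0.0
        while d < max_range:
            i = int((y0 + d * dy) / cell_size)
            j = int((x0 + d * dx) / cell_size)
            if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
                break                      # ray left the mapped area
            if grid[i, j] > occ_thresh:    # hit a likely obstacle
                readings[k] = d
                break
            d += cell_size / 2
    return readings
```

The controller's "free direction" can then be taken, for instance, as the sensor index with the largest reading that also reduces the distance to the goal.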
This simple controller is being used with success, in cooperation with the model-building system, to navigate the robot in rooms with unknown structure. In Fig. 7 we can see the environmental model under construction while the robot is being navigated by the neural controller.
IV. CONCLUSIONS AND FUTURE WORK
The main conclusion of our work is that different sources of sensorial information can be integrated in a common structure to allow the robust modeling of the environment in autonomous robot navigation. Experiments with an isolated source of information showed us that most of the encountered problems can be overcome by the inclusion of more sensing devices.

Fig. 7: Navigating the robot in a real room.
In our particular setup, it was easy to conclude that the use of ultrasonic ranging information alone was not reliable (e.g. at very short distances) as a result of the sensors' performance. By including vision, we believe that robustness and flexibility were added, while validation between different sources of information became available. On the other hand, ultrasonic information is very fast to acquire, while image processing is always problematic to implement in real time. This way, with the use of ultrasonic sensors together with binocular active vision, we can be less demanding in the processing power needed to deal with stereo image pairs.
Another aspect of our work worth mentioning is the implementation of both the integration of sensorial information and the control mechanism using connectionist structures. Neural networks and cellular automata both rely on the same principle of many simple processing units densely interconnected. Implementation of this kind of technique can allow the fast processing of large maps and the development of higher-level behaviors.
At this stage our prospects of future work are mainly focused on the exploration of the richness of possibilities given by the use of binocular active vision in mobile robotics [2, 3, 5]. Our initial setup only gives low-level obstacle detection information, but we hope to expand the range of information obtained from the flow of stereo image pairs (e.g. 3-D information and optical flow). In the near future we propose to extend the 2-D model to a 3-D one, again using the possibilities of stereo vision. Acquiring higher-level information than simple obstacle presence from the model (e.g. obstacle structure) is another issue being considered.
References

[1] A. Silva, P. Menezes, J. Dias, H. Araújo. "A Connectionist Approach to Obstacle Detection and Avoidance", in Proceedings of the 27th Symposium on Industrial Robots, Milan, October 1996.
[2] C. Paredes, J. Dias, A. de Almeida. "Detecting Movements Using Fixation", in Proceedings of the 2nd Portuguese Conference on Automation and Control, Oporto, September 1996.
[3] I. Sousa, J. Dias, H. Araújo. "Normal Optical Flow Based on Log-Polar Images", in Proceedings of the International Symposium on Intelligent Robotic Systems, Lisbon, July 1996.
[4] P. Menezes, J. Dias, H. Araújo. "Low Cost Obstacle Detection and Description", in Fourth International Symposium on Experimental Robotics, Stanford, June 1995.
[5] J. Dias, C. Paredes, I. Fonseca et al. "Simulating Pursuit with Machines", in Proceedings of the 1995 IEEE Conference on Robotics and Automation, Japan, 1995.
[6] C. I. Connolly and R. A. Grupen. "Nonholonomic Path Planning Using Harmonic Functions", UMass Amherst Computer Science Department Technical Report, 1994.
[7] R. Glasius, A. Komoda and S. Gielen. "Neural Network Dynamics for Path Planning and Obstacle Avoidance", in Neural Networks, March 1994.
[8] D. Kontoravdis, A. Likas and A. Stafylopatis. "Collision-Free Movement of an Autonomous Vehicle Using Reinforcement Learning", in 10th European Conference on Artificial Intelligence, 1992.
[9] E. Plumer. "Neural Network Structure for Navigation Using Potential Fields", in Proceedings of the IEEE Conference on Robotics and Automation, 1992.
[10] J. Borenstein. "Real-Time Obstacle Avoidance for Fast Mobile Robots", in IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 5, September/October 1989.
[11] A. Elfes. "Sonar-Based Real-World Mapping and Navigation", in IEEE Journal of Robotics and Automation, Vol. RA-3, No. 3, June 1987.
[12] O. Khatib. "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots", in The International Journal of Robotics Research, Vol. 5, No. 1, Spring 1986.