Journal of Control, Automation and Electrical Systems
https://doi.org/10.1007/s40313-020-00666-w
Real-Time Autonomous Vehicle Localization Based on Particle and Unscented Kalman Filters

Wael Farag 1,2

1 College of Engineering and Technology, American University of the Middle East, Egaila, Kuwait
2 Electrical Engineering Department, Cairo University, Giza, Egypt
Correspondence: wael.farag@aum.edu.kw; wael.farag@cu.edu.eg
Received: 19 May 2020 / Revised: 17 August 2020 / Accepted: 1 November 2020
© Brazilian Society for Automatics - SBA 2021
Abstract

In this paper, a real-time Monte Carlo localization (RT_MCL) method for autonomous cars is proposed. Unlike other localization approaches, its main contribution is the balanced treatment of both pose-estimation accuracy and real-time performance. The RT_MCL method is based on the fusion of lidar and radar measurement data for object detection, a probabilistic map of pole-like landmarks, and a tailored particle filter for pose estimation. The lidar and radar are fused using the unscented Kalman filter (UKF) to provide pose estimates of pole-like static objects that are well suited to serve as landmarks for vehicle localization in urban environments. These pose estimates are then clustered using the grid-based density-based spatial clustering of applications with noise (GB-DBSCAN) algorithm to represent each pole landmark in the form of a source-point model, reducing computational cost and memory requirements. A reference map that includes the pole landmarks is generated offline, extracted from a 3-D lidar, and used by a carefully designed particle filter for accurate ego-car localization. The particle filter is initialized by the fused GPS + IMU measurements and uses an ego-car motion model to predict the states of the particles. The data association between the landmarks estimated by the UKF and those in the reference map is performed using the iterative closest point (ICP) algorithm. The RT_MCL is implemented in the high-performance language C++ and utilizes highly optimized math and optimization libraries for best real-time performance. Extensive simulation studies have been carried out to evaluate the performance of the RT_MCL in both longitudinal and lateral localization.
Keywords Sensor fusion· Kalman filter· Localization· Monte Carlo· Particle filter· Autonomous driving
1 Introduction
Improving safety (Farag 2019a), lowering road accidents, boosting energy efficiency, enhancing comfort, and enriching the driving experience (Farag 2019b) are the most important driving forces behind equipping present-day cars with Advanced Driving Assistance Systems (ADASs) (Farag and Saleh 2018a). Many ADAS functions represent incremental steps toward a hypothetical future of safe, fully autonomous cars (Farag 2020a).

Future ADAS and autonomous vehicles require accurate and reliable self-localization systems (Farag and Saleh 2019a). Accuracy is required because an exact pose estimate plays a major role in enabling state-of-the-art ADAS functionalities such as automated lane following (Farag and Saleh 2019b) or collision avoidance (Farag and Saleh 2018b). Reliability is required because the quality of the pose estimate must be preserved even under changing environmental conditions to maintain the highest safety levels (Farag 2019).
Localization based on satellite systems has been available for decades and has undergone several improvements (Woo et al. 2019). Recent systems such as RTK-GPS or DGPS present an efficient solution, since they achieve centimeter-level accuracy without being augmented by any additional algorithms (Osterwood and Noble 2017). However, there are several doubts about their reliability (Modsching et al. 2006). In urban areas, buildings or other tall road objects that obstruct the line of sight between the vehicle and the satellites can degrade the accuracy to several meters (Carlevaris-Bianco et al. 2015). Therefore, employing dense reference maps such as polygon meshes, grid maps, or point clouds represents an alternative localization solution with higher
reliability (Levinson and Thrun 2010). However, map-based approaches have a major downside: they require a huge amount of memory that immediately becomes unaffordable if larger-scale maps are to be used. To overcome this problem, so-called "landmark maps" have attracted a lot of interest (Schaefer et al. 2019). In these maps, massive amounts of raw sensory data points (lidar, radar, and/or camera) are condensed into a much smaller number of distinct, marked features. This way, the required memory usage can be decreased by several orders of magnitude (Kummerle et al. 2019).
Localization of self-driving cars (SDCs) using global navigation satellite systems (GNSS) alone is not enough, as it results in lateral errors of around 5 m (Miura et al. 2015), which can reach 1 m in the best-covered areas. An SDC requires lateral/longitudinal localization errors of less than 0.3 m from the ground truth (and even 0.1 m is highly recommended) (Levinson and Thrun 2010). The fusion of GPS and IMU improves the localization errors, especially in urban canyons or tunnels, but the results are far from the requirement (reaching 1.77 m (Lee et al. 2015)) and still not reliable enough for serious SDC deployment. Therefore, the GPS/IMU fusion needs to be augmented with perception sensors and digital maps to further bring down the location error (Jo et al. 2015) to the required target (Alrousan et al. 2020). However, this augmentation, especially when processing detailed digitized maps, results in a high computational overhead and expensive hardware requirements that make it unsuitable for affordable ADAS or autonomous driving functionalities (Schaefer et al. 2019; Sefati et al. 2017).
In this work, a vehicle localization approach for urban environments, referred to as RT_MCL, is proposed. The RT_MCL relies on pole-like landmarks extracted from lidar/radar fused data. Pole-like landmarks typically take several forms, such as traffic signs, telegraph poles, tree trunks, street lamps and bollards. Several features make them very convenient and reliable for vehicle localization: (a) they are abundant and widespread in urban areas, (b) they rarely get dislocated and stay in their locations for a long time, and (c) they have well-defined rounded geometrical shapes that are not affected by season or weather.
The RT_MCL matches the poles detected by the ego-car sensors with those registered in a global reference map. This high-resolution map is developed precisely offline using lidar/radar data covering the same area in which the ego-car will drive. The environment in the reference map is modeled not as spatial poles with fixed positions, but as probabilistic poles, whereby every pole is represented by a Gaussian distribution over its position. Consequently, Bayesian inference (employed by a particle filter) can favorably weight the parts of the map most likely to be relevant to the ego-car pose, thereby reducing uncertainty and catastrophic errors. Moreover, to increase the precision of pole detection, the RT_MCL uses the unscented Kalman filter (UKF) for sensor data fusion to combine the merits of both lidar and radar and to reduce scattering.
In contemporary research, several endeavors have been carried out to address the problem of vehicle localization (Farag 2020b) utilizing pole-like landmarks extracted from lidar point clouds. This problem can be divided into two subproblems: (a) pole detection and pole-position estimation, and (b) pole-based ego-car pose estimation. For example, Weng et al. (2018) developed a pole detector by partitioning the space around the ego-car and counting the number of reflected scan points per voxel. Poles are detected by identifying vertically connected stacks of voxels that all exceed a certain pre-specified threshold. Furthermore, the detector fits a cylinder to all the points associated with the identified stacks of voxels using RANSAC (Fischler and Bolles 1981). A particle filter combined with nearest-neighbor data association is then used for ego-car pose estimation. As another example, Sefati et al. (2017) based their pole detection approach on removing the ground plane from the point cloud received from the sensors. The remaining points are projected onto a horizontal regular grid; the neighboring cells are then clustered based on occupancy and height, and afterward a cylinder is fitted to each cluster. The data association is done using the nearest neighbor, and the pose estimation is performed using a particle filter. Kummerle et al. (2019) refine the pose estimate further by augmenting Sefati et al.'s pole detection method (2017) with planes fitted to building facades constructed from point clouds and lines fitted to lane markings in stereo camera images. Kummerle et al. (2019) adopt a Monte Carlo method to solve the data association problem but use nonlinear least-squares optimization to refine the estimated pose. Schaefer et al. (2019) present a complete landmark-based localization system that relies on poles extracted from 3-D lidar data and is divided into three modules: the pole extractor, the mapping module, and the localization module. Their pole detector considers not only the laser ray endpoints but also the free space between the laser sensor and the endpoints, and they demonstrate reliable and accurate vehicle localization based on a map of pole landmarks over large time scales. The approach is tested on a long-term dataset that contains 35 h of data recorded over 15 months, including varying routes, construction zones, seasonal and weather changes and lots of dynamic objects.
However, all the mentioned techniques focus on the accuracy of pose estimation and the precision of pole-landmark detection while using extremely powerful hardware that in many cases is not suitable for real deployment in ADAS or autonomous driving functionalities. Therefore, the execution complexity of the mentioned algorithms is not well addressed. In this paper, a balanced treatment of both accuracy and real-time performance is considered in the proposed localization algorithm (RT_MCL). To achieve this goal, the following intermediary steps are implemented:

1. Tailoring the UKF algorithm to fuse multiple radar and lidar data streams to reduce scattering and achieve more accurate pose estimates for stationary pole-like objects around the ego-car in real time.
2. Tailoring the PF algorithm to employ pole-like landmarks for ego-car pose estimation in real time.
3. Employing a high-order generic object motion model (5 state variables) that suits the most common road objects in the development of the UKF and PF to generate more accurate estimates and improve the overall performance.
4. Representing the poles in the reference map in a probabilistic form that allows the Bayesian inference implemented by the PF to contain map uncertainties and reduce localization errors.
5. Evaluating the gain of fusion by testing the UKF on three different cases (lidar + radar, lidar only, and radar only).
6. Evaluating the PF performance using different particle populations and under various map uncertainties.
7. Evaluating the real-time performance of both the UKF and PF on a moderate computational platform.
8. Employing the GB-DBSCAN clustering algorithm to detect potential objects from the lidar/radar fused data and to find their centroids.
9. Employing the RANSAC algorithm to extract the pole parameters by fitting circles (which represent the pole shape) to the clusters identified by GB-DBSCAN.
10. Employing the ICP algorithm for the data association between the poles detected in the sensor data and the poles registered in the reference map.
2 The RT_MCL Method Overview
The flowchart of the proposed RT_MCL autonomous car localization method is shown in Fig. 1. The inputs to the proposed algorithm can be sorted into four items:

(a) The readings of a global navigation satellite system (GNSS), such as GPS, integrated with those of an inertial measurement unit (IMU) to provide the initial estimate of the ego-car pose. The GPS provides a low-accuracy initial position, and the IMU provides the incremental change in this position as well as the heading angle estimate. The main principle of GPS-IMU fusion is correcting the accumulated errors of dead reckoning at intervals with absolute position readings (Suhr et al. 2017). The fused output is used in the initialization step of the particle filter.

(b) The odometry readings, in terms of the ego-car velocity and yaw rate, are filtered from high-frequency noise and used as a control input to the particle filter.
Fig. 1 RT_MCL roadmap (flowchart: GPS/IMU measurements, odometry speed/steering, and lidar/radar measurements feed noise filtering, UKF sensor fusion and GB-DBSCAN clustering; together with the pole-like landmark reference map and ICP data association, the particle filter outputs the ego-car pose)
(c) The lidar/radar measurement data are fused using the proposed UKF to detect objects in the ego-car surroundings. The fused data are then clustered using the GB-DBSCAN algorithm (Dietmayer et al. 2012) to identify potential objects. The algorithm is tuned so as to preferentially identify pole-like static objects. The Doppler velocity of the radar detections is used to discard detections originating from moving objects.

(d) An offline high-definition 3-D point-cloud reference map with identified pole-like landmark coordinates is used as an input to the RT_MCL localizer. The pole-like objects identified by the GB-DBSCAN algorithm are then aligned with the identified pole-like landmarks in a process called "data association": the coordinates of the poles resulting from the clustering are associated with those from the map using the iterative closest point (ICP) algorithm (Lu and Milios 1997). This association is crucial for localization accuracy.

The core of the RT_MCL localizer is the particle filter (Thrun 2002), which will be explained later in detail. The particle filter uses the four inputs to determine the pose of the ego-car with much higher accuracy than the one received from the GPS/IMU fusion.
3 The Unscented Kalman Filter Overview
The Kalman filter (KF) (Zarchan and Musoff 2013) is a system of equations working together to form a predict-update cyclic optimal estimator that minimizes the estimated error covariance. The KF estimates the state $x \in \mathbb{R}^n$ given the measurement $z \in \mathbb{R}^m$ of a discrete-time controlled process that is modeled by a set of linear stochastic difference equations.
However, as the KF is limited to linear processes, it is not suitable for the radar measurement process, which is inherently nonlinear. Therefore, the unscented Kalman filter was introduced (Wan and Van Der Merwe 2000) to overcome this limitation. The UKF is a derivative-free alternative to the extended Kalman filter (EKF) (Einicke and White 1999) that uses a deterministic sampling approach. The UKF also follows the two-step predict-update process; however, it is augmented with further steps, such as the generation and prediction of sigma points, as shown in Fig. 2.
In the UKF process, the state Gaussian distribution is represented using a minimal set of carefully chosen sample points, called sigma points. $n_\sigma = 2n_x + 1$ sigma points are selected based on the following rule:

$$X_k = \begin{bmatrix} x_k & x_k + \sqrt{(\lambda + n_x)\,P_k} & x_k - \sqrt{(\lambda + n_x)\,P_k} \end{bmatrix} \tag{1}$$

where $X_k$ is the sigma-point matrix, which includes $n_\sigma$ sigma-point vectors, $P_k$ is the KF process estimate covariance, and $\lambda$ is a design parameter that determines the spread of the generated sigma points and usually takes the form $\lambda = 3 - n_x$.
In the sigma-point prediction step, each generated sigma point is inserted into the UKF nonlinear process model given in Eq. (2) to produce the matrix of the predicted (estimated) sigma points, which has an $n_x \times n_\sigma$ dimension:

$$\hat{X}_{k+1} = f(X_k, \nu_k) \tag{2}$$

where $\nu_k$ is the process white noise, which follows a Gaussian distribution ($N$) with zero mean and covariance matrix $Q_k$.
In the next step, the predicted state-mean and covariance matrices are calculated from the predicted sigma points as given in Eq. (3):

$$\hat{x}_{k+1} = \sum_{i=0}^{2n_x} w_i \hat{X}_{k+1,i}, \qquad \hat{P}_{k+1} = \sum_{i=0}^{2n_x} w_i \left(\hat{X}_{k+1,i} - \hat{x}_{k+1}\right)\left(\hat{X}_{k+1,i} - \hat{x}_{k+1}\right)^T \tag{3}$$

where the $w_i$'s are the sigma-point weights that are used here to invert the spreading of the sigma points. These weights are calculated as shown in Eq. (4):

$$w_0 = \frac{\lambda}{\lambda + n_x}, \qquad w_i = \frac{1}{2\left(\lambda + n_x\right)}, \quad i = 1, \ldots, 2n_x \tag{4}$$
Fig. 2 UKF roadmap
In the measurement prediction step, each generated sigma point is inserted into the UKF nonlinear measurement model given in Eq. (5) to produce the matrix of the predicted measurement sigma points, which has an $n_z \times n_\sigma$ dimension:

$$\hat{Z}_{k+1} = h(\hat{X}_{k+1}) \tag{5}$$

In the next step, the predicted measurement mean and covariance matrices are calculated from the predicted measurement sigma points, as well as the measurement noise covariance matrix $R$, as given in Eq. (6):

$$\hat{z}_{k+1} = \sum_{i=0}^{2n_x} w_i \hat{Z}_{k+1,i}, \qquad S_{k+1} = \sum_{i=0}^{2n_x} w_i \left(\hat{Z}_{k+1,i} - \hat{z}_{k+1}\right)\left(\hat{Z}_{k+1,i} - \hat{z}_{k+1}\right)^T + R, \qquad R = E\left\{\omega_k \, \omega_k^T\right\} \tag{6}$$

where the $w_i$'s are the sigma-point weights determined using Eq. (4), $S_{k+1}$ is the measurement covariance matrix, and $E\{\cdot\}$ denotes the expected value; $\omega_k$ is the measurement white noise, which follows a Gaussian distribution ($N$) with zero mean and covariance matrix $R$.
The final step is the UKF state update, where the UKF gain matrix ($K$) is calculated as in Eq. (7) using the cross-correlation matrix ($T$) between the sigma points in the state space and the measurement space. The gain is used to update the UKF state vector ($x$) as well as the state covariance matrix ($P$):

$$T_{k+1} = \sum_{i=0}^{2n_x} w_i \left(\hat{X}_{k+1,i} - \hat{x}_{k+1}\right)\left(\hat{Z}_{k+1,i} - \hat{z}_{k+1}\right)^T, \qquad K_{k+1} = T_{k+1} S_{k+1}^{-1}, \qquad x_{k+1} = \hat{x}_{k+1} + K_{k+1}\left(z_{k+1} - \hat{z}_{k+1}\right), \qquad P_{k+1} = \hat{P}_{k+1} - K_{k+1} S_{k+1} K_{k+1}^T \tag{7}$$
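For illustration, the sigma-point generation of Eq. (1) and the weights of Eq. (4) can be transcribed into C++ with the Eigen library (the language and library reported in Sect. 9). The following listing is a minimal sketch under the $\lambda = 3 - n_x$ convention above, not the paper's actual code; the function names are illustrative.

```cpp
#include <Eigen/Dense>

// Sigma-point generation per Eq. (1): the mean plus/minus the columns of
// the scaled matrix square root of P (via Cholesky decomposition).
Eigen::MatrixXd generateSigmaPoints(const Eigen::VectorXd& x,   // state mean
                                    const Eigen::MatrixXd& P) { // state covariance
  const int n_x = static_cast<int>(x.size());
  const double lambda = 3.0 - n_x;                 // spread parameter
  Eigen::MatrixXd A = ((lambda + n_x) * P).llt().matrixL();
  Eigen::MatrixXd Xsig(n_x, 2 * n_x + 1);
  Xsig.col(0) = x;
  for (int i = 0; i < n_x; ++i) {
    Xsig.col(i + 1)       = x + A.col(i);
    Xsig.col(i + 1 + n_x) = x - A.col(i);
  }
  return Xsig;
}

// Sigma-point weights per Eq. (4), used to invert the spreading.
Eigen::VectorXd sigmaWeights(int n_x) {
  const double lambda = 3.0 - n_x;
  Eigen::VectorXd w =
      Eigen::VectorXd::Constant(2 * n_x + 1, 0.5 / (lambda + n_x));
  w(0) = lambda / (lambda + n_x);
  return w;
}
```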
4 The Road Object Model
The state of the road object (Schubert et al. 2008) is determined by the five variables grouped into the state vector $x$ shown in Eq. (8), where $p_x$ and $p_y$ are the object positions along the x-axis and y-axis, respectively, as shown in Fig. 3, $v$ is the magnitude of the object velocity derived from its x and y components, $v_x$ and $v_y$, respectively, $\psi$ is the yaw angle (object orientation), and $\dot{\psi}$ is the rate of change of the object-yaw angle:

$$x = \begin{bmatrix} p_x & p_y & v & \psi & \dot{\psi} \end{bmatrix}^T, \qquad v = \sqrt{v_x^2 + v_y^2}, \qquad \psi = \tan^{-1}\left(\frac{v_y}{v_x}\right) \tag{8}$$

The nonlinear difference equation $x_{k+1} = f(x_k, \nu_k)$ that describes the motion model of the object is derived based on the state vector $x$ and presented in Eqs. (9)-(11):

$$x_{k+1} = x_k + \begin{bmatrix} \frac{v_k}{\dot{\psi}_k}\left(\sin\left(\psi_k + \dot{\psi}_k \Delta t\right) - \sin\psi_k\right) \\ \frac{v_k}{\dot{\psi}_k}\left(-\cos\left(\psi_k + \dot{\psi}_k \Delta t\right) + \cos\psi_k\right) \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix} + \nu_k \tag{9}$$

$$\nu_k = \begin{bmatrix} \frac{1}{2}(\Delta t)^2 \cos\psi_k \cdot \nu_{a,k} \\ \frac{1}{2}(\Delta t)^2 \sin\psi_k \cdot \nu_{a,k} \\ \Delta t \cdot \nu_{a,k} \\ \frac{1}{2}(\Delta t)^2 \cdot \nu_{\ddot{\psi},k} \\ \Delta t \cdot \nu_{\ddot{\psi},k} \end{bmatrix} \tag{10}$$
Fig. 3 A road object motion model
$$\Delta t = t_{k+1} - t_k, \qquad \nu_{a,k} \sim N\left(0, \sigma_a^2\right), \qquad \nu_{\ddot{\psi},k} \sim N\left(0, \sigma_{\ddot{\psi}}^2\right) \tag{11}$$

where $\Delta t$ is the time difference between two consecutive samples, $\ddot{\psi}$ is the yaw acceleration, $a$ is the longitudinal acceleration, $\nu_{a,k}$ is the longitudinal acceleration noise at sample $k$ with variance $\sigma_a^2$, and $\nu_{\ddot{\psi},k}$ is the yaw acceleration noise at sample $k$ with variance $\sigma_{\ddot{\psi}}^2$.
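For illustration, the process model of Eqs. (9)-(11) can be written as a single C++ function; this is a minimal sketch rather than the paper's code, and the guard for $\dot{\psi} \approx 0$ (the straight-line limit of Eq. (9)) is an implementation detail added here.

```cpp
#include <cmath>
#include <Eigen/Dense>

// CTRV-style process function f(x, nu) of Eqs. (9)-(10) for the state
// ordering [px, py, v, psi, psi_dot] of Eq. (8). nu_a and nu_psi_dd are
// sampled longitudinal- and yaw-acceleration noises per Eq. (11).
Eigen::VectorXd processModel(const Eigen::VectorXd& x, double dt,
                             double nu_a, double nu_psi_dd) {
  const double v = x(2), psi = x(3), psi_dot = x(4);
  Eigen::VectorXd xp = x;
  if (std::fabs(psi_dot) > 1e-6) {                // turning motion, Eq. (9)
    xp(0) += v / psi_dot * ( std::sin(psi + psi_dot * dt) - std::sin(psi));
    xp(1) += v / psi_dot * (-std::cos(psi + psi_dot * dt) + std::cos(psi));
  } else {                                        // straight-line limit case
    xp(0) += v * std::cos(psi) * dt;
    xp(1) += v * std::sin(psi) * dt;
  }
  xp(3) += psi_dot * dt;
  // Additive process noise, Eq. (10)
  xp(0) += 0.5 * dt * dt * std::cos(psi) * nu_a;
  xp(1) += 0.5 * dt * dt * std::sin(psi) * nu_a;
  xp(2) += dt * nu_a;
  xp(3) += 0.5 * dt * dt * nu_psi_dd;
  xp(4) += dt * nu_psi_dd;
  return xp;
}
```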
5 GPS/IMU Fusion
This work uses an IMU (an integrated accelerometer, gyroscope and magnetometer) and GPS to determine the initial orientation and position of the ego-car. The following sheds light on the IMU/GPS fusion algorithm developed for this purpose:

1. Setting the sampling rates: the accelerometer and gyroscope run at a relatively high sample rate (160 Hz). Conversely, the GPS and the magnetometer run at a relatively low sample rate (1 Hz). In this fusion algorithm, the magnetometer and GPS samples are processed together at 1 Hz, and the accelerometer and gyroscope samples are processed together at 160 Hz; however, only one out of 160 samples is used in the fusion process.

2. Setting up the fusion filter: the fusion filter uses an extended Kalman filter (EKF) (Farag 2020) to track the orientation (as a quaternion), velocity, position, sensor biases and the geomagnetic vector.

3. Prediction step: this step takes the accelerometer and gyroscope samples from the IMU as inputs and predicts the states one time step ahead based on them. The error covariance of the EKF is updated here.

4. Update step: this step updates the EKF states based on the GPS samples by computing a Kalman gain that weights the various sensor inputs according to their uncertainty (Farag 2021). The error covariance is also updated here, this time using the Kalman gain as well.

5. The above procedure is implemented at a low update rate (1 Hz), which makes the results prone to significant localization errors; therefore, the output of the IMU + GPS fusion is used only as an initialization for the RT_MCL algorithm proposed in this paper. A minimal scheduling sketch follows this list.
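The two-rate structure of this procedure can be sketched as follows. All types and routines in this listing are illustrative stubs standing in for the real IMU/GPS drivers and the EKF equations; only the 160:1 scheduling relationship is the point of the example.

```cpp
// Illustrative stubs for the sensor drivers and EKF steps (not the paper's API).
struct ImuSample { double accel[3], gyro[3]; };
struct GpsSample { double lat, lon, heading; };
ImuSample readImu() { return {}; }                // hypothetical IMU read
GpsSample readGps() { return {}; }                // hypothetical GPS read
void ekfPredict(const ImuSample&) {}              // EKF prediction step
void ekfUpdate(const GpsSample&) {}               // EKF update step

void gpsImuFusionLoop() {
  const int kImuSamplesPerGps = 160;              // 160 Hz IMU vs. 1 Hz GPS
  int count = 0;
  for (;;) {
    ekfPredict(readImu());                        // predict at the IMU rate
    if (++count == kImuSamplesPerGps) {
      ekfUpdate(readGps());                       // update at the GPS rate
      count = 0;
    }
  }
}
```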
6 Sensor Fusion Using UKF
The processed lidar measurement vector includes the moving-object centroid position ($p_x$ and $p_y$) in Cartesian coordinates (Eq. (12)), while the radar measurement vector includes the moving-object centroid position ($\rho$, $\varphi$) and radial velocity ($\dot{\rho}$) in polar coordinates, as represented by Eq. (13). The mapping function that specifies how the lidar Cartesian coordinates are mapped to the radar polar coordinates is given in Eq. (14).
According to Fig.4, each sensor has its own prediction
update scheme; however, both sensors share the same state
prediction scheme. The belief about the object’s position and
velocity is updated asynchronously each time the measure-
ment is received regardless of the source sensor.
In Fig.4, after computing the elapsed time between con-
secutive sensor reading (
Δt
), the sigma points (
Xk
) are gen-
erated using Eq.(1), and then, a next time-step prediction
for sigma points (
̂
Xk+1
) is carried out using Eq.(2) while
employing the moving object nonlinear motion model given
in Eq.(9). The resulted predicted sigma points are then used
to compute the state-mean (
̂xk+1
) and covariance (
̂
Pk+1
)
matrices using Eq.(3).
The fusion technique then branches into two directions based on the source of the last sensor measurement (Farag 2020). If the source is a radar, then, employing the nonlinear radar measurement model (Eq. (13)), the predicted measurement sigma points ($\hat{Z}_{k+1}$) are calculated from the predicted state sigma points ($\hat{X}_{k+1}$) using Eq. (5). Then, the predicted measurements ($\hat{z}_{k+1}$) and their covariance matrix ($S_{k+1}$) are calculated based on Eq. (6) using the measurement noise covariance matrix $R_{radar}$ given in Eq. (15). Then, $\hat{x}_{k+1}$ and $\hat{z}_{k+1}$ are used to compute the cross-correlation matrix ($T_{k+1}$) between the sigma points in the state space ($\hat{X}_{k+1}$) and the measurement space ($\hat{Z}_{k+1}$) as in Eq. (7). Based on this cross-correlation matrix, the Kalman filter gain ($K_{k+1}$) is then calculated and used to compute the updated object state vector ($x_{k+1}$) and covariance matrix ($P_{k+1}$), as shown by Eq. (7).
$$z_{lidar} = \begin{bmatrix} p_x \\ p_y \end{bmatrix}, \qquad z_{radar} = \begin{bmatrix} \rho \\ \varphi \\ \dot{\rho} \end{bmatrix} \tag{12}$$

$$h(x) = \begin{bmatrix} \rho \\ \varphi \\ \dot{\rho} \end{bmatrix} = \begin{bmatrix} \sqrt{p_x^2 + p_y^2} \\ \arctan\left(\frac{p_y}{p_x}\right) \\ \frac{p_x v_x + p_y v_y}{\sqrt{p_x^2 + p_y^2}} \end{bmatrix} \tag{13}$$

$$p_x = \rho\cos(\varphi), \qquad p_y = \rho\sin(\varphi) \tag{14}$$
$$R_{radar} = \begin{bmatrix} \sigma_\rho^2 & 0 & 0 \\ 0 & \sigma_\varphi^2 & 0 \\ 0 & 0 & \sigma_{\dot{\rho}}^2 \end{bmatrix} \tag{15}$$

where $\sigma_\rho$ is the noise standard deviation of the object radial distance, $\sigma_\varphi$ is the noise standard deviation of the object heading (bearing), and $\sigma_{\dot{\rho}}$ is the noise standard deviation of the object radial velocity (range rate).
If the measurement data source is a lidar sensor, then, employing the linear lidar measurement model ($H_{lidar}$) shown in Eq. (16), the predicted measurement sigma points ($\hat{Z}_{k+1}$) are directly calculated from ($\hat{X}_{k+1}$). Then, the predicted measurements ($\hat{z}_{k+1}$) and their covariance matrix ($S_{k+1}$) are calculated based on Eq. (6) using the measurement noise covariance matrix $R_{lidar}$ given in Eq. (16). Then, $\hat{x}_{k+1}$ and $\hat{z}_{k+1}$ are used to compute the cross-correlation matrix ($T_{k+1}$) between the sigma points in the state space ($\hat{X}_{k+1}$) and the measurement space ($\hat{Z}_{k+1}$) as in Eq. (7). Based on this cross-correlation matrix, the Kalman filter gain ($K_{k+1}$) is then calculated and used to compute the updated object state vector ($x_{k+1}$) and covariance matrix ($P_{k+1}$), as shown by Eq. (7) (Schubert et al. 2008).

$$H_{lidar} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{bmatrix}, \qquad R_{lidar} = E\left\{\omega \, \omega^T\right\} = \begin{bmatrix} \sigma_{p_x}^2 & 0 \\ 0 & \sigma_{p_y}^2 \end{bmatrix} \tag{16}$$
where $\sigma_{p_x}$ and $\sigma_{p_y}$ are the noise standard deviations for the object $x$ and $y$ positions, respectively.
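For illustration, the radar model $h(x)$ of Eq. (13) can be transcribed as follows; deriving $v_x$ and $v_y$ from $v$ and $\psi$ of the state vector in Eq. (8), and the guard against a vanishing radial distance, are choices of this sketch rather than details given in the paper.

```cpp
#include <algorithm>
#include <cmath>
#include <Eigen/Dense>

// Radar measurement model h(x) of Eq. (13): maps the state
// [px, py, v, psi, psi_dot] to [rho, phi, rho_dot].
Eigen::Vector3d radarModel(const Eigen::VectorXd& x) {
  const double px = x(0), py = x(1), v = x(2), psi = x(3);
  const double vx = v * std::cos(psi);              // Cartesian velocity components
  const double vy = v * std::sin(psi);
  const double rho = std::sqrt(px * px + py * py);
  Eigen::Vector3d z;
  z(0) = rho;                                       // radial distance
  z(1) = std::atan2(py, px);                        // bearing
  z(2) = (px * vx + py * vy) / std::max(rho, 1e-6); // radial velocity (range rate)
  return z;
}
```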
7 Clustering and Data Association
Figure4 presents the lidar and radar data fusion technique
employing the UKF. The UKF produces point clouds that
provide information about objects in the ego-car surround-
ing. Clustering is a key tool to extract these objects’ infor-
mation (geometry and poses) from UKF point clouds. The
objective of the clustering is to represent each object in the
form of a source-point model to reduce computational cost
and memory requirements.
UKF data processing is performed using clustering and
association algorithms. DBSCAN (Sander etal. 1996) is
an unsupervised clustering algorithm that groups together
data points if the density of the points is high enough. It
requires two parameters to determine the density. The first
parameter is ε describing the radial distance from a point
p, that is being evaluated. The second parameter is minPts,
which is the least number of detections that have to be within
a distance ε from p, including p itself, to form a cluster.
By choosing ε and minPts, it is possible to decide the necessary density for a group of points to form a cluster; however, these fixed parameters are not convenient if various types and topologies of road objects need to be detected. As an improvement to this algorithm, GB-DBSCAN was introduced (Dietmayer et al. 2012). It works in the same manner as DBSCAN but does not have fixed parameters. Instead, a polar grid is created according to the radial and angular resolution of the sensor, and instead of looking at a circular search area with a fixed radius, GB-DBSCAN can use a more dynamic, elliptic search area. The most distinctive feature of a pole-like object is that the density of the point cloud at its position is far greater than in its surroundings.

While GB-DBSCAN is used for coarse clustering, RANSAC (Fischler and Bolles 1981) is used to fine-tune the clustering and associate geometrical shape proposals with the potential coarse clusters. RANSAC is used here to fit a circle (which represents the pole shape) to all points ($N$) in each cluster. After successful fitting, RANSAC can extract the pole parameters (center $(\hat{x}_c, \hat{y}_c)$ and radius $\hat{r}$) from the fitted circles by solving the following equation:

$$\min\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\left(x_i - \hat{x}_c\right)^2 + \left(y_i - \hat{y}_c\right)^2 - \hat{r}^2\right]^2\right\} \tag{17}$$
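For illustration, such a RANSAC circle fit can be sketched as follows; the three-point circumcircle construction, the iteration count and the 5-cm inlier tolerance are assumptions of this sketch, not the paper's tuned settings.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };
struct Circle { double xc, yc, r; };

// Circle through three points via the perpendicular-bisector equations.
static bool circleFrom3(const Pt& a, const Pt& b, const Pt& c, Circle& out) {
  const double d = 2.0 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
  if (std::fabs(d) < 1e-9) return false;           // collinear or repeated points
  const double a2 = a.x * a.x + a.y * a.y;
  const double b2 = b.x * b.x + b.y * b.y;
  const double c2 = c.x * c.x + c.y * c.y;
  out.xc = (a2 * (b.y - c.y) + b2 * (c.y - a.y) + c2 * (a.y - b.y)) / d;
  out.yc = (a2 * (c.x - b.x) + b2 * (a.x - c.x) + c2 * (b.x - a.x)) / d;
  out.r  = std::hypot(a.x - out.xc, a.y - out.yc);
  return true;
}

// RANSAC fit of a circle to one (non-empty) cluster of 2-D points.
Circle ransacCircle(const std::vector<Pt>& pts, int iters = 100,
                    double tol = 0.05) {           // 5-cm inlier band (assumed)
  Circle best{0, 0, 0};
  int bestInliers = -1;
  for (int it = 0; it < iters; ++it) {
    Circle c;
    if (!circleFrom3(pts[std::rand() % pts.size()],
                     pts[std::rand() % pts.size()],
                     pts[std::rand() % pts.size()], c)) continue;
    int inliers = 0;                               // count points near the circle
    for (const Pt& p : pts)
      if (std::fabs(std::hypot(p.x - c.xc, p.y - c.yc) - c.r) < tol) ++inliers;
    if (inliers > bestInliers) { bestInliers = inliers; best = c; }
  }
  return best;
}
```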
Establishing a matching link between the pole-like objects detected in the lidar/radar data (the source) and the pole-like landmarks in the reference map (the target) is referred to as the data association step. This step is necessary for the proper execution of the particle filter and is carried out using the iterative closest point (ICP) algorithm. Instead of working on the whole point clouds in both the source and the target, as in the standard application of this algorithm (Suhr et al. 2017), only the centroids of the pole-like objects are considered for the matching process. This way, a large amount of memory and processing time is saved.

The ICP algorithm works by iterating a two-step procedure until convergence. The first step matches each point in a set of source points, X (lidar/radar data), to the closest point in a set of target points, Y (reference map); the second step finds the optimal transform between the source and target sets, given the assignments. Matching points by distance is a computationally efficient operation if a k-d tree data structure is used to store Y. Here, the points in X are denoted as $x_i$, and the point in Y matched (closest) to $x_i$ is denoted as $y_i$. The basic 2-D ICP version minimizes the sum of squared distances between the source and target points to find the rotation angle $\varphi$, encoded by a rotation matrix $R(\varphi)$, and the translation $t$:

$$\min_{\varphi, t}\left\{\sum_{i=1}^{N}\left(y_i - \left(R(\varphi)\,x_i + t\right)\right)^T\left(y_i - \left(R(\varphi)\,x_i + t\right)\right)\right\} \tag{18}$$
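For illustration, one such ICP iteration on pole centroids can be sketched as follows; it uses a brute-force nearest-neighbor search (a k-d tree would be used in practice, as noted above) together with the closed-form 2-D solution of Eq. (18), and it assumes a non-empty source set.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Pt2 { double x, y; };
struct Pose2D { double phi, tx, ty; };             // rotation angle and translation

Pose2D icpStep(const std::vector<Pt2>& src, const std::vector<Pt2>& tgt) {
  // 1) Match each source centroid to its closest target centroid.
  std::vector<Pt2> matched(src.size());
  for (std::size_t i = 0; i < src.size(); ++i) {
    double best = std::numeric_limits<double>::max();
    for (const Pt2& t : tgt) {
      const double d = std::hypot(src[i].x - t.x, src[i].y - t.y);
      if (d < best) { best = d; matched[i] = t; }
    }
  }
  // 2) Closed-form rigid transform between the matched sets.
  Pt2 cs{0, 0}, ct{0, 0};                          // centroids of both sets
  for (std::size_t i = 0; i < src.size(); ++i) {
    cs.x += src[i].x;     cs.y += src[i].y;
    ct.x += matched[i].x; ct.y += matched[i].y;
  }
  const double n = static_cast<double>(src.size());
  cs.x /= n; cs.y /= n; ct.x /= n; ct.y /= n;
  double cross = 0, dot = 0;                       // rotation accumulators
  for (std::size_t i = 0; i < src.size(); ++i) {
    const double ax = src[i].x - cs.x,     ay = src[i].y - cs.y;
    const double bx = matched[i].x - ct.x, by = matched[i].y - ct.y;
    cross += ax * by - ay * bx;
    dot   += ax * bx + ay * by;
  }
  Pose2D p;
  p.phi = std::atan2(cross, dot);                  // optimal rotation angle
  p.tx = ct.x - (std::cos(p.phi) * cs.x - std::sin(p.phi) * cs.y);
  p.ty = ct.y - (std::sin(p.phi) * cs.x + std::cos(p.phi) * cs.y);
  return p;                                        // iterate until convergence
}
```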
To perform the data association, both the observations and the landmarks in the reference map should be expressed in the same coordinate system. Observations in the ego-car coordinate system ($x_c$ and $y_c$) can be transformed into map coordinates ($x_m$ and $y_m$) through the homogeneous transformation matrix shown in Eq. (19), which performs rotation and translation using the map particle/ego-car coordinates ($x_p$ and $y_p$) and the rotation angle $\theta$:

$$\begin{bmatrix} x_m \\ y_m \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & x_p \\ \sin\theta & \cos\theta & y_p \\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} \tag{19}$$
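For illustration, Eq. (19) transcribes directly into a few lines of C++ (the constant homogeneous third row is dropped):

```cpp
#include <cmath>

struct MapPt { double xm, ym; };

// Transform an observation (xc, yc) from the particle/ego-car frame into
// map coordinates using the particle pose (xp, yp, theta), per Eq. (19).
MapPt toMapFrame(double xc, double yc, double xp, double yp, double theta) {
  MapPt m;
  m.xm = std::cos(theta) * xc - std::sin(theta) * yc + xp;
  m.ym = std::sin(theta) * xc + std::cos(theta) * yc + yp;
  return m;
}
```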
8 Particle Filter Overview
Particle filtering uses a finite set of particles to represent the posterior distribution $bel(x_t)$ of some stochastic process, given noisy and/or partial observations $p(z_t|x_t)$. It is an approximate realization of the recursive Bayesian filter stated in Eq. (20), where $\zeta$ is a normalization factor:

$$bel(x_t) = \zeta \, p(z_t|x_t) \, \overline{bel}(x_{t-1}) \tag{20}$$

Typically, the number of particles, $M$, should be large enough to represent the belief $bel(x_t)$ accurately to some extent. The set of particles at time step $t$ is denoted as

$$\chi_t = \left\{ x_t^{[i]} \mid 1 \le i \le M \right\} \tag{21}$$

Each particle $x_t^{[i]}$ is a hypothesis about the actual state at time $t$. Table 1 describes a simple implementation of the particle filter, and Fig. 5 depicts the whole flowchart. The recursive procedure is performed whenever a measurement update ($z_t$) along with a new set of odometry data ($u_t$) becomes available.
Fig. 4 Lidar and radar data fusion using UKF (flowchart: initialize the UKF on the first measurement; otherwise compute Δt, generate and predict sigma points, compute the state mean and covariance, then branch by sensor type into the radar or lidar measurement update, compute the Kalman gain, and update the state and covariance)
The prediction step is implemented by the loop at line 2. One state hypothesis ($x_t^{[m]}$) is generated for each particle based on its current state and the state transition distribution $p(x_t|u_t, x_{t-1}^{[m]})$, which is computed using the particle (car) motion model described in Sect. 4. An importance factor (weight) $w_t^{[m]}$ is calculated or updated for each newly generated hypothesis using a multivariate Gaussian probability density function for each observation; the likelihoods of all the observations are combined by taking their product, as given in Eq. (22):

$$w_t^{[m]} = \prod_{i=1}^{N} \frac{\exp\left(-\frac{1}{2}\left(z_i^{[t]} - \mu_i^{[t]}\right)^T \Sigma^{-1} \left(z_i^{[t]} - \mu_i^{[t]}\right)\right)}{\sqrt{\left|2\pi\Sigma\right|}} \tag{22}$$

where $z_i^{[t]}$ is the $i$th landmark observation for particle $m$ at step $t$, $\mu_i^{[t]}$ is the predicted (mean) measurement for the landmark corresponding to the $i$th observation at step $t$, $\Sigma$ is the covariance matrix of the measurements, and $N$ is the total number of measurements for one particle.
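For illustration, this weight update can be sketched as follows, assuming the diagonal measurement covariance $\mathrm{diag}(\sigma_x^2, \sigma_y^2)$ that the streamlined form in Sect. 10 also uses; the sketch presumes the observations have already been associated with their landmarks and transformed into map coordinates via Eq. (19).

```cpp
#include <cmath>
#include <vector>

struct Obs2D { double x, y; };                     // associated observation and
struct Lmk2D { double x, y; };                     // its landmark mean position

// Weight of one particle per Eq. (22); assumes z.size() == mu.size().
double particleWeight(const std::vector<Obs2D>& z, const std::vector<Lmk2D>& mu,
                      double sigma_x, double sigma_y) {
  const double norm = 1.0 / (2.0 * M_PI * sigma_x * sigma_y);
  double w = 1.0;                                  // product over all observations
  for (std::size_t i = 0; i < z.size(); ++i) {
    const double dx = z[i].x - mu[i].x;
    const double dy = z[i].y - mu[i].y;
    w *= norm * std::exp(-(dx * dx) / (2.0 * sigma_x * sigma_x)
                         - (dy * dy) / (2.0 * sigma_y * sigma_y));
  }
  return w;
}
```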
Afterward, in line 3, $M$ new particles are resampled from the previous set of particles ($\bar{\chi}_t$) proportionally to their importance factors ($\alpha w_t^{[i]}$) determined in line 2.ii, where $\alpha$ is a normalization factor. This resampling yields $\chi_t$, the updated posterior approximation. Note that $\chi_t$ will generally contain duplicates, taking the places of the particles that were not drawn in line 3.i because they have evolved into less likely hypotheses.
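The resampling scheme itself is not spelled out in Table 1; one common choice is systematic (low-variance) resampling, sketched below for illustration.

```cpp
#include <random>
#include <vector>

// Systematic resampling: draw one random offset, then sweep M equally
// spaced pointers through the cumulative weight distribution.
template <typename Particle>
std::vector<Particle> resample(const std::vector<Particle>& parts,
                               const std::vector<double>& weights,
                               std::mt19937& rng) {
  const std::size_t M = parts.size();
  double total = 0.0;
  for (double w : weights) total += w;
  std::uniform_real_distribution<double> u(0.0, total / M);
  std::vector<Particle> out;
  out.reserve(M);
  double target = u(rng);                          // single random offset
  double cum = 0.0;
  std::size_t i = 0;
  for (std::size_t m = 0; m < M; ++m) {
    while (cum + weights[i] < target && i + 1 < M) { cum += weights[i]; ++i; }
    out.push_back(parts[i]);                       // duplicate likely particles
    target += total / M;
  }
  return out;
}
```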
To check the convergence of the particle filter, the weighted-average error ($Error_{weighted}$) of all the particles is used as a convergence indicator. $Error_{weighted}$ is computed as given in Eq. (23), by calculating the root-squared error between each particle state $p_i$ and the ground truth $g_i = \begin{bmatrix} x_i^{gt} & y_i^{gt} & \theta_i^{gt} \end{bmatrix}$, multiplying it by the particle's weight, summing the product over all particles, and dividing the summation by the aggregated particle weights:

$$Error_{weighted} = \frac{\sum_{i=1}^{M} w_i \left\| p_i - g_i \right\|}{\sum_{i=1}^{M} w_i} \tag{23}$$
9 Implementation of the RT_MCL
Both the UKF and the PF are implemented in the high-performance language GCC C++ (GCC 2020) on the Ubuntu Linux operating system (Ubuntu Linux 2020). This combination is well suited to the required real-time performance. A C++ numerical solver and matrix/vector operations package, "Eigen" (2020), is used to numerically calculate the object model and to perform the predict and update steps efficiently.

The object motion model described by Eqs. (9)-(11) includes several noise parameters that need to be carefully set.
Table 1 Particle filter pseudo-code
Fig. 5 Particle filter algorithm flowchart (initialization from the GPS + IMU fusion, prediction using the odometry, weight update using the UKF-fused landmark measurements and the reference map with its uncertainty, and resampling)
set. Table2 presents the fine-tuned parameters for both UKF
and PF.
The UKF design is considered consistent if the estimation error is unbiased, i.e., has zero mean, and the actual mean square error of the filter matches the filter-calculated state covariance. As a measure of filter consistency, the time-average normalized innovation squared (NIS) (Piché 2016) can be used to fine-tune the noise parameters. The metric, described by Eq. (24), calculates the $NIS$ value at each sample $k$ and then averages these values ($NIS_{Average}$) over a moving window of measurements of length $N$:

$$NIS_k = \left(z_{k+1} - \hat{z}_{k+1}\right)^T S_{k+1}^{-1} \left(z_{k+1} - \hat{z}_{k+1}\right), \qquad NIS_{Average} = \frac{1}{N}\sum_{k=1}^{N} NIS_k \tag{24}$$
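For illustration, the per-sample NIS term of Eq. (24) is a one-line computation in Eigen; using an LDLT solve instead of an explicit matrix inverse is a numerical choice of this sketch.

```cpp
#include <Eigen/Dense>

// Normalized innovation squared for one measurement, per Eq. (24).
double nis(const Eigen::VectorXd& z, const Eigen::VectorXd& z_pred,
           const Eigen::MatrixXd& S) {
  const Eigen::VectorXd innov = z - z_pred;
  return innov.dot(S.ldlt().solve(innov));  // (z - z_hat)^T S^-1 (z - z_hat)
}
```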
The proper initialization of the UKF is crucial to its subsequent performance (Zhao and Huang 2017). The main initialized variables are the state estimate vector ($x$) and its estimate covariance matrix ($P$). The first two terms of the state vector $x$ given by Eq. (8) are $p_x$ and $p_y$, which are simply initialized using the first received raw sensor measurement. For the other three terms of the state vector, intuition augmented with some trial and error is used to initialize these variables, as listed in Table 3. The state covariance matrix is initialized as a diagonal matrix that contains the covariance of each variable estimate (Eq. (25)):

$$P = \mathrm{diag}\left(\sigma_{\hat{p}_x}^2, \; \sigma_{\hat{p}_y}^2, \; \sigma_{\hat{v}}^2, \; \sigma_{\hat{\psi}}^2, \; \sigma_{\hat{\dot{\psi}}}^2\right) \tag{25}$$
To check the performance of the UKF, in terms of how far the estimated results are from the true results (ground truth), the root-mean-squared error (RMSE) given in Eq. (26) is used. The metric is calculated over a moving window of measurements of length $N$ (Zhao and Huang 2017); $x_k^{est}$ is the estimated state vector of the UKF, and $x_k^{true}$ is the true state vector supplied by the simulator or given as training data during the UKF design phase:

$$RMSE = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left\| x_k^{est} - x_k^{true} \right\|^2} \tag{26}$$
The initialization of the PF is likewise crucial for its performance. Therefore, it is carried out as follows:

(a) The number of particles is set to $M = 50$. In the literature (Lu and Milios 1997; Farag 1998), this number usually ranges from 100 to 1000; however, it is a compromise between accuracy and computational speed. Several experimental trials were carried out and show that 50 particles produce the required accuracy and real-time performance.

(b) The PF state vector (particle poses) is initialized from the output of the GPS + IMU fusion ($p_{x_{GPS}}$, $p_{y_{GPS}}$, $\theta_{GPS}$) as follows (a minimal sampling sketch is given after this list):

$$p_x^{[m]} \sim N\left(p_{x_{GPS}}, \sigma_{x_{GPS}}^2 + \sigma_{x_{artificial}}^2\right), \qquad p_y^{[m]} \sim N\left(p_{y_{GPS}}, \sigma_{y_{GPS}}^2 + \sigma_{y_{artificial}}^2\right), \qquad \theta_{Particle}^{[m]} \sim N\left(\theta_{GPS}, \sigma_{\theta_{GPS}}^2 + \sigma_{\theta_{artificial}}^2\right) \tag{27}$$

where $p_x^{[m]}$, $p_y^{[m]}$ and $\theta_{Particle}^{[m]}$ represent the initialized pose of particle $m$; $\sigma_{x_{GPS}}$, $\sigma_{y_{GPS}}$ and $\sigma_{\theta_{GPS}}$ represent the GPS + IMU reading noise standard deviations; and $\sigma_{x_{artificial}}$, $\sigma_{y_{artificial}}$ and $\sigma_{\theta_{artificial}}$ represent the artificial noise added to each initial particle position for the purpose of randomization, which helps the convergence of the PF (Lu and Milios 1997). The values of these parameters are listed in Table 4.

(c) The particle importance weights are all initialized with the uniform distribution $w^{[m]} = \frac{1}{M}$.

(d) As the RT_MCL uses probabilistic maps, each pole-like landmark is represented by the Gaussian distributions $N\left(p_{x_{Pole}}, \sigma_{x_{Pole}}^2\right)$ and $N\left(p_{y_{Pole}}, \sigma_{y_{Pole}}^2\right)$ to model the uncertainties in its position. The values of the standard deviations $\sigma_{x_{pole}}$ and $\sigma_{y_{pole}}$ are listed in Table 4.
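For illustration, the sampling of Eq. (27) can be sketched as follows, with the Table 4 standard deviations hard-coded; this is a minimal sketch, not the paper's initialization code.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Particle { double x, y, theta, weight; };

// Draw M particles around the GPS + IMU pose estimate per Eq. (27); the
// GPS and artificial noise variances are added under the square root.
std::vector<Particle> initParticles(double x_gps, double y_gps, double th_gps,
                                    int M, std::mt19937& rng) {
  std::normal_distribution<double> dx(x_gps, std::sqrt(0.3 * 0.3 + 10.0 * 10.0));
  std::normal_distribution<double> dy(y_gps, std::sqrt(0.3 * 0.3 + 10.0 * 10.0));
  std::normal_distribution<double> dth(th_gps,
                                       std::sqrt(0.01 * 0.01 + 0.05 * 0.05));
  std::vector<Particle> parts(M);
  for (Particle& p : parts) {
    p.x = dx(rng); p.y = dy(rng); p.theta = dth(rng);
    p.weight = 1.0 / M;                            // uniform initial weights
  }
  return parts;
}
```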
Table 2 The UKF and object model noise parameters

| Parameter | UKF/PF | Parameter | UKF |
| σ_a (m/s²) | 1.0 | σ_py (lidar) (m) | 0.15 |
| σ_ψ̈ (rad/s²) | 0.6 | σ_ρ (radar) (m) | 0.3 |
| σ_ψ̇ (rad/s) | 0.06 | σ_φ (radar) (rad) | 0.03 |
| σ_px (lidar) (m) | 0.15 | σ_ρ̇ (radar) (m/s) | 0.3 |
Table 3 Initialization of UKF states

| Parameter | UKF | Parameter | UKF |
| p_x (m) | 1st raw x-reading | p_y (m) | 1st raw y-reading |
| v (m/s) | 0.0 | ψ (rad) | 0.0 |
| ψ̇ (rad/s) | 0.0 | σ_p̂x (m) | 1.0 |
| σ_p̂y (m) | 1.0 | σ_v̂ (m/s) | 1000 |
| σ_ψ̂ (rad) | 1000 | σ_ψ̂̇ (rad/s) | 1000 |
Table 4 Initialization of the particle filter

| Parameter | PF | Parameter | PF |
| σ_xGPS | 0.3 m | σ_xartificial | 10 m |
| σ_yGPS | 0.3 m | σ_yartificial | 10 m |
| σ_θGPS | 0.01 rad | σ_θartificial | 0.05 rad |
| σ_xpole | 0.3 m | σ_ypole | 0.3 m |
To check the performance of the PF, in terms of how far the estimated poses are from the ground truth, the cumulative mean absolute error for each pose variable, given in Eq. (28), is used. The metric is calculated over a moving window of measurements of length $N$; $x_i^{best}$, $y_i^{best}$, $\theta_i^{best}$ are the estimated pose variables of the PF, and $x_i^{gt}$, $y_i^{gt}$, $\theta_i^{gt}$ are the ground-truth variables supplied by the simulator or given as training data during the PF design phase:

$$X_{error} = \frac{1}{N}\sum_{i=1}^{N}\left| x_i^{best} - x_i^{gt} \right|, \qquad Y_{error} = \frac{1}{N}\sum_{i=1}^{N}\left| y_i^{best} - y_i^{gt} \right|, \qquad Yaw_{error} = \frac{1}{N}\sum_{i=1}^{N}\left| \theta_i^{best} - \theta_i^{gt} \right| \tag{28}$$
10 Testing and Evaluation Results
Extensive trial-and-error attempts were used to tune the hyper-parameters of the RT_MCL. However, to be more consistent and accurate, numerical key performance indicators (KPIs) were constructed and coded, as in Eqs. (24), (26) and (28), to evaluate the performance of the localization technique under a given set of hyper-parameters.

Several test tracks have been used to evaluate the performance of the RT_MCL under different sets of hyper-parameters in an iterative tuning process (Farag 2019; 2020). The test cases and data are generated using the open-source
urban CARLA driving simulator (2020). An example of these test tracks is shown in Fig. 6. This track is 754 m long with several curvatures and includes 42 pole-like landmarks to emulate urban driving.
Table5 presents the testing results of the lidar/radar
fusion algorithm that uses the UKF. The performance eval-
uation is carried out on test tracks to detect road objects
(cyclists, cars, pedestrians and pole-like landmarks). The
RMSE KPI (Eq.(26)) is used to evaluate the UKF perfor-
mance based on the five state variables:
px
,
py
,
vx
,
vy
, and
𝜓
. The KPI is comparing each estimated state variable to its
ground-truth value and finding the error. The lower the value
of the KPI, the better the performance.
To assess the significance of the fusion between lidar and radar in tracking, the UKF is tested once with measurements from the lidar alone and once with measurements from the radar alone. The results reported in Table 5 show how fusion makes the difference and substantially improves the accuracy; the estimation of all state variables is markedly improved. For example, the RMSE of the x-position ($p_x$) estimate is reduced by 60% compared to both the "lidar-alone" and "radar-alone" estimates.
Fig. 6 Ego-car localization results in the test track
Table 5 Performance evaluation of the UKF (RMSE per state variable)

| State var | Cyclist | Car | Pedestrian | Pole |
| p_x | 0.0648 | 0.1857 | 0.0652 | 0.0324 |
| p_y | 0.0809 | 0.1899 | 0.0605 | 0.0433 |
| v_x | 0.1452 | 0.4745 | 0.5332 | 0.0032 |
| v_y | 0.1592 | 0.5075 | 0.5442 | 0.0054 |
| ψ | 0.0392 | 0.2580 | 0.2075 | 0.0075 |
Moreover, the RMSE of the x-velocity ($v_x$) estimate is reduced by 30% compared to the "lidar-alone" and by 26% compared to the "radar-alone" estimates. The NIS values are also calculated for the "lidar-alone" and "radar-alone" cases to test the consistency of the UKF in these cases. The reported values show that fusion significantly improves the consistency: the NIS values that exceed the 95% threshold are reduced by 31% compared to the "lidar-alone" case and by 38.5% compared to the "radar-alone" one.
Figures6, 7 and 8 present an example of the testing and
simulation results of the MCL algorithm that uses the com-
bination of the UKF and PF while employing a probabilistic
reference map. The performance evaluation is carried out on
test tracks to detect road pole-like landmarks using the UKF
and using these detections by the GB-DBSCAN and the PF
to localize the ego-car on the global map. In the mentioned
figure, both the estimated pose values and the ground truth
are drawn on top of each other due to the small reported
Fig. 7 Ego-car orientation estimation in the test track
Fig. 8 The ego-car speed and yaw rate during driving three laps in the test track
errors, as displayed in Table 6. The error KPI of Eq. (28) is used to evaluate the PF performance based on the three pose variables $x$, $y$ and $\theta$. The KPI compares each estimated pose variable to its ground-truth value and computes the error (Farag et al. 1997); the lower the value of the KPI, the better the performance. Several experiments have been carried out using the PF with different numbers of particles to optimize its real-time performance. The results in Table 7 show that the number of particles can go down to 25 and the PF can still produce good results and fast execution with robust convergence (while 15 particles are divergent). However, to ensure more robustness, a population of 50 particles is considered the most appropriate selection, with a delicate balance between the achieved accuracy, real-time performance and convergence. It is clear from the table that above 50 particles not much improvement in precision is achieved, and since a margin of safety is required to enhance robustness, 25 is not selected.
The PF performance is also studied under various uncertainties of the reference map. The uncertainties are modeled by the standard deviation of the positions of the pole-like landmarks stated in the reference map. Table 8 shows that an accurate reference map is crucial to the pose estimation of the ego-car; however, the RT_MCL is able to handle uncertainties of up to 2.0 m ($2\sigma_{pole}$) in the map-pole poses and still localize the ego-car with less than 30 cm of error.
Figure9 shows the convergence performance of the par-
ticle filter during the initialization phase using 50 particles.
The error stabilizes after 100 time steps. Figure10 also
shows the performance of the particles’ weights for one lap
travel on the test track. The figure shows that the best weights
are significantly higher the average weights which a sign of
convergence robustness. Moreover, there is a kind of inverse
relationship between the number of detected landmarks and
the value of the best and average weight. Equation(22) can
be rewritten in a more streamlined form in Eq.(23). The
later equation shows that the weight values are the product of
the likelihood of the pole-landmark observation represented
by a multivariate Gaussian probability density function. The
higher the number of observed landmarks, the more the
chance that some of these landmarks have very small likeli-
hood values that bring the whole product down. After many
experiments, it has been found that the reasonable value for
the number of detected landmarks at each time step for the
robust running of the RT_MCL lies in the range of 4 12
landmarks.
Extensive experimentation proved the RT_MCL pipeline to be fast enough in execution to be used in real time. Using an Intel Core i5 with 1.6 GHz and 8 GB RAM, which is a very moderate computational platform (Nagiub and Farag 2013), the measurements in Table 9 were collected for the execution of the RT_MCL for a single ego-car pose estimation based on 12 pole-like landmarks (Fig. 11). Table 9 shows that it takes around 8.2 ms to execute the whole pipeline for a single-pose estimation. Most localization functions run at 10 Hz to 30 Hz; therefore, the proposed RT_MCL comfortably satisfies even the upper end of this requirement.

Considering that the lidar/radar measurements are collected at approximately a 30-fps rate (Yurtsever et al. 2020), the measurement cycle is 33.3 ms, which is large enough to accommodate more than 50 landmark detections using the UKF according to the data in Table 9.
Table 6 Sensor fusion evaluation of the UKF (bicycle track)

| Metric | Lidar + Radar | Lidar only | Radar only |
| RMSE p_x | 0.0648 | 0.1612 | 0.2031 |
| RMSE p_y | 0.0809 | 0.1464 | 0.2539 |
| RMSE v_x | 0.1452 | 0.2082 | 0.1971 |
| RMSE v_y | 0.1592 | 0.2129 | 0.1871 |
| RMSE ψ | 0.0392 | 0.0540 | 0.0480 |
| NIS average | 2.2797 | 1.6941 | 2.6576 |
| NIS min | 0.0012 | 0.04874 | 0.11309 |
| NIS max | 14.749 | 12.997 | 12.183 |
| NIS > 95% threshold | 2.2% | 3.2% | 5.2% |
Table 7 The particle filter with different numbers of particles (color table online)
Table 8 Effect of the landmark standard deviation

| σ_xpole | σ_ypole | x-error | y-error | Yaw error |
| 0.3 | 0.3 | 0.1143 | 0.1154 | 0.0040 |
| 0.5 | 0.5 | 0.1730 | 0.1633 | 0.0057 |
| 1.0 | 1.0 | 0.2926 | 0.2736 | 0.0098 |
11 Conclusion
In this paper, a real-time Monte Carlo localization (RT_MCL) method for autonomous cars is proposed, implemented and described in detail. The method uses a tailored unscented Kalman filter to perform data fusion for the lidar and radar devices mounted on the ego-car. The raw lidar/radar data are fused using the UKF and then clustered using the GB-DBSCAN and RANSAC algorithms to produce the detected pole-like landmark poses. These detected landmarks are then associated with the ones in the supplied reference map using the ICP algorithm. A tailored particle filter then produces the estimated ego-car poses measured in the global map coordinates.

The RT_MCL method is fully implemented using GCC C++ in addition to advanced math libraries to optimize its real-time performance. The design steps, initialization and tuning of both the UKF and the PF are described in detail, and the consistency evaluation of both filters is explained as well. The generic object motion model employed by both the UKF and the PF is comprehensive and is described using five state variables.

The validation results show that the proposed method reliably detects pole-like landmarks and estimates the ego-car pose with an 11-cm mean error in real time using only 50 particles. The measured throughput (execution time) on an affordable CPU proves that the RT_MCL pipeline is very suitable for real-time ADAS or self-driving-car localization.

Both the UKF and the PF have shown that the RT_MCL pipeline can run at 30 Hz while handling up to 50 landmark detections. The reference map is represented in a probabilistic form by representing each landmark position by its mean centroid and standard deviation. The RT_MCL can handle uncertainties of up to 2.0 m in the landmark centroids and still localize the ego-car with less than 30 cm of error.

In the future, it is intended to perform further experiments to better evaluate and enhance the robustness and convergence of the proposed technique, including on-road experimentation, and to add a front camera to the presented fusion technique and investigate the benefits it adds to the overall localization performance. Furthermore, the RT_MCL will be augmented with other road objects such as guardrails, sidewalks, curbs and intersection features.
Table 9 RT_MCL execution time for a single-pose estimation

| Task | Exec. time (ms) |
| UKF state estimation for 12 landmarks | 12 × 0.439 |
| GB-DBSCAN + RANSAC + ICP clustering and data association | 0.835 |
| PF pose estimation | 0.739 |
| Control code overhead (20%) | 1.368 |
| Total | 8.210 |
Fig. 11 The number of detected pole-like landmarks during one lap
References
Alrousan, Q., Alzu’bi, H., Pfeil, A., & Tasky, T. (2020). Autonomous
vehicle multi-sensors localization in unstructured environment.
WCX SAE World Congress Experience, SAE International, USA,
April 14, 2020.
CARLA driving simulator. (2020). Retrieved from, https ://carla .org/,
August 16.
Carlevaris-Bianco, N., Ushani, A. K., & Eustice, R. M. (2015). Univer-
sity of Michigan north campus long-term vision and lidar dataset.
International Journal of Robotics Research, 35(9), 1023–1035.
Dietmayer, K., Kellner, D., & Klappstein, J. (2012). Grid-based dbscan
for clustering extended objects in radar data. In IEEE intelligent
vehicles symposium, Alcala de Henares, Spain, June 2012.
Eigen. (2020). Retrieved from, http://eigen .tuxfa mily.org/index
.php?title =Main_Page, March 11, 2020.
Einicke, G. A., & White, L. B. (1999). Robust extended Kalman filter-
ing. IEEE Transactions on Signal Processing, 47(9), 2596–2599.
Farag, W. (1998). Synthesis of intelligent hybrid systems for modeling
and control. Ph.D. Thesis, University of Waterloo, Canada.
Farag, W. (2019a). Cloning safe driving behavior for self-driving cars
using convolutional neural networks. Recent Patents on Computer
Science, 12(2), 120–127.
Farag, W. (2019b). Traffic signs classification by deep learning for
advanced driving assistance systems. Intelligent Decision Tech-
nologies, 13(3), 215–231.
Farag, W. (2019c). Safe-driving cloning by deep learning for autono-
mous cars. International Journal of Advanced Mechatronic Sys-
tems, 7(6), 390–397.
Farag, W. (2019d). Complex trajectory tracking using PID control for
autonomous driving. International Journal of Intelligent Trans-
portation Systems Research, 18, 356–366.
Farag, W. (2020a). A lightweight vehicle detection and tracking tech-
nique for advanced driving assistance systems. Journal of Intel-
ligent & Fuzzy Systems, 39(3), 2693–2710.
Farag, W. (2020b). A comprehensive real-time road-lanes tracking
technique for autonomous driving. International Journal of Com-
puting and Digital Systems, 9(3), 349–362.
Farag, W. (2020c). Road-objects tracking for autonomous driving using
lidar and radar fusion. Journal of Electrical Engineering, 71(3),
138–149.
Farag, W. (2020). Complex-track following in real-time using model-
based predictive control. International Journal of Intelligent
Transportation Systems Research.
Farag, W. (2021). Real-time lidar and radar fusion for road-objects
detection and tracking. In Intelligent decision technologies (IDT),
IOS Press, The Netherlands (vol. 15) (in-press).
Farag, W. A., Quintana, V. H., & Lambert-Torres, G. (1997). Neuro-
fuzzy modeling of complex systems using genetic algorithms.
In IEEE international conference on neural networks (IEEE
ICNN’97) 1 (pp. 444–449).
Farag, W., & Saleh, Z. (2018). Road lane-lines detection in real-time
for advanced driving assistance systems. In International confer-
ence on innovation and intelligence for informatics, computing,
and technologies (3ICT’18), Bahrain, November 18–20.
Farag, W., & Saleh, Z. (2018). Behavior cloning for autonomous driv-
ing using convolutional neural networks. In international confer-
ence on innovation and intelligence for informatics, computing,
and technologies (3ICT’18), Bahrain, November 18–20.
Farag, W., & Saleh, Z. (2019a). An advanced vehicle detection and
tracking scheme for self-driving cars. In 2nd smart cities sym-
posium (SCS’19), IET Digital Library, Bahrain, March 24–26.
Farag, W., & Saleh, Z. (2019b). An advanced road-lanes finding
scheme for self-driving cars. In 2nd smart cities symposium
(SCS’19), IET Digital Library, Bahrain, March 24–26.
Fischler, M., & Bolles, R. (1981). Random sample consensus: A para-
digm for model fitting with applications to image analysis and
automated cartography. Communications of the ACM, 24(6),
381–395.
GCC C ++. (2020). Retrieved from, https ://gcc.gnu.org/, March 11,
2020.
Jo, K., Jo, Y., Suhr, J. K., Jung, H. G., & Sunwoo, M. (2015). Precise
localization of an autonomous car based on probabilistic noise
models of road surface marker features using multiple cameras.
IEEE Transactions on Intelligent Transportation Systems, 16(6),
3377–3392.
Kummerle, J., Sons, M., Poggenhans, F., Kuehner, T., Lauer, M., &
Stiller, C. (2019). Accurate and efficient self-localization on roads
using basic geometric primitives. In 2019 IEEE international con-
ference on robotics and automation, May 2019.
Lee, B.-H., Song, J.-H., Im, J.-H., Im, S.-H., Heo, M.-B., & Jee, G.-I.
(2015). GPS/DR error estimation for autonomous vehicle locali-
zation. Sensors, 15, 20779–20798. https ://doi.org/10.3390/s1508
20779 .
Levinson, J., & Thrun, S. (2010). Robust vehicle localization in urban
environments using probabilistic maps. In 2010 IEEE interna-
tional conference on robotics and automation, May 2010 (pp.
4372–4378).
Lu, F., & Milios, E. (1997). Robot pose estimation in unknown envi-
ronments by matching 2d range scans. Journal of Intelligent and
Robotic Systems, 18(3), 249–275.
Miura, S., Hsu, L. T., Chen, F., & Kamijo, S. (2015). GPS error correc-
tion with pseudorange evaluation using three-dimensional maps.
IEEE Transactions on Intelligent Transportation Systems, 16(6),
3104–3115.
Modsching, M., Kramer, R., & ten Hagen, K. (2006). Field trial on
GPS accuracy in a medium-size city: the influence of built-up.
In 3rd workshop on positioning, navigation and communication
(vol. 2006, pp. 209–218).
Nagiub, M., & Farag, W. (2013). Automatic selection of compiler
options using genetic techniques for embedded software design. In
IEEE 14th international symposium on computational intelligence
and informatics (CINTI), Budapest, Hungary, November 19, 2013.
Osterwood, C., & Noble, F. (2017). Localization for the next generation
of autonomous vehicles. Swift Navigation. Retrieved from, www.
swift nav.com, white paper.
Piché, R. (2016). Online tests of Kalman filter consistency. Interna-
tional Journal of Adaptive Control and Signal Processing, 30(1),
115–124.
Sander, J., Xu, X., Ester, M., & Kriegel, H.-P. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the 2nd international conference on knowledge discovery and data mining (pp. 226–231), August 1996.
Schaefer, A., Büscher, D., Vertens, J., Luft, L., & Burgard, W. (2019). Long-term urban vehicle localization using pole landmarks extracted from 3-D lidar scans. In European conference on mobile robots (ECMR), Czech Republic, September 4–6, 2019.
Schubert, R., Richter, E., & Wanielik, G. (2008). Comparison and evaluation of advanced motion models for vehicle tracking. In 11th international conference on information fusion, Cologne, Germany.
Sefati, M., Daum, M., Sondermann, B., Kreisköther, K. D., & Kampker, A. (2017). Improving vehicle localization using semantic and pole-like landmarks. In 2017 IEEE intelligent vehicles symposium (pp. 13–19), June 2017.
Suhr, J. K., Jang, J., Min, D., & Jung, H. G. (2017). Sensor fusion-based low-cost vehicle localization system for complex urban environments. IEEE Transactions on Intelligent Transportation Systems, 18(5), 1078–1086.
Thrun, S. (2002). Particle filters in robotics. In Proceedings of the 18th annual conference on uncertainty in AI (UAI), Edmonton, Canada.
Ubuntu Linux. (2020). Retrieved March 11, 2020, from https://www.ubuntu.com/.
Wan, E. A., & Van Der Merwe, R. (2000). The unscented Kalman filter for nonlinear estimation. In IEEE adaptive systems for signal processing, communications, and control symposium, Alberta, Canada.
Weng, L., Yang, M., Guo, L., Wang, B., & Wang, C. (2018). Pole-based real-time localization for autonomous driving in congested urban scenarios. In 2018 IEEE international conference on real-time computing and robotics (pp. 96–101), August 2018.
Woo, A., Fidan, B., & Melek, W. W. (2019). Localization for autonomous driving. In R. Zekavat & R. M. Buehrer (Eds.), Handbook of position location: Theory, practice, and advances (2nd ed.). Hoboken: Wiley.
Yurtsever, E., Lambert, J., Carballo, A., & Takeda, K. (2020). A survey of autonomous driving: Common practices and emerging technologies. Retrieved January 6, 2020, from arXiv:1906.05113v2 [cs.RO].
Zarchan, P., & Musoff, H. (2013). Fundamentals of Kalman filtering: A practical approach (4th ed.). Reston, VA: American Institute of Aeronautics and Astronautics. ISBN 978-1-62410-276-9.
Zhao, S., & Huang, B. (2017). On initialization of the Kalman filter. In 6th international symposium on advanced control of industrial processes (AdCONIP), Taipei, Taiwan, May 28–31.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.