Journal of Control, Automation and Electrical Systems
https://doi.org/10.1007/s40313-020-00666-w
Real-Time Autonomous Vehicle Localization Based on Particle and Unscented Kalman Filters
Wael Farag1,2
Received: 19 May 2020 / Revised: 17 August 2020 / Accepted: 1 November 2020
© Brazilian Society for Automatics--SBA 2021
Abstract
In this paper, a real-time Monte Carlo localization (RT_MCL) method for autonomous cars is proposed. Unlike other localization approaches, its main contribution is a balanced treatment of both pose-estimation accuracy and real-time performance. The RT_MCL method is based on the fusion of lidar and radar measurement data for object detection, a probabilistic map of pole-like landmarks and a tailored particle filter for pose estimation. The lidar and radar are fused using the unscented Kalman filter (UKF) to provide pose estimates of pole-like static objects that are well suited to serve as landmarks for vehicle localization in urban environments. These pose estimates are then clustered using the grid-based density-based spatial clustering of applications with noise (GB-DBSCAN) algorithm to represent each pole landmark in the form of a source-point model, reducing computational cost and memory requirements. A reference map that includes the pole landmarks is generated offline, extracted from a 3-D lidar, and used by a carefully designed particle filter for accurate ego-car localization. The particle filter is initialized by the fused GPS + IMU measurements and uses an ego-car motion model to predict the states of the particles. The data association between the landmarks estimated by the UKF and those in the reference map is performed using the iterative closest point (ICP) algorithm. The RT_MCL is implemented in the high-performance language C++ and utilizes highly optimized math and optimization libraries for best real-time performance. Extensive simulation studies have been carried out to evaluate the performance of the RT_MCL in both longitudinal and lateral localization.
Keywords Sensor fusion · Kalman filter · Localization · Monte Carlo · Particle filter · Autonomous driving
1 Introduction
Improving safety (Farag 2019a), lowering road accidents, boosting energy efficiency, enhancing comfort, and enriching the driving experience (Farag 2019b) are the most important driving forces behind equipping present-day cars with Advanced Driving Assistance Systems (ADASs) (Farag and Saleh 2018a). Many ADAS functions represent incremental steps toward a hypothetical future of safe, fully autonomous cars (Farag 2020a).
Future ADAS and autonomous vehicles require accurate and reliable self-localization systems (Farag and Saleh 2019a). Accuracy is required because an exact pose estimate plays a major role in enabling state-of-the-art ADAS functionalities such as automated lane following (Farag and Saleh 2019b) or collision avoidance (Farag and Saleh 2018b). Reliability is required because the quality of the pose estimate must be preserved even under changing environmental factors to maintain the highest safety levels (Farag 2019).
Localization based on satellite systems has been available for decades and has undergone several improvements (Woo et al. 2019). Recent systems like RTK-GPS or DGPS present an efficient solution since they achieve centimeter-level accuracy without being augmented by any additional algorithms (Osterwood and Noble 2017). However, there are several doubts about their reliability (Modsching et al. 2006). In urban areas, buildings or other tall road objects that obstruct the line of sight between the vehicle and the satellites can degrade the accuracy to several meters (Carlevaris-Bianco et al. 2015). Therefore, employing dense reference maps like polygon meshes, grid maps, or point clouds represents an alternative localization solution with a higher
* Wael Farag
wael.farag@aum.edu.kw; wael.farag@cu.edu.eg
1 College ofEngineering andTechnology, American
University oftheMiddle East, Egaila, Kuwait
2 Electrical Engineering Department, Cairo University, Giza,
Egypt
reliability (Levinson and Thrun 2010). However, map-based approaches have the major downside of requiring a huge amount of memory, which quickly becomes unaffordable if larger-scale maps are to be used. To overcome this problem, so-called "landmark maps" have attracted a lot of interest (Schaefer et al. 2019). In these maps, massive amounts of raw sensory data points (lidar, radar, and/or camera) are condensed into a relatively small number of distinct, labeled features. This way, the required memory usage can be decreased by several orders of magnitude (Kummerle et al. 2019).
Localization of self-driving cars (SDCs) using global navigation satellite systems (GNSS) alone is not enough, as it results in lateral errors of around 5 m (Miura et al. 2015) (which can reach 1 m in the best-covered areas). An SDC requires lateral/longitudinal localization errors of less than 0.3 m from the ground truth (0.1 m is highly recommended) (Levinson and Thrun 2010). The fusion of GPS and IMU improves the localization errors, especially in urban canyons or tunnels, but the results are far from this requirement (reaching 1.77 m (Lee et al. 2015)) and still not reliable enough for serious SDC deployment. Therefore, the GPS/IMU fusion needs to be augmented with perception sensors and digital maps to further bring down (Jo et al. 2015) the location error to the required target (Alrousan et al. 2020). However, this augmentation, especially while processing detailed digitized maps, results in a high computational overhead and expensive hardware requirements that make it unsuitable for affordable ADAS or autonomous driving functionalities (Schaefer et al. 2019; Sefati et al. 2017).
In this work, a vehicle localization approach for urban environments, referred to as RT_MCL, is proposed. The RT_MCL relies on pole-like landmarks extracted from lidar/radar fused data. Pole-like landmarks typically take several forms such as traffic signs, telegraph poles, tree trunks, street lamps and bollards. Several features make them very convenient and reliable for vehicle localization: (a) they are abundant and widespread in urban areas, (b) they rarely get dislocated and stay in their locations for a long time and (c) they have well-defined rounded geometrical shapes that are not affected by season or weather.
The RT_MCL matches the poles detected by the ego-car sensors with those registered in a global reference map. This high-resolution map is developed precisely offline using lidar/radar data covering the same route the ego-car will drive on. The environment in the reference map is modeled not as spatial poles of fixed positions, but as probabilistic poles, whereby every pole is represented by a Gaussian distribution over its position-prospect values. Consequently, Bayesian inference (employed by a particle filter) can favorably weight the parts of the map most likely to be relevant to the ego-car pose, thereby reducing uncertainty and catastrophic errors. Moreover, to increase the precision of pole detection, the RT_MCL uses the unscented Kalman filter (UKF) for sensor data fusion to combine the merits of both lidar and radar and reduce scattering.
In contemporary research, several endeavors have been carried out to address the problem of vehicle localization (Farag 2020b) utilizing pole-like landmarks extracted from lidar point clouds. This problem can be divided into two subproblems: (a) pole detection and pole position estimation and (b) pole-based ego-car pose estimation. For example, Weng et al. (2018) developed a pole detector by partitioning the space around the ego-car and counting the number of reflected scan points per voxel. Poles are detected by identifying vertically connected stacks of voxels that all exceed a certain pre-specified threshold. Furthermore, the detector fits a cylinder to all the points associated with the identified stacks of voxels using RANSAC (Fischler and Bolles 1981). A particle filter combined with nearest-neighbor data association is then used for ego-car pose estimation. As another example, Sefati et al. (2017) based their pole detection approach on removing the ground plane from the point cloud received from the sensors. The remaining points are projected onto a horizontal regular grid, the neighboring cells are clustered based on occupancy and height, and afterward a cylinder is fitted to each cluster. The data association is done using the nearest neighbor, and the pose estimation is performed using a particle filter. Kummerle et al. (2019) further refine the pose estimate by augmenting Sefati et al.'s pole detection method (2017) with planes fitted to building facades constructed from point clouds and lines fitted to lane markings in stereo camera images. Kummerle et al. (2019) adopt a Monte Carlo method to solve the data association problem but use nonlinear least-squares optimization to refine the estimated pose. Schaefer et al. (2019) present a complete landmark-based localization system that relies on poles extracted from 3-D lidar data and is divided into three modules: the pole extractor, the mapping module, and the localization module. Their pole detector considers not only the laser ray endpoints, but also the free space between the laser sensor and the endpoints, and demonstrates reliable and accurate vehicle localization based on a map of pole landmarks over large time scales. The approach is tested on a long-term dataset that contains 35 h of data recorded over 15 months, including varying routes, construction zones, seasonal and weather changes and lots of dynamic objects.
However, all the mentioned techniques focus on the accuracy of pose estimation and the precision of pole-landmark detection, while using extremely powerful hardware that in many cases is not suitable for real deployment in ADAS or autonomous driving functionalities. Therefore, the execution complexity of the mentioned algorithms is not well addressed. In this paper, a balanced treatment of both accuracy and real-time performance is considered in the proposed localization algorithm (RT_MCL). To achieve this goal, the following intermediary steps are implemented:
1. Tailoring the UKF algorithm to fuse multiple radar and lidar data streams to reduce scattering and achieve more accurate pose estimates for stationary pole-like objects around the ego-car in real time.
2. Tailoring the PF algorithm to employ pole-like landmarks for ego-car pose estimation in real time.
3. Employing a high-order generic object motion model (5 state variables) that suits the most common road objects in the development of the UKF and PF to generate more accurate estimates and improve the overall performance.
4. Representing the poles in the reference map in a probabilistic form that allows the Bayesian inference implemented by the PF to contain map uncertainties and reduce localization errors.
5. Evaluating the gain of fusion by testing the UKF on three different cases (lidar + radar, lidar only, and radar only).
6. Evaluating the PF performance using different particle populations and under various map uncertainties.
7. Evaluating the real-time performance of both the UKF and PF on a moderate computational platform.
8. Employing the GB-DBSCAN clustering algorithm to detect potential objects from the lidar/radar fused data and finding their centroids.
9. Employing the RANSAC algorithm to extract the pole parameters by fitting circles (which represent the pole shape) to the clusters identified by GB-DBSCAN.
10. Employing the ICP algorithm for the data association between the detected poles in the sensor data and the registered poles in the reference map.
2 The RT_MCL Method Overview
The flowchart of the proposed RT_MCL autonomous car
localization method is shown in Fig.1. The input to the
proposed algorithm can be sorted into four items:
(a) The readings of a global navigation satellite system (GNSS) such as GPS, integrated with those of an inertial measurement unit (IMU), provide the initial estimate of the ego-car pose. The GPS provides a low-accuracy initial position, and the IMU provides the incremental change in this position as well as the heading angle estimate. The main principle of GPS-IMU fusion is correcting the accumulated errors of dead reckoning at intervals with absolute position readings (Suhr et al. 2017). The fused output is used in the initialization step of the particle filter.
(b) The odometry readings in terms of the ego-car velocity
and yaw rate are filtered from high-frequency noise and
used as a control input to the particle filter.
Fig. 1 RT_MCL roadmap
(c) The lidar/radar measurement data are fused using the proposed UKF to detect objects in the ego-car surroundings. The fused data are then clustered using the GB-DBSCAN algorithm (Dietmayer et al. 2012) to identify potential objects. The algorithm is tuned to preferentially identify pole-like static objects. The Doppler velocity of radar detections is used to discard detections originating from moving objects.
(d) An offline high-definition 3-D point cloud reference map with identified pole-like landmark coordinates is used as an input to the RT_MCL localizer. The pole-like objects identified by the GB-DBSCAN algorithm are then aligned with the pole-like landmarks in the map in a process called "data association": the coordinates of the poles resulting from the clustering are associated with those from the map using the iterative closest point (ICP) algorithm (Lu and Milios 1997). This association is crucial for localization accuracy.
The core of the RT_MCL localizer is the particle filter
(Thrun 2002), which will be explained later in detail. The
particle filter uses the four inputs to determine the pose of
the ego-car with much higher accuracy than the one received
from the GPS/IMU fusion.
3 The Unscented Kalman Filter Overview
The Kalman filter (KF) (Zarchan and Musoff 2013) is a system of equations working together to form a predict-update cyclic optimal estimator that minimizes the estimated error covariance. The KF estimates the state $x \in \mathbb{R}^n$ given the measurement $z \in \mathbb{R}^m$ of a discrete-time controlled process that is modeled by a set of linear stochastic difference equations.
However, as the KF is limited to linear processes, it is not suitable for the radar measurement process, which is inherently nonlinear. Therefore, the unscented Kalman filter was introduced (Wan and Van Der Merwe 2000) to overcome this limitation. The UKF is a derivative-free alternative to the extended Kalman filter (EKF) (Einicke and White 1999) that uses a deterministic sampling approach. The UKF likewise follows the two-step predict-update process; however, these steps are now augmented with further steps such as the generation and prediction of sigma points, as shown in Fig. 2.
In the UKF process, the state Gaussian distribution is represented using a minimal set of carefully chosen sample points, called sigma points. $n_x = 2n + 1$ sigma points are selected based on the following rule:

$$X_k = \left[ \; x_k \quad x_k + \sqrt{\left(\lambda + n_x\right) P_k} \quad x_k - \sqrt{\left(\lambda + n_x\right) P_k} \; \right] \quad (1)$$

where $X_k$ is the sigma-point matrix, which includes $n_x$ sigma-point vectors, $P_k$ is the KF process estimate covariance, and $\lambda$ is a design parameter that determines the spread of the generated sigma points and usually takes the form $\lambda = 3 - n_x$.

In the sigma-point prediction step, each generated sigma point is inserted into the UKF nonlinear process model given in Eq. (2) to produce the matrix of the predicted (estimated) sigma points, which has an $n \times n_x$ dimension:

$$\hat{X}_{k+1} = f\left(X_k, \nu_k\right) \quad (2)$$

where $\nu_k$ is the process white noise, which is Gaussian ($N$) with zero mean and covariance matrix $Q_k$.

In the next step, the predicted state mean and covariance matrices are calculated from the predicted sigma points as given in Eq. (3):

$$\hat{x}_{k+1} = \sum_{i=0}^{n_x} w_i \hat{X}_{k+1,i}, \qquad \hat{P}_{k+1} = \sum_{i=0}^{2n_x} w_i \left(\hat{X}_{k+1,i} - \hat{x}_{k+1}\right)\left(\hat{X}_{k+1,i} - \hat{x}_{k+1}\right)^T \quad (3)$$

where the $w_i$'s are the sigma-point weights that are used here to invert the spreading of the sigma points. These weights are calculated as shown in Eq. (4).
Fig. 2 UKF roadmap
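The sigma-point machinery of Eqs. (1), (3) and (4) can be sketched compactly. The following is a minimal Python illustration (the paper's implementation is in C++); the function names are hypothetical, and the spread parameter uses the common heuristic $\lambda = 3 - n$ for an $n$-dimensional state. For a linear (identity) process, the recovered mean and covariance match the originals exactly:

```python
import numpy as np

def generate_sigma_points(x, P, lam=None):
    """Generate the 2n+1 sigma points of Eq. (1) and their weights (Eq. (4))."""
    n = x.size
    if lam is None:
        lam = 3.0 - n                      # common spread heuristic
    S = np.linalg.cholesky((lam + n) * P)  # matrix square root of (lambda+n)P
    X = np.column_stack([x] + [x + S[:, i] for i in range(n)]
                            + [x - S[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (lam + n)))
    w[0] = lam / (lam + n)
    return X, w

def unscented_mean_cov(X, w):
    """Recover mean and covariance from (transformed) sigma points (Eq. (3))."""
    x_hat = X @ w
    d = X - x_hat[:, None]
    P_hat = d @ np.diag(w) @ d.T
    return x_hat, P_hat
```

In the UKF proper, the columns of `X` would first be pushed through the nonlinear process model $f(\cdot)$ of Eq. (2) before the mean and covariance are recovered.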
In the measurement prediction step, each generated sigma point is inserted into the UKF nonlinear measurement model given in Eq. (5) to produce the matrix of the predicted measurement sigma points, which has an $n \times n_x$ dimension.

In the next step, the predicted measurement mean and covariance matrices are calculated from the predicted sigma points as well as the measurement noise covariance matrix $R$, as given in Eq. (6), where the $w_i$'s are the sigma-point weights determined using Eq. (4), $S_k$ is the measurement covariance matrix, and $E\{.\}$ is the expected value of the measurement white noise $\omega_k$, which is Gaussian ($N$) with zero mean and covariance matrix $R$.

The final step is the UKF state update, where the UKF gain matrix ($K$) is calculated as in Eq. (7) using the calculated cross-correlation matrix ($T$) between the sigma points in the state space and the measurement space. The gain is used to update the UKF state vector ($x$) as well as the state covariance matrix ($P$).
$$w_i = \frac{\lambda}{\lambda + n_x}, \; i = 0; \qquad w_i = \frac{1}{2\left(\lambda + n_x\right)}, \; i = 1 \ldots n_x \quad (4)$$

$$\hat{Z}_{k+1} = h\left(\hat{X}_{k+1}\right) \quad (5)$$

$$\hat{z}_{k+1} = \sum_{i=0}^{n_x} w_i \hat{Z}_{k+1,i}, \qquad S_{k+1} = \sum_{i=0}^{2n_x} w_i \left(\hat{Z}_{k+1,i} - \hat{z}_{k+1}\right)\left(\hat{Z}_{k+1,i} - \hat{z}_{k+1}\right)^T + R, \qquad R = E\left\{\omega_k \omega_k^T\right\} \quad (6)$$

$$T_{k+1} = \sum_{i=0}^{2n_x} w_i \left(\hat{X}_{k+1,i} - \hat{x}_{k+1}\right)\left(\hat{Z}_{k+1,i} - \hat{z}_{k+1}\right)^T, \quad K_{k+1} = T_{k+1} S_{k+1}^{-1}, \quad x_{k+1} = \hat{x}_{k+1} + K_{k+1}\left(z_{k+1} - \hat{z}_{k+1}\right), \quad P_{k+1} = \hat{P}_{k+1} - K_{k+1} S_{k+1} K_{k+1}^T \quad (7)$$

4 The Road Object Model

The state of the road object (Schubert et al. 2008) is determined by five variables grouped into the state vector $x$ shown in Eq. (8), where $p_x$ and $p_y$ are the object position on the x-axis and y-axis, respectively, as shown in Fig. 3, $v$ is the magnitude of the object velocity derived from its x and y components, $v_x$ and $v_y$, respectively, $\psi$ is the yaw angle (object orientation) and $\dot{\psi}$ is the rate of change of the object-yaw angle.
The nonlinear difference equation $x_{k+1} = f\left(x_k, \nu_k\right)$ that describes the motion model of the object is derived based on the state vector $x$ and presented in Eqs. (9)-(11).

$$x = \begin{bmatrix} p_x \\ p_y \\ v \\ \psi \\ \dot{\psi} \end{bmatrix}, \qquad v = \sqrt{v_x^2 + v_y^2}, \qquad \psi = \tan^{-1}\frac{v_y}{v_x} \quad (8)$$

$$x_{k+1} = x_k + \begin{bmatrix} \frac{v_k}{\dot{\psi}_k}\left(\sin\left(\psi_k + \dot{\psi}_k \Delta t\right) - \sin\psi_k\right) \\ \frac{v_k}{\dot{\psi}_k}\left(-\cos\left(\psi_k + \dot{\psi}_k \Delta t\right) + \cos\psi_k\right) \\ 0 \\ \dot{\psi}_k \Delta t \\ 0 \end{bmatrix} + \nu_k \quad (9)$$

$$\nu_k = \begin{bmatrix} \frac{1}{2}(\Delta t)^2 \cos\psi_k \cdot \nu_{a,k} \\ \frac{1}{2}(\Delta t)^2 \sin\psi_k \cdot \nu_{a,k} \\ \Delta t \cdot \nu_{a,k} \\ \frac{1}{2}(\Delta t)^2 \cdot \nu_{\ddot{\psi},k} \\ \Delta t \cdot \nu_{\ddot{\psi},k} \end{bmatrix} \quad (10)$$
Fig. 3 A road object motion model
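The deterministic part of the motion model in Eq. (9) can be sketched as a single prediction step. The following is a minimal Python illustration (the paper's implementation is in C++); the function name is hypothetical, and a straight-line limit is added because Eq. (9) divides by $\dot{\psi}_k$ and therefore assumes a nonzero yaw rate:

```python
import math

def ctrv_predict(px, py, v, psi, psi_dot, dt):
    """One prediction step of the motion model in Eq. (9), noise term omitted."""
    if abs(psi_dot) > 1e-6:
        px += (v / psi_dot) * (math.sin(psi + psi_dot * dt) - math.sin(psi))
        py += (v / psi_dot) * (-math.cos(psi + psi_dot * dt) + math.cos(psi))
    else:  # near-zero yaw rate: straight-line limit avoids division by zero
        px += v * math.cos(psi) * dt
        py += v * math.sin(psi) * dt
    psi += psi_dot * dt          # v and psi_dot are constant in this model
    return px, py, v, psi, psi_dot
```

With zero yaw rate the object moves in a straight line; with a constant nonzero yaw rate it traces a circular arc, which is exactly the behavior Eq. (9) encodes.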
where $\Delta t$ is the time difference between two consecutive samples, $\ddot{\psi}$ is the yaw acceleration, $a$ is the longitudinal acceleration, $\nu_{a,k}$ is the longitudinal acceleration noise at sample $k$ with variance $\sigma_a^2$, and $\nu_{\ddot{\psi},k}$ is the yaw acceleration noise at sample $k$ with variance $\sigma_{\ddot{\psi}}^2$:

$$\Delta t = t_{k+1} - t_k, \qquad \nu_{a,k} \sim N\left(0, \sigma_a^2\right), \qquad \nu_{\ddot{\psi},k} \sim N\left(0, \sigma_{\ddot{\psi}}^2\right) \quad (11)$$

5 GPS/IMU Fusion

This work uses an IMU (an integrated accelerometer, gyroscope and magnetometer) and GPS to determine the initial orientation and position of the ego-car. The following sheds light on the IMU/GPS fusion algorithm developed for this purpose:

1. Setting the sampling rates: the accelerometer and gyroscope run at a relatively high sample rate (160 Hz). Conversely, the GPS and the magnetometer run at a relatively low sample rate (1 Hz). In this fusion algorithm, the magnetometer and GPS samples are processed together at 1 Hz, and the accelerometer and gyroscope samples are processed together at 160 Hz; however, only one out of every 160 samples is used in the fusion process.
2. Setting up the fusion filter: the fusion filter uses an extended Kalman filter (EKF) (Farag 2020) to track orientation (as a quaternion), velocity, position, sensor biases and the geomagnetic vector.
3. Prediction step: this step takes the accelerometer and gyroscope samples from the IMU as inputs and predicts the states one time step ahead based on them. The error covariance of the EKF is updated here.
4. Update step: this step updates the EKF states based on GPS samples by computing a Kalman gain that weights the various sensor inputs according to their uncertainty (Farag 2021). The error covariance is also updated here, this time using the Kalman gain as well.
5. The above procedure is implemented at a low update rate (1 Hz), which makes the results prone to significant localization errors; therefore, the output of the IMU + GPS fusion is used as an initialization for the RT_MCL algorithm proposed in this paper.
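The core principle above (dead reckoning corrected at intervals by absolute position readings) can be illustrated with a deliberately simplified 1-D Python toy. This is not the paper's EKF: the function name is hypothetical, a fixed blending gain stands in for the Kalman gain, and quaternion/bias states are omitted:

```python
def fuse_step(x_est, v_meas, dt, gps=None, gain=0.8):
    """Dead-reckon with a high-rate velocity sample; when a low-rate absolute
    fix arrives, pull the estimate toward it (fixed gain stands in for the
    EKF's Kalman gain)."""
    x_est += v_meas * dt                 # prediction from odometry/IMU
    if gps is not None:                  # 1 Hz absolute correction
        x_est += gain * (gps - x_est)    # bound the accumulated drift
    return x_est
```

Integrating a biased velocity alone makes the estimate drift without bound; the periodic absolute fix resets that drift, which is why the 1 Hz GPS update keeps the 160 Hz dead reckoning usable as an initializer.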
6 Sensor Fusion Using UKF

The processed lidar measurement vector includes the moving object centroid position ($p_x$ and $p_y$) in Cartesian coordinates (Eq. (12)), while the radar measurement vector includes the moving object centroid position ($\rho$, $\varphi$) and radial velocity ($\dot{\rho}$) in polar coordinates, as represented by Eq. (13). The mapping function that specifies how the lidar Cartesian coordinates map to the radar polar coordinates is given in Eq. (14).

According to Fig. 4, each sensor has its own update scheme; however, both sensors share the same state prediction scheme. The belief about the object's position and velocity is updated asynchronously each time a measurement is received, regardless of the source sensor.
In Fig. 4, after computing the elapsed time between consecutive sensor readings ($\Delta t$), the sigma points ($X_k$) are generated using Eq. (1), and then a next-time-step prediction for the sigma points ($\hat{X}_{k+1}$) is carried out using Eq. (2) while employing the moving-object nonlinear motion model given in Eq. (9). The resulting predicted sigma points are then used to compute the state mean ($\hat{x}_{k+1}$) and covariance ($\hat{P}_{k+1}$) matrices using Eq. (3).

The fusion technique then branches into two directions based on the source of the last sensor measurement (Farag 2020). If the source is a radar, then, employing the nonlinear radar measurement model (Eq. (13)), the predicted measurement sigma points ($\hat{Z}_{k+1}$) are calculated from the predicted state sigma points ($\hat{X}_{k+1}$) using Eq. (5). Then, the predicted measurements ($\hat{z}_{k+1}$) and their covariance matrix ($S_{k+1}$) are calculated based on Eq. (6) using the measurement noise covariance matrix $R_{radar}$ given in Eq. (15). Then, $\hat{x}_{k+1}$ and $\hat{z}_{k+1}$ are used to compute the cross-correlation matrix ($T_{k+1}$) between the sigma points in the state space ($\hat{X}_{k+1}$) and the measurement space ($\hat{Z}_{k+1}$) as in Eq. (7). Based on this cross-correlation matrix, the Kalman filter gain ($K_{k+1}$) is then calculated and used to compute the updated object state vector ($x_{k+1}$) and covariance matrix ($P_{k+1}$), as shown by Eq. (7).
$$z_{lidar} = \begin{bmatrix} p_x \\ p_y \end{bmatrix}, \qquad z_{radar} = \begin{bmatrix} \rho \\ \varphi \\ \dot{\rho} \end{bmatrix} \quad (12)$$

$$h(x) = \begin{bmatrix} \rho \\ \varphi \\ \dot{\rho} \end{bmatrix} = \begin{bmatrix} \sqrt{p_x^2 + p_y^2} \\ \arctan\frac{p_y}{p_x} \\ \frac{p_x v_x + p_y v_y}{\sqrt{p_x^2 + p_y^2}} \end{bmatrix} \quad (13)$$

$$p_x = \rho \cos(\varphi), \qquad p_y = \rho \sin(\varphi) \quad (14)$$
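Eqs. (13) and (14) can be sketched directly. The following is a minimal Python illustration of the radar measurement model and the inverse position mapping (the paper's implementation is in C++; function names are hypothetical):

```python
import math

def h_radar(px, py, vx, vy):
    """Radar measurement model of Eq. (13): Cartesian state -> (rho, phi, rho_dot)."""
    rho = math.hypot(px, py)
    phi = math.atan2(py, px)              # atan2 keeps the correct quadrant
    rho_dot = (px * vx + py * vy) / rho   # radial velocity component
    return rho, phi, rho_dot

def polar_to_cartesian(rho, phi):
    """Inverse position mapping of Eq. (14)."""
    return rho * math.cos(phi), rho * math.sin(phi)
```

A position mapped to polar coordinates and back is recovered exactly, which is the round-trip consistency the UKF relies on when mixing lidar and radar updates.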
where $\sigma_\rho$ is the noise standard deviation of the object radial distance, $\sigma_\varphi$ is the noise standard deviation of the object heading (bearing), and $\sigma_{\dot{\rho}}$ is the noise standard deviation of the object radial velocity.
If the measurement data source is a lidar sensor, then, employing the linear lidar measurement model ($H_{lidar}$) shown in Eq. (16), the predicted measurement sigma points ($\hat{Z}_{k+1}$) are directly calculated from ($\hat{X}_{k+1}$). Then, the predicted measurements ($\hat{z}_{k+1}$) and their covariance matrix ($S_{k+1}$) are calculated based on Eq. (6) using the measurement noise covariance matrix $R_{lidar}$ given in Eq. (16). Then, $\hat{x}_{k+1}$ and $\hat{z}_{k+1}$ are used to compute the cross-correlation matrix ($T_{k+1}$) between the sigma points in the state space ($\hat{X}_{k+1}$) and the measurement space ($\hat{Z}_{k+1}$) as in Eq. (7). Based on this cross-correlation matrix, the Kalman filter gain ($K_{k+1}$) is then calculated and used to compute the updated object state vector ($x_{k+1}$) and covariance matrix ($P_{k+1}$), as shown by Eq. (7) (Schubert et al. 2008).
where $\sigma_{p_x}$ and $\sigma_{p_y}$ are the noise standard deviations for the object $x$ and $y$ positions, respectively.
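The shared update stage of Eq. (7), used identically for the radar and lidar branches, reduces to three matrix operations once $T_{k+1}$, $S_{k+1}$ and the innovation are available. A minimal Python sketch (the paper's implementation is in C++; the function name is hypothetical, and the standard innovation $z - \hat{z}$ is used):

```python
import numpy as np

def ukf_update(x, P, z, z_hat, T, S):
    """UKF update of Eq. (7): gain from the cross-correlation matrix T and the
    innovation covariance S, then state and covariance correction."""
    K = T @ np.linalg.inv(S)      # Kalman gain K = T S^-1
    x = x + K @ (z - z_hat)       # correct the predicted state
    P = P - K @ S @ K.T           # shrink the predicted covariance
    return x, P
```

Note that the covariance always shrinks (or stays the same) in this step, reflecting the information gained from the measurement.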
$$R_{radar} = \begin{bmatrix} \sigma_\rho^2 & 0 & 0 \\ 0 & \sigma_\varphi^2 & 0 \\ 0 & 0 & \sigma_{\dot{\rho}}^2 \end{bmatrix} \quad (15)$$

$$H_{lidar} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{bmatrix}, \qquad R_{lidar} = E\left\{\omega \omega^T\right\} = \begin{bmatrix} \sigma_{p_x}^2 & 0 \\ 0 & \sigma_{p_y}^2 \end{bmatrix} \quad (16)$$

7 Clustering and Data Association

Figure 4 presents the lidar and radar data fusion technique employing the UKF. The UKF produces point clouds that provide information about objects in the ego-car surroundings. Clustering is a key tool to extract these objects' information (geometry and poses) from the UKF point clouds. The objective of the clustering is to represent each object in the form of a source-point model to reduce computational cost and memory requirements.

UKF data processing is performed using clustering and association algorithms. DBSCAN (Sander et al. 1996) is an unsupervised clustering algorithm that groups data points together if the density of the points is high enough. It requires two parameters to determine the density. The first parameter is ε, the radial distance from a point p that is being evaluated. The second parameter is minPts, the least number of detections that have to be within a distance ε from p, including p itself, to form a cluster.
By choosing ε and minPts, it is then possible to decide the necessary density for a group of points to form a cluster; however, these fixed parameters are not convenient if various types and topologies of road objects need to be detected. As an improvement to this algorithm, GB-DBSCAN was introduced (Dietmayer et al. 2012). It works in the same manner as DBSCAN but does not have fixed parameters. Instead, a polar grid is created according to the radial and angular resolution of the sensor, and instead of looking at a circular search area with a fixed radius, GB-DBSCAN can use a more dynamic, elliptic search area. The most distinctive feature of a pole-like object is that the density of the point cloud at its position is far greater than in its surroundings.
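The ε/minPts density logic described above can be sketched in a few lines. The following is a minimal Python illustration of plain DBSCAN (the paper's implementation is in C++ and uses the grid-based GB-DBSCAN variant, whose neighborhoods are elliptic rather than circular; the function name is hypothetical):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch over 2-D points; returns one label per point,
    with -1 marking noise. GB-DBSCAN swaps the fixed circular eps-neighborhood
    for a grid-based, elliptic search area, but the density logic is the same."""
    n = len(points)
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    # Precompute eps-neighborhoods (each includes the point itself, as minPts does).
    neighbors = [[j for j in range(n) if dist(points[i], points[j]) <= eps]
                 for i in range(n)]
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None or len(neighbors[i]) < min_pts:
            continue                      # already assigned, or not a core point
        cluster += 1
        queue = [i]
        while queue:                      # grow the cluster from core points
            j = queue.pop()
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:
                queue.extend(neighbors[j])
    return [lbl if lbl is not None else -1 for lbl in labels]
```

On UKF point clouds, a pole shows up as exactly such a dense group, while isolated returns fall below minPts and are discarded as noise.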
While GB-DBSCAN is used for coarse clustering, RANSAC (Fischler and Bolles 1981) is used to fine-tune the clustering and associate geometrical shape proposals with potential coarse clusters. RANSAC here is used to fit a circle (which represents the pole shape) to all points ($N$) in each cluster. After successful fitting, RANSAC can extract the pole parameters (center ($\hat{x}_c, \hat{y}_c$) and radius ($\hat{r}$)) from the fitted circles by solving the following equation:
$$\min\left\{\frac{1}{N}\sum_{i=1}^{N}\left[\sqrt{\left(x_i - \hat{x}_c\right)^2 + \left(y_i - \hat{y}_c\right)^2} - \hat{r}\right]^2\right\} \quad (17)$$

Establishing a matching link between the detected pole-like objects in the lidar/radar data (the source) and the pole-like landmarks in the reference map (the target) is referred to as the data association step. This step is necessary for the proper execution of the particle filter. The data association step is carried out using the iterative closest point (ICP) algorithm. Instead of working on the whole point clouds in both the source and the target, as in the standard application of this algorithm (Suhr et al. 2017), only the centroids of the pole-like objects are considered for the matching process. This way, a huge amount of memory and processing time is saved.

The ICP algorithm works by iterating a two-step procedure until convergence. The first step is matching each point in a set of source points, X (lidar/radar data), to the closest point in a set of target points, Y (reference map), and the second step is finding the optimal transform between the source and target sets, given the assignments. Matching points by distance is a computationally efficient operation if a k-d tree data structure is used to store Y. Here, the points in X are denoted as $x_i$ and the matched (closest) point to $x_i$ in Y is denoted as $y_i$. The basic 2-D ICP version minimizes the sum of squared distances between source and target points to find the rotation angle $\varphi$, encoded by a rotation matrix $R(\varphi)$, and the translation $t$:

$$\min_{\varphi, t}\left\{\sum_{i=1}^{N}\left(y_i - \left(R(\varphi)x_i - t\right)\right)^T\left(y_i - \left(R(\varphi)x_i - t\right)\right)\right\} \quad (18)$$
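The inner least-squares step of Eq. (18) has a closed-form 2-D solution once the correspondences are fixed. Below is a minimal Python sketch (the paper's implementation is in C++; the function name is hypothetical, and the common $R(\varphi)x_i + t$ sign convention is used). A full ICP loop would alternate this step with nearest-neighbor matching:

```python
import math

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform for matched 2-D point pairs:
    returns the rotation angle phi and translation t mapping src onto dst."""
    n = len(src)
    # Centroids of both point sets.
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered pairs.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    phi = math.atan2(sxy - syx, sxx + syy)       # optimal 2-D rotation angle
    tx = cdx - (math.cos(phi) * csx - math.sin(phi) * csy)
    ty = cdy - (math.sin(phi) * csx + math.cos(phi) * csy)
    return phi, (tx, ty)
```

Because only pole centroids are matched, each ICP iteration operates on a handful of points rather than full point clouds, which is exactly the memory and runtime saving described above.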
To perform the data association, both the observations and the landmarks in the reference map should be in the same coordinate system. Observations in the ego-car coordinate system ($x_c$ and $y_c$) can be transformed into map coordinates ($x_m$ and $y_m$) by passing them through the homogeneous transformation matrix shown in Eq. (19), which performs rotation and translation using the map particle/ego-car coordinates ($x_p$ and $y_p$) and the rotation angle $\theta$.
$$\begin{bmatrix} x_m \\ y_m \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & x_p \\ \sin\theta & \cos\theta & y_p \\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} \quad (19)$$
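Expanded, Eq. (19) is just two scalar expressions per observation. A minimal Python sketch (the paper's implementation is in C++; the function name is hypothetical):

```python
import math

def car_to_map(xc, yc, xp, yp, theta):
    """Apply the homogeneous transform of Eq. (19): an observation (xc, yc) in
    the particle/ego-car frame is rotated by theta and translated by the
    particle pose (xp, yp) into the map frame."""
    xm = math.cos(theta) * xc - math.sin(theta) * yc + xp
    ym = math.sin(theta) * xc + math.cos(theta) * yc + yp
    return xm, ym
```

In the particle filter, this transform is applied per particle, since each particle hypothesizes its own pose ($x_p$, $y_p$, $\theta$).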
8 Particle Filter Overview

Particle filtering uses a finite set of particles to represent the posterior distribution $bel(x_t)$ of some stochastic process given noisy and/or partial observations $p(z_t \mid x_t)$. It is an approximate realization of the recursive Bayesian filter stated in Eq. (20), where $\zeta$ is a normalization factor:

$$bel(x_t) \leftarrow \zeta \, p(z_t \mid x_t) \, bel(x_{t-1}) \quad (20)$$

Typically, the number of particles, $M$, should be large enough to represent the belief $bel(x_t)$ accurately. The set of particles at time step $t$ is denoted as

$$\chi_t = \left\{ x_t^{[i]} \mid 1 \le i \le M \right\} \quad (21)$$

Each particle $x_t^{[i]}$ is a hypothesis about the actual state at time $t$. Table 1 describes a simple implementation of the particle filter, and Fig. 5 depicts the whole flowchart. The recursive procedure is performed whenever a measurement
Fig. 4 Lidar and radar data fusion using UKF
update ($z_t$) along with a new set of odometry data ($u_t$) becomes available.

The prediction step is implemented by the loop at line 2. One state hypothesis ($x_i^{[m]}$) is generated for each particle based on its current state and the state transition distribution $p\left(x_t \mid u_t, x_{t-1}^{[m]}\right)$, which is computed using the particle (car) motion model described in Sect. 4. An importance factor (weight) $w_t^{[i]}$ is calculated or updated for each newly generated hypothesis using a multivariate Gaussian probability density function for each observation, combining the likelihoods of all the observations by taking their product as given in Eq. (22):

$$w_t^{[m]} = \prod_{i=1}^{N} \frac{\exp\left(-\frac{1}{2}\left(z_i^{[t]} - \mu_i^{[t]}\right)^T \Sigma^{-1} \left(z_i^{[t]} - \mu_i^{[t]}\right)\right)}{\sqrt{\left|2\pi\Sigma\right|}} \quad (22)$$

where $z_i^{[t]}$ is the $i$th landmark observation for particle $m$ at step $t$, $\mu_i^{[t]}$ is the predicted (mean) measurement for the landmark corresponding to the $i$th observation at step $t$, $\Sigma$ is the covariance matrix of the measurements, and $N$ is the total number of measurements for one particle.
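For the common special case of independent x/y measurement noise, $\Sigma = \mathrm{diag}(\sigma_x^2, \sigma_y^2)$ and the product in Eq. (22) reduces to a product of bivariate Gaussian densities. A minimal Python sketch under that diagonal-covariance assumption (the paper's implementation is in C++; the function name is hypothetical):

```python
import math

def particle_weight(obs, pred, sig_x, sig_y):
    """Importance weight of Eq. (22) for independent x/y measurement noise:
    product of bivariate Gaussian densities over all associated landmarks."""
    w = 1.0
    norm = 1.0 / (2.0 * math.pi * sig_x * sig_y)
    for (zx, zy), (mx, my) in zip(obs, pred):
        e = ((zx - mx) ** 2) / (2 * sig_x ** 2) \
          + ((zy - my) ** 2) / (2 * sig_y ** 2)
        w *= norm * math.exp(-e)
    return w
```

A particle whose predicted landmark positions sit closer to the actual observations receives a strictly larger weight, which is what drives the resampling step toward good pose hypotheses.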
Afterward, in line 3, \(M\) new particles are resampled from the previous set of particles (\(\bar{\chi}_t\)) proportionally to their importance factors (\(\alpha w_t^{[i]}\)) determined in line 2.ii, where \(\alpha\) is a normalization factor. This resampling yields \(\chi_t\), the updated posterior approximation. Note that \(\chi_t\) will generally contain duplicates, which take the places of the particles that were not drawn in line 3.i because they had evolved into less likely hypotheses.
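This importance-resampling step can be sketched with the C++ standard library's discrete distribution, which normalizes the weights internally and so plays the role of the factor \(\alpha\). This is a simplified sketch, not the paper's implementation:

```cpp
#include <random>
#include <vector>

// Line 3 of the pseudo-code: draw M particle indices with probability
// proportional to the importance weights. Likely particles are drawn
// several times (duplicates); unlikely ones may not be drawn at all.
std::vector<int> resampleIndices(const std::vector<double>& weights,
                                 std::mt19937& rng) {
    std::discrete_distribution<int> draw(weights.begin(), weights.end());
    std::vector<int> picked(weights.size());
    for (int& idx : picked) idx = draw(rng);
    return picked;
}
```

The returned indices select which particles survive into \(\chi_t\); copying the selected states and resetting the weights to uniform completes the step.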
To check the convergence of the particle filter, the weighted-average error (\(\mathrm{Error}_{\mathrm{weighted}}\)) of all the particles is used as a convergence indicator. \(\mathrm{Error}_{\mathrm{weighted}}\) is computed as given in Eq. (23): the root-squared error between each particle state \(p_i\) and the ground truth \(g_i = \left[x_i^{gt}\;\; y_i^{gt}\;\; \theta_i^{gt}\right]\) is multiplied by the particle's weight; the products are summed over all particles, and the sum is divided by the aggregated particle weights.
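The weighted-error indicator described above can be sketched as follows; the Pose struct is illustrative, not from the paper's code:

```cpp
#include <cmath>
#include <vector>

struct Pose { double x, y, theta; };  // illustrative particle/ground-truth pose

// Weighted-average root-squared pose error: the Euclidean distance of each
// particle from the ground truth, weighted by the particle's importance
// weight and normalized by the aggregated weights.
double weightedError(const std::vector<Pose>& particles,
                     const std::vector<double>& weights, const Pose& gt) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < particles.size(); ++i) {
        const double dx = particles[i].x - gt.x;
        const double dy = particles[i].y - gt.y;
        const double dt = particles[i].theta - gt.theta;
        num += weights[i] * std::sqrt(dx * dx + dy * dy + dt * dt);
        den += weights[i];
    }
    return num / den;  // the weights need not be pre-normalized
}
```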
9 Implementation of the RT_MCL

Both the UKF and the PF are implemented using the high-performance language GCC C++ (GCC 2020) on the Ubuntu Linux operating system (Ubuntu Linux 2020). This combination is well suited to the required real-time performance. A C++ package for numerical solvers and matrix and vector operations, "Eigen" (2020), is used to numerically calculate the object model and to perform the predict and update steps efficiently.

The object motion model described by Eqs. (9)–(11) includes several noise parameters that need to be carefully
\[
\mathrm{Error}_{\mathrm{weighted}} = \frac{\sum_{i=1}^{M} w_i \left\| p_i - g_i \right\|}{\sum_{i=1}^{M} w_i} \tag{23}
\]
Table 1 Particle filter pseudo-code

[Fig. 5 flowchart: the ego-car initial pose from the GPS + IMU fusion (subject to sensor noise), the odometry (yaw rate and velocity, subject to sensor noise), the pole-landmark positions from the reference map (subject to map uncertainty), and the lidar/radar feature measurements fused by the UKF feed the particle filter's initialization, prediction, update (weights), and resample steps.]

Fig. 5 Particle filter algorithm flowchart
set. Table 2 presents the fine-tuned parameters for both the UKF and the PF.

The UKF design is considered consistent if the estimation error is unbiased, i.e., has zero mean, and if the actual mean square error of the filter matches the filter-calculated state covariance. As a measure of filter consistency, the time-averaged normalized innovation squared (NIS) (Piché 2016) can be used to fine-tune the noise parameters. The metric, described by Eq. (24), computes the \(\mathrm{NIS}\) value at each sample \(k\) and then averages these values (\(\mathrm{NIS}_{\mathrm{Average}}\)) over a moving window of measurements of length \(N\).
The proper initialization of the UKF is crucial to its subsequent performance (Zhao and Huang 2017). The main initialized variables are the state estimate vector (\(x\)) and its estimate covariance matrix (\(P\)). The first two terms of the state vector \(x\) given by Eq. (8), \(p_x\) and \(p_y\), are simply initialized using the first received raw sensor measurement. For the other three terms of the state vector, intuition augmented with some trial and error is used to initialize these variables, as listed in Table 3. The state covariance matrix is initialized as a diagonal matrix that contains the covariance of each variable estimate (Eq. (25)).
To check the performance of the UKF, in terms of how far the estimated results are from the true results (ground truth), the root-mean-squared error (RMSE) given in Eq. (26) is used. The metric is calculated over a moving window of measurements of length \(N\) (Zhao and Huang 2017). \(x_k^{est}\) is the estimated state vector of the UKF, and \(x_k^{true}\) is the true state vector supplied by the simulator or given as training data during the UKF design phase.
\[
\mathrm{NIS}_k = \left(z_{k+1} - \hat{z}_k\right)^{T} S_k^{-1} \left(z_{k+1} - \hat{z}_k\right), \qquad \mathrm{NIS}_{\mathrm{Average}} = \frac{1}{N} \sum_{k=1}^{N} \mathrm{NIS}_k \tag{24}
\]
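For a 2-D measurement (e.g., the lidar's \(p_x\), \(p_y\)), the NIS of Eq. (24) reduces to a quadratic form in the 2 × 2 innovation covariance \(S\). A sketch, assuming \(S\) is supplied by the filter:

```cpp
#include <vector>

// NIS_k of Eq. (24) for a 2-D innovation (dx, dy) = z_{k+1} - z_hat_k with
// innovation covariance S = [[s00, s01], [s10, s11]], using the closed-form
// 2x2 inverse: S^-1 = (1/det) * [[s11, -s01], [-s10, s00]].
double nis2d(double dx, double dy,
             double s00, double s01, double s10, double s11) {
    const double det = s00 * s11 - s01 * s10;
    return (dx * (s11 * dx - s01 * dy) + dy * (s00 * dy - s10 * dx)) / det;
}

// NIS_Average of Eq. (24): the mean over a moving window of NIS samples.
double nisAverage(const std::vector<double>& window) {
    double sum = 0.0;
    for (double v : window) sum += v;
    return sum / window.size();
}
```

For a consistent filter, the 2-D NIS should exceed the 95% chi-squared threshold (about 5.99 for two degrees of freedom) in roughly 5% of the samples.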
\[
P = \mathrm{diag}\left(\sigma_{\hat{p}_x}^{2},\; \sigma_{\hat{p}_y}^{2},\; \sigma_{\hat{v}}^{2},\; \sigma_{\hat{\psi}}^{2},\; \sigma_{\hat{\dot{\psi}}}^{2}\right) \tag{25}
\]
\[
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left( x_k^{est} - x_k^{true} \right)^{2}} \tag{26}
\]
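For a single state component, the RMSE of Eq. (26) can be sketched as:

```cpp
#include <cmath>
#include <vector>

// RMSE of Eq. (26) for one state component over a window of N samples.
double rmse(const std::vector<double>& est, const std::vector<double>& truth) {
    double acc = 0.0;
    for (std::size_t k = 0; k < est.size(); ++k) {
        const double d = est[k] - truth[k];
        acc += d * d;  // squared estimation error at sample k
    }
    return std::sqrt(acc / est.size());
}
```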
The initialization of the PF is equally crucial to its performance. Therefore, it is carried out as follows:

(a) The number of particles is set to \(M = 50\). In the literature (Lu and Milios 1997; Farag 1998), this number usually ranges from 100 to 1000; however, it is a compromise between accuracy and computational speed. Several experimental trials were carried out and showed that 50 particles produce the required accuracy and real-time performance.
(b) The PF state vector (the particle poses) is initialized from the output of the GPS + IMU fusion \(\left(p_{x_{GPS}}, p_{y_{GPS}}, \theta_{GPS}\right)\) as given in Eq. (27), where \(p_x^{[m]}\), \(p_y^{[m]}\) and \(\theta_{Particle}^{[m]}\) represent the initialized pose of particle \(m\); \(\sigma_{x_{GPS}}\), \(\sigma_{y_{GPS}}\) and \(\sigma_{\theta_{GPS}}\) represent the GPS + IMU reading-noise standard deviations; and \(\sigma_{x_{artificial}}\), \(\sigma_{y_{artificial}}\) and \(\sigma_{\theta_{artificial}}\) represent the artificial noise added to each initial particle position for randomization, which helps the convergence of the PF (Lu and Milios 1997). The values of these parameters are listed in Table 4.
(c) The particle importance weights are all initialized with the uniform distribution \(w^{[m]} = \frac{1}{M}\).
(d) As the RT_MCL uses probabilistic maps, each pole-like landmark is represented by the Gaussian distributions \(N\!\left(p_{x_{Pole}}, \sigma_{x_{Pole}}^{2}\right)\) and \(N\!\left(p_{y_{Pole}}, \sigma_{y_{Pole}}^{2}\right)\) to model the uncertainty in its position. The values of the standard deviations \(\sigma_{x_{pole}}\) and \(\sigma_{y_{pole}}\) are listed in Table 4.
\[
\begin{aligned}
p_x^{[m]} &\sim N\!\left(p_{x_{GPS}},\; \sigma_{x_{GPS}}^{2} + \sigma_{x_{artificial}}^{2}\right)\\
p_y^{[m]} &\sim N\!\left(p_{y_{GPS}},\; \sigma_{y_{GPS}}^{2} + \sigma_{y_{artificial}}^{2}\right)\\
\theta_{Particle}^{[m]} &\sim N\!\left(\theta_{GPS},\; \sigma_{\theta_{GPS}}^{2} + \sigma_{\theta_{artificial}}^{2}\right)
\end{aligned} \tag{27}
\]
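The sampling of Eq. (27) can be sketched as follows, using the Table 4 values; since the GPS + IMU noise and the artificial noise are independent, their variances add, so each axis samples with \(\sigma = \sqrt{\sigma_{GPS}^2 + \sigma_{artificial}^2}\). The struct and function names are illustrative:

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Pose { double x, y, theta; };  // illustrative ego-car pose

// Sample M initial particles around the fused GPS + IMU pose per Eq. (27).
// Each axis uses sigma = sqrt(sigma_GPS^2 + sigma_artificial^2), with the
// sigma values taken from Table 4.
std::vector<Pose> initParticles(int M, const Pose& gps, std::mt19937& rng) {
    const double sx = std::sqrt(0.3 * 0.3 + 10.0 * 10.0);    // x position
    const double sy = std::sqrt(0.3 * 0.3 + 10.0 * 10.0);    // y position
    const double st = std::sqrt(0.01 * 0.01 + 0.05 * 0.05);  // heading
    std::normal_distribution<double> nx(gps.x, sx), ny(gps.y, sy),
                                     nt(gps.theta, st);
    std::vector<Pose> particles(static_cast<std::size_t>(M));
    for (Pose& p : particles) p = {nx(rng), ny(rng), nt(rng)};
    return particles;
}
```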
Table 2 The UKF and object model noise parameters

Parameter           Value (UKF/PF)   Parameter           Value (UKF)
σ_a (m/s²)          1.0              σ_py (lidar) (m)    0.15
σ_ψ̈ (rad/s²)        0.6              σ_ρ (radar) (m)     0.3
σ_ψ̇ (rad/s)         0.06             σ_φ (radar) (rad)   0.03
σ_px (lidar) (m)    0.15             σ_ρ̇ (radar) (m/s)   0.3
Table 3 Initialization of UKF states

Parameter    Value                Parameter      Value
px (m)       1st raw x-reading    σ̂_px (m)       1.0
py (m)       1st raw y-reading    σ̂_py (m)       1.0
v (m/s)      0.0                  σ̂_v (m/s)      √1000
ψ (rad)      0.0                  σ̂_ψ (rad)      √1000
ψ̇ (rad/s)    0.0                  σ̂_ψ̇ (rad/s)    √1000
Table 4 Initialization of the particle filter

Parameter    Value      Parameter        Value
σ_xGPS       0.3 m      σ_xartificial    10 m
σ_yGPS       0.3 m      σ_yartificial    10 m
σ_θGPS       0.01 rad   σ_θartificial    0.05 rad
σ_xpole      0.3 m      σ_ypole          0.3 m
To check the performance of the PF, in terms of how far the estimated poses are from the ground truth, the cumulative mean absolute error of each pose variable, given in Eq. (28), is used. The metric is calculated over a moving window of measurements of length \(N\). \(x_i^{best}\), \(y_i^{best}\) and \(\theta_i^{best}\) are the estimated pose variables of the PF, and \(x_i^{gt}\), \(y_i^{gt}\) and \(\theta_i^{gt}\) are the ground-truth variables supplied by the simulator or given as training data during the PF design phase.
10 Testing and Evaluation Results

Extensive trial-and-error attempts were used to tune the hyper-parameters of the RT_MCL. However, to be more consistent and accurate, numerical key performance indicators (KPIs) are constructed and coded, as in Eqs. (24), (26) and (28), to evaluate the performance of the localization technique under a given set of hyper-parameters.

Several test tracks have been used to evaluate the performance of the RT_MCL under different sets of hyper-parameters in an iterative tuning process (Farag 2019; 2020). The test cases and data are generated using the open-source
\[
\begin{aligned}
X_{error} &= \frac{1}{N} \sum_{i=1}^{N} \left| x_i^{best} - x_i^{gt} \right|\\
Y_{error} &= \frac{1}{N} \sum_{i=1}^{N} \left| y_i^{best} - y_i^{gt} \right|\\
Yaw_{error} &= \frac{1}{N} \sum_{i=1}^{N} \left| \theta_i^{best} - \theta_i^{gt} \right|
\end{aligned} \tag{28}
\]
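One component of Eq. (28) can be sketched as:

```cpp
#include <cmath>
#include <vector>

// Cumulative mean absolute error of Eq. (28) for one pose variable
// (x, y, or yaw) over a window of N samples.
double meanAbsError(const std::vector<double>& best,
                    const std::vector<double>& gt) {
    double acc = 0.0;
    for (std::size_t i = 0; i < best.size(); ++i)
        acc += std::fabs(best[i] - gt[i]);  // |estimate - ground truth|
    return acc / best.size();
}
```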
urban CARLA driving simulator (2020). An example of
these test tracks is shown in Fig.6. This track is 754 m long
with several curvatures and includes 42 pole-like landmarks
to emulate urban driving.
Table 5 presents the testing results of the lidar/radar fusion algorithm that uses the UKF. The performance evaluation is carried out on test tracks by detecting road objects (cyclists, cars, pedestrians, and pole-like landmarks). The RMSE KPI (Eq. (26)) is used to evaluate the UKF performance based on the five state variables: \(p_x\), \(p_y\), \(v_x\), \(v_y\), and \(\psi\). The KPI compares each estimated state variable to its ground-truth value and computes the error; the lower the value of the KPI, the better the performance.
To assess the significance of the fusion between lidar and radar in tracking, the UKF is tested once with measurements from the lidar alone and once with measurements from the radar alone. The results reported in Table 6 show how fusion makes the difference and substantially improves accuracy: the estimation of all state variables is markedly improved. For example, the RMSE of the x-position (\(p_x\)) estimation is reduced by 60% compared to the "lidar-alone" estimate and by 68% compared to the "radar-alone" estimate. Moreover,
Fig. 6 Ego-car localization results in the test track
Table 5 Performance evaluation of the UKF

State var   Cyclist   Car      Pedestrian   Pole
px          0.0648    0.1857   0.0652       0.0324
py          0.0809    0.1899   0.0605       0.0433
vx          0.1452    0.4745   0.5332       0.0032
vy          0.1592    0.5075   0.5442       0.0054
ψ           0.0392    0.2580   0.2075       0.0075
the RMSE of the x-velocity (\(v_x\)) estimation is reduced by 30% compared to the "lidar-alone" and by 26% compared to the "radar-alone" estimations. The NIS values are also calculated for the "lidar-alone" and "radar-alone" cases to test the consistency of the UKF in each case. The reported values show that fusion significantly improves consistency: the share of NIS values exceeding the 95% threshold is reduced by 31% compared to the "lidar-alone" case and by 58% compared to the "radar-alone" case.
Figures 6, 7 and 8 present an example of the testing and simulation results of the MCL algorithm, which uses the combination of the UKF and the PF while employing a probabilistic reference map. The performance evaluation is carried out on test tracks by detecting road pole-like landmarks with the UKF and using these detections, via the GB-DBSCAN and the PF, to localize the ego-car on the global map. In the mentioned figures, both the estimated pose values and the ground truth are drawn on top of each other due to the small reported
Fig. 7 Ego-car orientation estimation in the test track
Fig. 8 The ego-car speed and yaw rate during driving three laps in the test track
errors, as displayed in Table 6. The error KPI of Eq. (28) is used to evaluate the PF performance based on the three pose variables: \(x\), \(y\), and \(\theta\). The KPI compares each estimated pose variable to its ground-truth value and computes the error (Farag et al. 1997); the lower the value of the KPI, the better the performance. Several experiments have been carried out using the PF with different numbers of particles to optimize its real-time performance. The results in Table 7 show that the number of particles can go down to 25 while the PF still produces good results and fast execution with robust convergence (whereas 15 particles diverge). However, to ensure more robustness, 50 particles is considered the most appropriate selection, with a delicate balance between the achieved accuracy, real-time performance, and convergence. It is clear from the table that above 50 particles little further improvement in precision is achieved, and since a margin of safety is required to enhance robustness, 25 is not selected.
The PF performance is also studied under various uncertainties in the reference map. The uncertainties are modeled by the standard deviations of the positions of the pole-like landmarks stated in the reference map. Table 8 shows that an accurate reference map is crucial to the pose estimation of the ego-car; nevertheless, the RT_MCL is able to handle uncertainties of up to 2.0 m (\(2\sigma_{pole}\)) in the map poles' positions and still localize the ego-car with less than 30 cm of error.
Figure 9 shows the convergence performance of the particle filter during the initialization phase using 50 particles; the error stabilizes after 100 time steps. Figure 10 shows the behavior of the particles' weights over one lap of travel on the test track. The figure shows that the best weights are significantly higher than the average weights, which is a sign of convergence robustness. Moreover, there is an inverse relationship between the number of detected landmarks and the values of the best and average weights. Equation (22) can be rewritten in the more streamlined form of Eq. (23). The latter equation shows that the weight values are the product of the likelihoods of the pole-landmark observations, each represented by a multivariate Gaussian probability density function. The higher the number of observed landmarks, the greater the chance that some of these landmarks have very small likelihood values that bring the whole product down. After many experiments, it has been found that the reasonable number of detected landmarks at each time step for robust running of the RT_MCL lies in the range of 4 to 12 landmarks.
The many experiments with the RT_MCL pipeline proved it to be fast enough in execution to be used in real time. Using an Intel Core i5 with 1.6 GHz and 8 GB RAM, which is a very moderate computational platform (Nagiub and Farag 2013), the measurements in Table 9 were collected for the execution of the RT_MCL for a single ego-car pose estimation based on 12 pole-like landmarks (Fig. 11). Table 9 shows that it takes around 8.2 ms to execute the whole pipeline for a single-pose estimation. Most localization functions run at 10 Hz to 30 Hz; therefore, the proposed RT_MCL comfortably satisfies even the upper end of this requirement.

Considering that the lidar/radar measurements are collected at approximately a 30-fps rate (Yurtsever et al. 2020), the measurement cycle is 33.3 ms, which is large enough to accommodate more than 50 landmark detections using the UKF, according to the data in Table 9.
\[
w_t^{[m]} = \prod_{j=1}^{N} \frac{\exp\left(-\dfrac{\left(z_{x_j}^{[t]} - \mu_{x_j}^{[t]}\right)^{2}}{2\sigma_{x_{pole}}^{2}} - \dfrac{\left(z_{y_j}^{[t]} - \mu_{y_j}^{[t]}\right)^{2}}{2\sigma_{y_{pole}}^{2}}\right)}{2\pi\,\sigma_{x_{pole}}\,\sigma_{y_{pole}}} \tag{23}
\]
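The equivalence between Eq. (22) and this streamlined form follows directly from the diagonal covariance assumed for the pole landmarks:

```latex
\Sigma = \begin{bmatrix}\sigma_{x_{pole}}^{2} & 0\\ 0 & \sigma_{y_{pole}}^{2}\end{bmatrix}
\;\Longrightarrow\;
\frac{\exp\!\left(-\tfrac{1}{2}(z-\mu)^{T}\Sigma^{-1}(z-\mu)\right)}{\sqrt{\left|2\pi\Sigma\right|}}
= \frac{\exp\!\left(-\dfrac{(z_{x}-\mu_{x})^{2}}{2\sigma_{x_{pole}}^{2}}
                   -\dfrac{(z_{y}-\mu_{y})^{2}}{2\sigma_{y_{pole}}^{2}}\right)}
       {2\pi\,\sigma_{x_{pole}}\sigma_{y_{pole}}}
```

since \(\left|2\pi\Sigma\right| = (2\pi)^{2}\sigma_{x_{pole}}^{2}\sigma_{y_{pole}}^{2}\) and the inverse of a diagonal matrix is the diagonal of reciprocals.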
Table 6 Sensor fusion evaluation of the UKF (bicycle track)

                      Lidar + Radar   Lidar only   Radar only
RMSE—px               0.0648          0.1612       0.2031
RMSE—py               0.0809          0.1464       0.2539
RMSE—vx               0.1452          0.2082       0.1971
RMSE—vy               0.1592          0.2129       0.1871
RMSE—ψ                0.0392          0.0540       0.0480
NIS—Average           2.2797          1.6941       2.6576
NIS—Min               0.0012          0.04874      0.11309
NIS—Max               14.749          12.997       12.183
NIS > 95% threshold   2.2%            3.2%         5.2%
Table 7 The particle filter with different numbers of particles (Color table online)
Table 8 Effect of the landmark standard deviation

σ_xpole   σ_ypole   x-error   y-error   Yaw error
0.3       0.3       0.1143    0.1154    0.0040
0.5       0.5       0.1730    0.1633    0.0057
1.0       1.0       0.2926    0.2736    0.0098
11 Conclusion

In this paper, a real-time Monte Carlo localization (RT_MCL) method for autonomous cars is proposed, implemented, and described in detail. The method uses a tailored unscented Kalman filter to perform data fusion for the lidar and radar devices mounted on the ego-car. The raw lidar/radar data are fused using the UKF and then clustered using both the GB-DBSCAN and RANSAC algorithms to produce the detected pole-like landmarks' poses. These detected landmarks are then associated with their counterparts in the supplied reference map using the ICP
Fig. 9 Performance during the particle filter initialization
Fig. 10 Performance of particle weights during driving one lap
algorithm. Then, a tailored particle filter is designed to produce estimated ego-car poses measured in the global map coordinates.
The RT_MCL method is fully implemented using GCC C++ together with advanced math libraries to optimize its real-time performance. The design steps, initialization, tuning, and consistency evaluation of both the UKF and the PF are described in detail. The generic object motion model employed by both the UKF and the PF is comprehensive and is described using five state variables.

The validation results show that the proposed method reliably detects pole-like landmarks and estimates the ego-car pose with an 11-cm mean error in real time using only 50 particles. The measured throughput (execution time) on an affordable CPU proves that the RT_MCL pipeline is well suited to real-time ADAS or self-driving-car localization.
The experiments with both the UKF and the PF have shown that the RT_MCL pipeline can run at 30 Hz while handling up to 50 landmark detections. The reference map is represented in a probabilistic form by describing each landmark position by its mean centroid and standard deviation; the RT_MCL can handle uncertainties of up to 2.0 m in the landmark centroids and still localize the ego-car with less than 30 cm of error.
In the future, it is intended to perform further experimentation; to better evaluate and enhance the robustness and convergence of the proposed technique, including on-road experimentation; to add a front camera to the presented fusion technique and investigate the benefits it adds to the overall localization performance; and to augment the RT_MCL with other road objects such as guardrails, sidewalks, curbs, and intersection features.
Table 9 RT_MCL execution time for a single-pose estimation

Task                                                        Exec. time (ms)
UKF state estimation for 12 landmarks                       12 × 0.439
GB-DBSCAN + RANSAC + ICP clustering and data association    0.835
PF pose estimation                                          0.739
Control code overhead (20%)                                 1.368
Total                                                       8.210
Fig. 11 The number of detected pole-like landmarks during one lap
References
Alrousan, Q., Alzu’bi, H., Pfeil, A., & Tasky, T. (2020). Autonomous
vehicle multi-sensors localization in unstructured environment.
WCX SAE World Congress Experience, SAE International, USA,
April 14, 2020.
CARLA driving simulator. (2020). Retrieved from https://carla.org/, August 16, 2020.
Carlevaris-Bianco, N., Ushani, A. K., & Eustice, R. M. (2015). Univer-
sity of Michigan north campus long-term vision and lidar dataset.
International Journal of Robotics Research, 35(9), 1023–1035.
Dietmayer, K., Kellner, D., & Klappstein, J. (2012). Grid-based dbscan
for clustering extended objects in radar data. In IEEE intelligent
vehicles symposium, Alcala de Henares, Spain, June 2012.
Eigen. (2020). Retrieved from http://eigen.tuxfamily.org/index.php?title=Main_Page, March 11, 2020.
Einicke, G. A., & White, L. B. (1999). Robust extended Kalman filter-
ing. IEEE Transactions on Signal Processing, 47(9), 2596–2599.
Farag, W. (1998). Synthesis of intelligent hybrid systems for modeling
and control. Ph.D. Thesis, University of Waterloo, Canada.
Farag, W. (2019a). Cloning safe driving behavior for self-driving cars
using convolutional neural networks. Recent Patents on Computer
Science, 12(2), 120–127.
Farag, W. (2019b). Traffic signs classification by deep learning for
advanced driving assistance systems. Intelligent Decision Tech-
nologies, 13(3), 215–231.
Farag, W. (2019c). Safe-driving cloning by deep learning for autono-
mous cars. International Journal of Advanced Mechatronic Sys-
tems, 7(6), 390–397.
Farag, W. (2019d). Complex trajectory tracking using PID control for
autonomous driving. International Journal of Intelligent Trans-
portation Systems Research, 18, 356–366.
Farag, W. (2020a). A lightweight vehicle detection and tracking tech-
nique for advanced driving assistance systems. Journal of Intel-
ligent & Fuzzy Systems, 39(3), 2693–2710.
Farag, W. (2020b). A comprehensive real-time road-lanes tracking
technique for autonomous driving. International Journal of Com-
puting and Digital Systems, 9(3), 349–362.
Farag, W. (2020c). Road-objects tracking for autonomous driving using
lidar and radar fusion. Journal of Electrical Engineering, 71(3),
138–149.
Farag, W. (2020). Complex-track following in real-time using model-
based predictive control. International Journal of Intelligent
Transportation Systems Research.
Farag, W. (2021). Real-time lidar and radar fusion for road-objects
detection and tracking. In Intelligent decision technologies (IDT),
IOS Press, The Netherlands (vol. 15) (in-press).
Farag, W. A., Quintana, V. H., & Lambert-Torres, G. (1997). Neuro-
fuzzy modeling of complex systems using genetic algorithms.
In IEEE international conference on neural networks (IEEE
ICNN’97) 1 (pp. 444–449).
Farag, W., & Saleh, Z. (2018). Road lane-lines detection in real-time
for advanced driving assistance systems. In International confer-
ence on innovation and intelligence for informatics, computing,
and technologies (3ICT’18), Bahrain, November 18–20.
Farag, W., & Saleh, Z. (2018). Behavior cloning for autonomous driv-
ing using convolutional neural networks. In international confer-
ence on innovation and intelligence for informatics, computing,
and technologies (3ICT’18), Bahrain, November 18–20.
Farag, W., & Saleh, Z. (2019a). An advanced vehicle detection and
tracking scheme for self-driving cars. In 2nd smart cities sym-
posium (SCS’19), IET Digital Library, Bahrain, March 24–26.
Farag, W., & Saleh, Z. (2019b). An advanced road-lanes finding
scheme for self-driving cars. In 2nd smart cities symposium
(SCS’19), IET Digital Library, Bahrain, March 24–26.
Fischler, M., & Bolles, R. (1981). Random sample consensus: A para-
digm for model fitting with applications to image analysis and
automated cartography. Communications of the ACM, 24(6),
381–395.
GCC C++. (2020). Retrieved from https://gcc.gnu.org/, March 11, 2020.
Jo, K., Jo, Y., Suhr, J. K., Jung, H. G., & Sunwoo, M. (2015). Precise
localization of an autonomous car based on probabilistic noise
models of road surface marker features using multiple cameras.
IEEE Transactions on Intelligent Transportation Systems, 16(6),
3377–3392.
Kummerle, J., Sons, M., Poggenhans, F., Kuehner, T., Lauer, M., &
Stiller, C. (2019). Accurate and efficient self-localization on roads
using basic geometric primitives. In 2019 IEEE international con-
ference on robotics and automation, May 2019.
Lee, B.-H., Song, J.-H., Im, J.-H., Im, S.-H., Heo, M.-B., & Jee, G.-I.
(2015). GPS/DR error estimation for autonomous vehicle locali-
zation. Sensors, 15, 20779–20798. https://doi.org/10.3390/s150820779.
Levinson, J., & Thrun, S. (2010). Robust vehicle localization in urban
environments using probabilistic maps. In 2010 IEEE interna-
tional conference on robotics and automation, May 2010 (pp.
4372–4378).
Lu, F., & Milios, E. (1997). Robot pose estimation in unknown envi-
ronments by matching 2d range scans. Journal of Intelligent and
Robotic Systems, 18(3), 249–275.
Miura, S., Hsu, L. T., Chen, F., & Kamijo, S. (2015). GPS error correc-
tion with pseudorange evaluation using three-dimensional maps.
IEEE Transactions on Intelligent Transportation Systems, 16(6),
3104–3115.
Modsching, M., Kramer, R., & ten Hagen, K. (2006). Field trial on
GPS accuracy in a medium-size city: the influence of built-up.
In 3rd workshop on positioning, navigation and communication
(vol. 2006, pp. 209–218).
Nagiub, M., & Farag, W. (2013). Automatic selection of compiler
options using genetic techniques for embedded software design. In
IEEE 14th international symposium on computational intelligence
and informatics (CINTI), Budapest, Hungary, November 19, 2013.
Osterwood, C., & Noble, F. (2017). Localization for the next generation
of autonomous vehicles. Swift Navigation white paper. Retrieved from www.swiftnav.com.
Piché, R. (2016). Online tests of Kalman filter consistency. Interna-
tional Journal of Adaptive Control and Signal Processing, 30(1),
115–124.
Sander, J., Xu, X., Ester, M., & Kriegel, H.-P. (1996). A density-based
algorithm for discovering clusters in large spatial databases
with noise. In Proceedings of the 2nd international confernce
on knowledge discovery and data mining (pp. 226–231), August
1996.
Schaefer, A., Büscher, D., Vertens, J., Luft, L., & Burgard, W. (2019).
Long-term urban vehicle localization using pole landmarks
extracted from 3-D lidar scans. In European conference on mobile
robots (ECMR), Czech Republic, September 4–6, 2019.
Schubert, R., Richter, E., & Wanielik, G. (2008). Comparison and
evaluation of advanced motion models for vehicle tracking. In
11th international conference on information fusion, Cologne,
Germany.
Sefati, M., Daum, M., Sondermann, B., Kreisk, K. D., & Kampker, A.
(2017). Improving vehicle localization using semantic and pole-
like landmarks. In 2017 IEEE intelligent vehicles symposium (pp.
13–19), June 2017.
Suhr, J. K., Jang, J., Min, D., & Jung, H. G. (2017). Sensor fusion-
based low-cost vehicle localization system for complex urban
environments. IEEE Transaction on Intelligent Transportation
Systems, 18(5), 1078–1086.
Thrun, S. (2002). Particle filters in robotics. In Proceedings of 18th
annual conference on uncertainty in AI (UAI), Edmonton, Canada.
Ubuntu Linux. (2020). Retrieved from https://www.ubuntu.com/, March 11, 2020.
Wan, E. A., & Van Der Merwe, R. (2000). The unscented Kalman
filter for nonlinear estimation. In IEEE adaptive system for sig-
nal processing, communication, and control symposium, Alberta,
Canada.
Weng, L., Yang, M., Guo, L., Wang, B., & Wang, C. (2018). Pole-based
realtime localization for autonomous driving in congested urban
scenarios. In 2018 IEEE international conference on real-time
computing and robotics (pp. 96–101), August 2018.
Woo, A., Fidan, B., & Melek, W. W. (2019). Localization for autono-
mous driving. In R. Zekavat & R. Michael Buehrer (Eds.), Hand-
book of position location: Theory, practice, and advances (2nd
ed.). Hoboken: Wiley.
Yurtsever, E., Lambert, J., Carballo, A., & Takeda, K. (2020). A survey of autonomous driving: Common practices and emerging technologies. Retrieved from arXiv:1906.05113v2 [cs.RO], January 6, 2020.
Zarchan, P., & Musoff, H. (2013). Fundamentals of Kalman filter-
ing: A practical approach. ISBN 978-1-62410-276-9 (4th ed.).
Reston, VA: American Institute of Aeronautics and Astronautics,
Incorporated.
Zhao, S., & Huang, B. (2017). On initialization of the Kalman filter.
In 6th international symposium on advanced control of industrial
processes (AdCONIP), Taipei, Taiwan, May 28–31.
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.