International Journal of Innovative Research in Engineering and Management (IJIREM)
ISSN (Online): 2350-0557, Volume-10, Issue-3, June 2023
https://doi.org/10.55524/ijirem.2023.10.3.19
Article ID IJIR2413, Pages 131-133
www.ijirem.org
Innovative Research Publication 131
LiDAR for Object Detection in Self Driving Cars
Nisha Charaya
Assistant Professor, Department of Electronics and Communication Engineering, Amity University, Gurgaon, Haryana, India
Correspondence should be addressed to Nisha Charaya; charayanisha.1010@gmail.com
Copyright © 2023 Nisha Charaya. This is an open-access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT- Self-driving cars are a major invention in
vehicular automation. These cars use sensors to perceive
their surroundings and control the vehicle accordingly.
Object detection is a major task in these driverless cars. In
this study, the utilization of LiDAR to automatically control
speed, braking, and safety systems in response to sudden
changes in traffic conditions is presented. The aim of this
study is to perform precise and quick object detection for
LiDAR-based self-driving cars. This work proposes that
image processing be used to identify and differentiate
similar-looking objects so that carefully calculated driving
decisions can be taken.
KEYWORDS- Autonomous Self-driving, Driverless,
LiDAR, Object classification, Object detection
I. INTRODUCTION
A self-driving car is capable of travelling without human
intervention. It is sometimes referred to as a robotic car,
driverless car, or autonomous car. These vehicles sense
their environment using sensors. Control systems analyze
sensory data to build a three-dimensional representation of
the environment around the vehicle. The vehicle then
determines a suitable navigation route based on the model
as well as approaches for navigating traffic restrictions
(such as stop signs) and barriers [1] [2].
These autonomous vehicles are expected to have an impact
on the auto business, as well as the health, welfare, urban
planning, traffic, insurance, labor market, and other
industries once the technology is more
developed. However, it still faces a few technological obstacles:
- In tumultuous inner-city settings, artificial intelligence is still unable to perform as intended.
- A car's computer could potentially be compromised, as could a communication system between cars.
- The ability to avoid large animals requires recognition and tracking; Volvo found that software designed for caribou, deer, and elk was ineffective with kangaroos.
- The car's sensing and navigation systems are susceptible to different types of weather (such as snow) and to deliberate interference, including jamming and spoofing.
- The vehicles would need to be able to revert to sensible behaviors in situations where their maps might be outdated.
- Competition for the desired radio spectrum for in-car communications.
- Field programmability for the systems will require careful evaluation of product development and the component supply chain.
- For automated cars to operate at their best, the current road infrastructure may need to be modified.
- Validation of automated driving is challenging and needs novel simulation-based approaches comprising digital twins and agent-based traffic simulation [2] [3].
In addition to these, precise object detection becomes a
crucial task for these cars to be relied upon.
There are many ways to gather information from the
surroundings to detect an object, such as optical and
thermographic cameras, radar, LiDAR, ultrasound/sonar,
GPS, odometry, and inertial measurement units. Among
these, LiDAR is a promising and precise technology that
can be utilized for object detection.
II. TRACKING METHODS
For autonomous driving to be accurate and effective, object
tracking is crucial. The identification of objects such as
pedestrians, cars, and other obstacles from images and
vehicle sensor data is a crucial and challenging
interdisciplinary field. It incorporates contributions from
machine learning, signal processing, and/or computer
vision. The majority of processed sensor data comes in the
form of point clouds, images, or a combination of the two.
There are several ways to manage point cloud data, but the
most popular one is some kind of 3D grid where a voxel
engine is used to navigate the point space. When numerous
forms of sensor data are available, registration, point
matching, and image/point cloud fusion may be necessary.
The need to take into consideration temporal cues and
estimate motion from time-based frames makes this task
more challenging [4] [5].
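To make the voxel-grid idea above concrete, here is a minimal NumPy sketch of voxel downsampling, in which each occupied cell of a 3D grid is reduced to the centroid of its points. The function name and the 1 m voxel size are illustrative assumptions, not part of any specific LiDAR toolkit.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    # Map each 3D point to an integer voxel index
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Find the occupied voxels and which voxel each point falls in
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    # Accumulate point sums per voxel, then divide to get centroids
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

# A small cluster near the origin plus one distant point (illustrative data)
pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.1, 0.1], [5.0, 5.0, 5.0]])
out = voxel_downsample(pts, voxel_size=1.0)
print(out.shape)  # (2, 3): two occupied voxels remain
```

Downsampling like this keeps the grid navigable at a fraction of the original point count before any registration or fusion step.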
Rarely does a single target appear in the scenes involved in
autonomous driving scenarios. The majority of the time,
several items must be simultaneously detected and tracked,
some of which may be moving with respect to the vehicle
and to other objects. As such, most approaches in the
related literature handle more than one object and are
therefore aimed at solving multiple object tracking (MOT)
problems [5].
A sequence of sensor data is available from one or multiple
vehicle-mounted acquisition devices. Most related
methods involve assigning an ID to every object detected
within a frame, and then attempting to match the IDs across
subsequent frames. Given that the monitored objects may
enter and exit the frame at various timestamps, this is
frequently a challenging process. They
might also be blocked from view by their surroundings or
even by one another. Additional problems may be caused
by defects in the acquired images: noise, sampling or
compression artifacts, aliasing, or acquisition errors [4] [5].
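The frame-to-frame ID matching described above can be sketched with a deliberately simple greedy matcher: each new detection claims the nearest unclaimed track ID from the previous frame, or a fresh ID if nothing is close enough. The `match_ids` helper, its centroid representation, and the 2 m gate are illustrative assumptions rather than a production tracker.

```python
import numpy as np

def match_ids(prev_tracks, detections, max_dist=2.0):
    """prev_tracks: {track_id: (x, y)}; detections: list of (x, y).
    Returns {detection_index: track_id}, minting new IDs for entrants."""
    assignments, used = {}, set()
    next_id = max(prev_tracks, default=-1) + 1
    for i, c in enumerate(detections):
        best, best_d = None, max_dist
        for tid, p in prev_tracks.items():
            d = np.hypot(c[0] - p[0], c[1] - p[1])
            if tid not in used and d < best_d:
                best, best_d = tid, d
        if best is None:          # no track nearby: object entered the frame
            assignments[i], next_id = next_id, next_id + 1
        else:                     # continue the existing track
            assignments[i] = best
            used.add(best)
    return assignments

prev = {0: (0.0, 0.0), 1: (10.0, 0.0)}
curr = [(0.5, 0.1), (10.2, 0.3), (30.0, 30.0)]  # third object just appeared
print(match_ids(prev, curr))  # {0: 0, 1: 1, 2: 2}
```

Real systems replace the greedy loop with optimal assignment and add occlusion handling, but the ID-persistence idea is the same.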
The most prevalent input for object tracking in automated
driving is real-time video. Thus, in addition to recognizing
individual objects, the goal is to link tracked objects across
numerous video frames. Additional difficulties arise when
accounting for variations in motion, such as when objects
undergo rotation or scaling transformations or when their
relative movement speeds are high [5].
Images are typically the primary mode for interpreting the
scene. As a result, 2D MOT is the focus of many efforts in
the associated literature. These techniques rely on a series
of detection and tracking phases, in which successive
detections that have the same classification are connected to
establish trajectories. The inherent existence of noise in the
captured images poses a substantial difficulty because it
may negatively alter the attributes of identical objects
across consecutive frames. Therefore, computing robust
features is a crucial component of object detection. Color,
frequency and distribution, shape, geometry, contours, or
relationships within segmented objects are just a few
examples of the many different object qualities that features
might represent. The most widely used feature detection
techniques nowadays use supervised learning. Machine
learning algorithms are used to gradually refine features,
which initially start off as collections of random numbers.
Such methods call for the careful selection of hyper-
parameters and the use of appropriate training data,
frequently discovered through trial and error. The best
results, however, in terms of accuracy and robustness to
affine transformations, occlusion, and noise are provided by
supervised classification and regression approaches,
according to a number of findings from the associated
literature [5].
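As a concrete, hypothetical example of the hand-crafted geometric features mentioned above, a detected cluster can be summarized by its bounding-box extents and point count before being fed to a supervised classifier; the feature names and values below are illustrative assumptions, not from a specific detector.

```python
import numpy as np

def cluster_features(points):
    # Axis-aligned extents of the cluster's bounding box
    extent = points.max(axis=0) - points.min(axis=0)
    return {
        "length": float(extent[0]),    # along x
        "width": float(extent[1]),     # along y
        "height": float(extent[2]),    # along z
        "n_points": int(len(points)),  # crude density cue
    }

# Corners of a roughly car-sized cluster (made-up values)
car = np.array([[0, 0, 0], [4.2, 1.8, 0], [4.2, 0, 1.5], [0, 1.8, 1.5]], float)
f = cluster_features(car)
print(f["length"], f["height"])  # 4.2 1.5
```

A feature vector like this is what sliding-window SVM approaches consumed before learned features took over.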
III. PROPOSED METHOD
LiDAR combined with image processing can give precise
results for object detection. Further, it can be combined with
speed and distance measurements so that the control system
can take decisions accordingly [6].
The term "light detection and ranging," or LiDAR, refers to
a remote-sensing technique that employs pulsed laser beams
to measure the distance to nearby
objects. As LiDAR systems have shrunk in size, more uses
have emerged that make use of the technology's
adaptability, accuracy, and record-breakingly quick data
collection. Most notably, carmakers are leveraging LiDAR
capabilities as a key component in their race to develop
safe, self-driving vehicles [7] [8].
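The ranging principle itself is a time-of-flight calculation: the pulse travels to the object and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_time_s):
    # The pulse covers the distance twice (out and back)
    return C * round_trip_time_s / 2.0

# A return 200 ns after emission corresponds to roughly 30 m
print(round(lidar_range(200e-9), 2))  # 29.98
```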
Both ADAS (Advanced Driver Assistance Systems) and
autonomous vehicles use LiDAR because its sensors enable
accurate, trustworthy navigation during real-time
autonomous operation on highways and in urban areas. In
order to help vehicles safely move at varied speeds, they
can identify and track other vehicles, pedestrians, and other
objects. This includes traveling night and day in a range of
road conditions such as rain, sleet, and snow. However,
LiDAR alone cannot differentiate between, for example, a
plastic bag and a solid obstacle, or between a cyclist giving
a hand signal to change lanes and a rod projecting out from
a pillar, and such ambiguities may affect the control of
autonomous cars. This raises the need for precise object
detection based on inputs obtained from LiDAR sensors so
that similar-appearing objects can be differentiated [9] [10].
To overcome these issues, object detection techniques from
image processing can be combined with LiDAR-based
autonomous cars in such a way that the cloud points
obtained from these sensors are clustered together to form a
recognizable shape, as shown in Figure 1.
Figure 1: Clustering of Cloud Points [10]
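The clustering of cloud points into a recognizable shape (Figure 1) can be sketched as a simple distance-threshold grouping: points connected by neighbour chains within a radius receive the same label. The flood-fill implementation and the radius value are illustrative assumptions, not the exact method behind the figure.

```python
import numpy as np

def euclidean_cluster(points, radius=1.0):
    n = len(points)
    labels = [-1] * n               # -1 means "not yet assigned"
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]                 # flood-fill from an unlabeled seed point
        labels[i] = cluster
        while stack:
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero(dists <= radius)[0]:
                if labels[k] == -1:
                    labels[k] = cluster
                    stack.append(int(k))
        cluster += 1
    return labels

pts = np.array([[0, 0], [0.5, 0], [0.9, 0.2], [5, 5], [5.3, 5.1]], float)
print(euclidean_cluster(pts))  # [0, 0, 0, 1, 1]
```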
Thereafter, image classification techniques are used to
identify these shapes as objects, as shown in Figure 2.
Figure 2: Classification of Shapes formed [11]
Thereafter, modeling of the classified objects is done to
predict their possible movements and speeds, as shown in Figure 3.
Figure 3: Modeling of Classified Objects [11]
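A first-order version of the movement prediction in Figure 3 is a constant-velocity model: given an object's estimated position and speed, extrapolate where it will be a short time ahead. This is a deliberate simplification of the modeling step, for illustration only.

```python
def predict_position(pos, vel, dt):
    # Constant-velocity extrapolation: x' = x + v * dt (per axis)
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

# A pedestrian at (2 m, 5 m) walking 1.5 m/s along x, predicted 2 s ahead
print(predict_position((2.0, 5.0), (1.5, 0.0), dt=2.0))  # (5.0, 5.0)
```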
IV. OBJECT CLASSIFICATION
Finding objects in point cloud data is the initial step in
recognizing and classifying them using a variety of
techniques and algorithms. After converting raw data into a
point cloud structure, one of the first steps is point
clustering or segmentation, which consists of grouping
points based on common characteristics. After this
step, redundant data can be removed from the point cloud,
resulting in less data to be transferred and processed in the
upcoming phases. Some methods begin by categorizing the
point cloud into background and foreground data in
applications where the sensor maintains a stationary
position. Because they do not depict dynamic objects, points
that are in the same place across several frames are disregarded
as background. For the remaining points (foreground), the
distance between points is measured, and points close to
each other are clustered and marked with a bounding box as
they possibly represent an object. However, when the
sensor moves with the car, these approaches are not
effective, as the background and objects move together
inside the point cloud. Therefore, automotive approaches
require robust and faster algorithms since the objects in the
point cloud also change at higher frequencies. Initial
approaches used sliding-window algorithms with Support
Vector Machine (SVM) classifiers and hand-crafted
features for object detection, but these were quickly replaced
by newer, superior techniques such as 2D representations,
volumetric-based, and raw point-based methods, which deploy
machine learning in the perception system of the
vehicle. These classification/object detection results, along
with LiDAR-based distance and speed measurements, can
help in more accurate control of self-driving
cars [12][13][14].
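For the stationary-sensor case described above, background removal and bounding boxes can be sketched as follows: points that reappear in nearly the same position across frames are treated as background, and the moving remainder is boxed as a candidate object. The tolerance value and helper names are assumptions for illustration.

```python
import numpy as np

def filter_background(prev_frame, curr_frame, tol=0.1):
    """Keep only points with no near-identical counterpart last frame."""
    keep = []
    for p in curr_frame:
        if np.linalg.norm(prev_frame - p, axis=1).min() > tol:
            keep.append(p)          # moved or newly appeared: foreground
    return np.array(keep)

def bounding_box(points):
    # Axis-aligned box marking a candidate object
    return points.min(axis=0), points.max(axis=0)

prev_f = np.array([[0, 0], [1, 0], [2, 0]], float)                # static wall
curr_f = np.array([[0, 0], [1, 0], [2, 0], [4, 3], [4.5, 3.2]], float)
fg = filter_background(prev_f, curr_f)
print(len(fg))  # 2 foreground points survive the filter
```

A moving sensor breaks this scheme, as the text notes, which is why automotive pipelines turn to learned detectors instead.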
V. CONCLUSION
Autonomous cars are a big milestone in the automation of
vehicles. Cars with advanced driver assistance that provide
numerous features such as cruise control are already on the
market and aid in comfortable driving. However, the switch
to fully autonomous cars is still far ahead because of the
technological challenges involved. LiDAR combined with a
suitable object classification algorithm can be a potential
approach for object detection.
REFERENCES
[1] S. I. N. Y. H. D. A. N. S. Karkra, N. Charya, "Unmanned
Ground Vehicle", IJRITCC, vol. 5, no. 5, pp. 773–775, May
2017.
[2] Xie, S.; Hu, J.; Bhowmick, P.; Ding, Z.; Arvin, F.,
"Distributed Motion Planning for Safe Autonomous Vehicle
Overtaking via Artificial Potential Field," IEEE Transactions
on Intelligent Transportation Systems, 2022.
[3] Gehrig, Stefan K.; Stein, Fridtjof J. (1999). Dead
reckoning and cartography using stereo vision for an
automated car. IEEE/RSJ International Conference on
Intelligent Robots and Systems, Vol. 3, Kyongju, pp. 1507–1512.
doi:10.1109/IROS.1999.811692. ISBN 0-7803-5184-3.
[4] https://www.labellerr.com/blog/object-tracking-in-
autonomous-vehicles-how-it-works/
[5] https://encyclopedia.pub/entry/8270
[6] Alam Bhuiyan, Ifte Khairul. (2017). LiDAR Sensor for
Autonomous Vehicle. 10.13140/RG.2.2.16982.34887/1.
[7] Shahian-Jahromi Babak et al 2017 IOP Conf. Ser.: Mater.
Sci. Eng. 224 012029
[8] https://www.automotiveworld.com/articles/LiDARs-for-self-
driving-vehicles-a-technological-arms-race/
[9] https://www.geospatialworld.net/prime/business-and-
industry-trends/LiDAR-the-driving-technology-for-
autonomous-and-semi-autonomous-mobility/
[10] https://www.csail.mit.edu/news/more-efficient-LiDAR-
sensing-self-driving-cars
[11] https://www.ff.com/us/futuresight/what-is-LiDAR/
[12] Arnold, E.; Al-Jarrah, O.Y.; Dianati, M.; Fallah, S.; Oxtoby,
D.; Mouzakitis, A. A Survey on 3D Object Detection
Methods for Autonomous Driving Applications. IEEE
Trans. Intell. Transp. Syst. 2019, 20, 3782–3795.
[13] Shi, S.; Wang, X.; Li, H. PointRCNN: 3D Object Proposal
Generation and Detection from Point Cloud. In Proceedings
of the 2019 IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20
June 2019; pp. 770–779.
[14] Wu, J.; Xu, H.; Tian, Y.; Pi, R.; Yue, R. Vehicle Detection
under Adverse Weather from Roadside LiDAR Data.
Sensors 2020, 20, 3433.
ABOUT THE AUTHOR
Dr. Nisha Charaya is working as an
Assistant Professor in the Department of
Electronics and Communication
Engineering, Amity University Haryana.
Her research areas include signal
processing, image processing, biometrics
and machine learning. She has over 13
years of experience in academics and has
authored numerous research papers in
conferences and journals.