A Deep Learning Based Autonomous Electric
Vehicle on Unstructured Road Conditions
Ashik Adnan
Department of Computer Science and
Engineering
BRAC University
Dhaka, Bangladesh
ashik.adnan@g.bracu.ac.bd
Mahfuza Sultana Mim
Department of Computer Science and
Engineering
BRAC University
Dhaka, Bangladesh
mahfuza.sultana.mim@g.bracu.ac.bd
G M Mahbubur Rahman
Department of Computer Science and
Engineering
BRAC University
Dhaka, Bangladesh
g.m.mahbubur.rahman@g.bracu.ac.bd
Md. Khalilur Rahman
Department of Computer Science and
Engineering
BRAC University
Dhaka, Bangladesh
khalilur@bracu.ac.bd
Md. Mahafuj Hossain
Department of Computer Science and
Engineering
BRAC University
Dhaka, Bangladesh
md.mahafuj.hossain@g.bracu.ac.bd
Abstract—Autonomous vehicles, also known as driver-less cars, are among the most astonishing advances of the twenty-first century, anticipated to be the driver-less, efficient, crash-avoiding ideal urban cars of the future. Autonomous cars sense the environment, navigate, and fulfill human transportation needs without any human involvement. Cameras, radar, lidar, GPS, and navigational pathways help this type of vehicle detect its surroundings. Even when conditions change, advanced control systems interpret sensory data to maintain the vehicle's position. Autonomous vehicles are on their way to transforming the world's transportation system. To reach this goal, automobile manufacturers have already begun working in this area to realize the potential and tackle the challenges, and a few companies have started their trials. Autonomous driving will help reduce traffic and pollution, avoid most accidents, save time, conserve energy, and improve human safety. With the aim of addressing these challenges in our country, we are building an autonomous car that will protect us from the hazards we routinely confront on the road. Besides, it is high time work began in Bangladesh on a driver-less vehicle.
Index Terms—Autonomous Vehicle; Machine Learning; Electric Vehicle; Artificial Intelligence; Collision Avoidance; Road Safety; Lane Detection; Vehicle Detection; Sensors
I. INTRODUCTION
Every day, thousands of people are killed or injured in road accidents across Bangladesh. The majority of these collisions are the result of driver error, and countless motorists, commuters, and bystanders are injured through no fault of their own. Instead, they are forced to deal with painful and costly damages that might have been prevented if proper vehicle safety precautions had been adopted. In Bangladesh, many drivers are irresponsible behind the wheel. Some of them drive for extended periods, such as 12 to 18 hours every day, and those who drive long distances may spend more than 24 hours on the road. Many local and city bus drivers in Bangladesh drive after drinking; even when they take additional drugs, they remain psychologically impaired, which leads to frequent road accidents. Conventional automobiles, meanwhile, consume a tremendous quantity of gasoline, and their internal combustion engines emit large amounts of heat and exhaust fumes even at idle, contributing to global warming and air pollution. To address these difficulties, we built an electric vehicle from the ground up and made it self-driving using deep learning techniques.
II. RELATED WORK
”Framing Machine Driving Decisions” is the challenge Cunneen et al [1] take on. For the first time, autonomous cars feature socially embedded forms of artificial intelligence that can make complicated risk-mitigation decisions with real-world repercussions. Considering that AI decisions are fundamentally different from human decision-making processes, this paper examines the flaws in the current framing of the discussion and asks how AI weighs decisions, how we should regulate these judgments, and what such conclusions signify in relation to other people's decisions. When it comes to safety concerns and AI's role as a moral agent, it provides
a lot of options for moving forward. According to Bennajeh et al [2], perception is a primary concern in the operation of a driving unit, where it is important to grasp how each of the data, events, and activities affects the state of the environment and the driver's intentions, both now and in the near future. In this frame of reference, they present a five-layer driving framework that ensures the self-determination and road safety of a driver agent. Cugurullo et al [3] shed
light on driverless automobiles, city planning, and human
settlement. As indicated in this paper, artificial-intelligence-driven autonomous cars are progressively being incorporated into city transportation systems, affecting both their design and sustainability. This research illuminates the urban transformation
to driverless transportation in three ways. An understanding
of the diffusion of autonomous vehicles in cities is based on
three interrelated factors: community perceptions, technology
advancements, and urban governance. According to He et al
[4], in an emergency, autonomous automobiles’ main priorities
are accident prevention and stability. To avoid a crash while
stabilizing driverless cars in actual driving circumstances at
handling limits, significant actuator inputs with a strong dy-
namic tyre turning sensitivity are required. A growing interest
in when and how driverless car technologies will impact
transportation, ways of life and the surrounding structures has
prompted Saghir et al [5]. Despite the potential for considerable change, urban and transportation planners must fully integrate driverless cars into their strategic planning. The researchers examined planners' understanding of and cooperation with the potential implications of these emerging technologies: to fully take advantage of driverless cars, administrators must be more creative in including them in their planning. Self-driving cars are becoming a hot topic in both academia and industry, according to Cao et al [6]. Lane-changing may be an important element of autonomous driving on major roadways, and many studies have looked into driverless vehicles' lane changes, but research on helping autonomous cars make mandatory lane changes (MLCs) is rare. To determine the optimal place to issue a lane change command using automotive route systems, this article describes an optimization model. First, an exponential distribution is used to describe the time spent waiting for a safe gap to change lanes. Then, using traffic shockwave theory and horizontal queuing theory, lane-specific journey times are estimated for automobiles in different conditions. Finally, the predicted travel time for an automobile receiving a lane-changing directive is calculated. The proposed model's advice can save vehicles considerable time in situations where traffic density or travel speed is high, and the developed model can assist self-driving cars in finding the best route. As stated in
Wigley et al [7], connected and autonomous vehicles (CAVs) are promised to transform mobility, making transportation available to all, even those unable to drive due to age, affordability, or disability. An analysis of CAV visualizations, as well as of the ramping up of the technology from modest trials to large-scale roll-out, is presented in this research. It examines a large collection of images from a CAV experiment in a part of the United Kingdom and shows how they reinforce rather than challenge gendered mass transit associations. The study also highlights other ways in which visualizations of CAV-enabled network mobility reinforce existing network capital imbalances, and it considers the metropolitan context for which CAVs are intended. The absence of humans and of geographic specificity enables CAV technology to be envisioned in various locations and circumstances, the report argues; thus, the software and technology underpinning the driver-less car appear to be transferable to, and advantageous in, other situations and conditions. A new method for analyzing the stability of path-tracking algorithms with pure delays in the control loop is presented by Heredia et al [8]. Such delays can cause severe instability in driverless driving; a simple approximation of the characteristic polynomial that arises when the delay time is examined eliminates the problem. The research focused on straight and curved paths, and the method has been applied to an important path-tracking algorithm, pure pursuit. Despite the difficulty of real trials with real automobiles near the stability limits, the research describes several tests with two different outdoor automated vehicles (ROMEO-3R and a computer-operated HMMWV). These experiments showed how the basic model predictions of the reliability of the recommended techniques are confirmed in
reality. According to Marchet et al [9], materials-handling vendors are constantly producing new solutions in response to increased competition and smaller order volumes. In recent years, automated vehicle storage and retrieval systems (AVS/RS) have been developed for unit-load storage and retrieval. The research gives a clear design framework and simulates the main design trade-offs for this configuration. Based on recent AVS/RS implementation data, the proposed framework's application is demonstrated, along with the fundamental design differences between tier-captive and tier-to-tier AVS/RS arrangements. Based on the research of Faisal et al [10],
driverless cars represent the future of transportation systems. Academia, the public sector, and private industry have all raised their interest in this modern driving breakthrough, but the sheer scope of AV research, with literature generated across numerous fields, is a major impediment to a clear understanding of it. This paper therefore aims to outline the research on AVs for a better grasp of the trends, themes, and linkages. A rail-guided vehicle moves along rectilinear paths between and within the aisles of unit-load storage racks, according to Malmborg et al [11]. Lifts installed along the racks' perimeter facilitate vertical vehicle movement. This is an alternative to conventional automated storage and retrieval, and it enables consumers to fit vehicle fleet size and lift count to storage system transaction levels. A model is also offered to predict performance as a function of essential system characteristics such as storage capacity, rack layout, and fleet size. Millar et al [12] illustrate how to computerize growing amounts of intelligent decision making so our technologies can run efficiently, but automating intelligent decision making presents new difficulties for developers and designers. The author examines this and many other ethical concerns raised by automated intelligent decision making, and outlines a design tool that may be incorporated into the development process to help engineers, planners, ethicists, and legislators decide how best to automate specific sorts of intelligent decision making.
According to Montanaro et al [13], connected autonomous vehicles can help improve traffic flow and driving safety while reducing excessive fuel use and pollution. In order to improve mobility, connected self-driving cars use network connectivity to provide collaborative features such as cooperative sensing and coordination. Cui et al [14] proposed an essential steering assist (ESA) system that combines automated steering and differential braking. The controller is designed to balance the conflicting goals of vehicle stability and rear-end collision avoidance. The entire collision avoidance maneuver is broken into two stages: guiding the car into the neighboring lane and then guiding it along the adjacent lane's center line. The differential braking system is also designed to address the needs of collision prevention and vehicle stabilization. Raviteja et al [15] look at the
technologies and methods employed in self-driving cars. This field has seen several approaches, from radio-operated automation systems to sensor-driven, neural-network-driven automation. The paper also discusses dead reckoning and perception: using GPS, dead reckoning estimates the current position from previously stored information, while high-precision data are acquired using RTK and DGPS devices. The perception system reads the external world using cameras, radar, lidar, and ultrasonic sensors. Koike et al [16] propose a network infrastructure for self-driving cars. In this scenario, automotive navigation technologies are among the most significant components: they integrate judgments and help anticipate a car's future position. The automobile obtains broadcast data from the cloud, and the occupants retrieve videos to keep themselves entertained while the driver concentrates on the road. The first application is a navigation system that can use cloud data to make decisions.
III. SYSTEM OVERVIEW
We have divided our work into four sections: mechanical design and implementation, electronics, control and autonomy, and power. Major portions of the mechanical body, electronics, and control were completed previously; now we need to upgrade them and install the necessary sensors and cameras. Our main focus is to develop the algorithms and make the car ready for autonomy. We are trying to find a reliable solution to every problem through an organized component research procedure.
The mechanical section mainly focused on the skeleton of the car. In general, a car contains mechanical parts such as the chassis, suspension system, steering system, wheels, and drive engine. As we are building an electric automobile, we are using an electric motor instead of a combustion engine. After mathematically evaluating a couple of chassis structure mechanisms, we selected a combination of ladder and tube space frame design with proper triangulation. The reason for choosing this design is that the space frame transmits the flexing load as tension and compression loads along the length of each strut. We also evaluated many suspension systems and designed some unique suspensions for our car. From our observations, one of the best solutions is the double wishbone suspension system with independent movement of each wheel. In the electronics section, we mainly focused on the drive mechanism: we used a MOSFET-based motor driver and a PWM controller (throttle) to drive the vehicle, and eight 12 Ah dry-cell batteries to power it.
To automate the vehicle, the team focuses on edge microcontroller units, specifically the STM32 NUCLEO. We process the raw sensor data at the edge and send it to the central processing unit for AI processing. As the STM32 NUCLEO has lower latency than comparable MCUs, it processes the real-time sensor data and sends it to the controller.
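The paper does not specify the wire format of the MCU-to-controller link; as an illustration only (the start byte, field layout, and checksum scheme are our assumptions, not the actual protocol), a compact binary frame for six sonar readings could be packed and verified like this:

```python
import struct

# Hypothetical frame layout for the MCU -> Pi link (our assumption,
# not the paper's actual protocol): one start byte, six 16-bit sonar
# distances in millimetres, and a simple additive checksum byte.
FRAME_FMT = "<B6HB"  # start byte, 6 x uint16, checksum

def pack_frame(distances_mm):
    """Pack six sonar distances into a 14-byte binary frame."""
    checksum = sum(distances_mm) & 0xFF
    return struct.pack(FRAME_FMT, 0xAA, *distances_mm, checksum)

def unpack_frame(frame):
    """Unpack a frame, verifying the start byte and checksum."""
    fields = struct.unpack(FRAME_FMT, frame)
    start, distances, checksum = fields[0], fields[1:7], fields[7]
    if start != 0xAA or checksum != (sum(distances) & 0xFF):
        raise ValueError("corrupted frame")
    return list(distances)

readings = [1200, 850, 430, 999, 1500, 300]
assert unpack_frame(pack_frame(readings)) == readings
```

A fixed-size frame with a checksum keeps parsing on the receiving side trivial and lets corrupted serial reads be discarded cheaply.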
We chose the Raspberry Pi 4 Developer Kit as the central controller unit. This compact, multicore computer is capable of scheduling the execution of numerous neural networks for image classification, object identification, segmentation, and speech synthesis, which enables us to analyze camera images, map them, and perform localization using multiple neural networks.
We went a step beyond traditional lane detection by using a popular approach, a Convolutional Neural Network (CNN). Our model detects lanes based on instance segmentation and is made more robust and efficient by using edge-cloud computing; it dynamically identifies the previous and current lanes. Anticipating human behavior is one of the most challenging and concerning problems in autonomous driving systems, so our car tries to detect human behavior through motion prediction and implements a sophisticated emergency braking system called AB. We also implemented computer vision to understand the road context: for example, at crosswalks and sidewalks people are likely to be standing or walking, so our system gives priority to behavior prediction there. Finally, using the Google Geolocation API, we extracted the coordinates of the starting and ending points, as well as the coordinates of every node along the path. The Geolocation API primarily delivers a location and accuracy radius based on data from cell towers and Wi-Fi nodes detected by the mobile client. This allows the vehicle to arrive at its destination on its own.
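Once the node coordinates of a route are known, the distance still to travel can be accumulated pairwise with the haversine formula. The sketch below is illustrative rather than part of the actual navigation stack, and the node coordinates are made-up values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_length_km(nodes):
    """Sum pairwise distances along a list of (lat, lon) route nodes."""
    return sum(haversine_km(*a, *b) for a, b in zip(nodes, nodes[1:]))

# Illustrative nodes roughly along a Dhaka route (made-up coordinates).
nodes = [(23.7808, 90.4074), (23.7925, 90.4078), (23.8103, 90.4125)]
print(round(route_length_km(nodes), 2))
```

The same accumulation can be re-run as nodes are consumed, giving a cheap remaining-distance estimate between GPS fixes.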
Fig. 1. Workflow Diagram
IV. DATA SOURCE
The data we collected are primarily our own, though some secondary data have also been used. We drove on the roads near BRAC University and Hatirjheel and recorded video of road lanes and vehicles. Our supervisor, Dr. Md. Khalilur Rahman, also collected primary data while driving on the Dhaka–Rajshahi highway, and we collected nighttime driving footage from our seniors.
A. Primary data
We obtained primary data using our vehicle's camera, and some video footage was provided by our supervisor. To begin, we used our own photographs and recorded live video for the parking system using Canny edge detection, and we used the videos to recognize road lanes on urban streets. The LIDAR output that we recorded while scanning the surrounding environment is also primary data.
B. Secondary data
The secondary data were collected mostly from Google and Kaggle: the road sign images used are secondary data collected from Google, and we used some pre-recorded road lane detection data from Kaggle.
V. DATA ACQUISITION
To drive a car autonomously, the system must ingest a very large amount of data from various sensors. The main challenge for researchers is processing the data, making decisions, and sending commands to output devices such as the wheels and steering accordingly. Very high precision is required here, because a single point of failure can cause devastating accidents. So we are using precise sensors and trying to optimize our code to eliminate errors.
Figure 2 shows the sensors installed on the car. LIDAR is installed on top of the car, and two cameras are positioned on the windshield. Six sonar sensors have been installed: two on the sides, and two each on the front and rear portions of the car.
Fig. 2. Position of the Sensors
VI. SENSOR BASED APPROACH
Our 3D mapping device is the RPLIDAR A1, a laser scanner that scans in all directions. This device is used to identify obstructions on all sides of the car. When driving forward, it can detect obstructions in its path and vehicles in the next lane, and it can also determine which car is close behind. Python was used to connect the LIDAR to the onboard central processor, and matplotlib was used to create the 3D display.
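RPLIDAR-style scanners report (angle, distance) pairs, which must be converted to Cartesian points before plotting with matplotlib. A minimal sketch of that conversion (the scan values here are synthetic, and the plotting step is skipped if matplotlib is unavailable):

```python
import math

def scan_to_xy(scan):
    """Convert (angle_deg, distance_mm) pairs, as reported by an
    RPLIDAR-style scanner, into Cartesian (x, y) points in mm."""
    points = []
    for angle_deg, dist_mm in scan:
        theta = math.radians(angle_deg)
        points.append((dist_mm * math.cos(theta), dist_mm * math.sin(theta)))
    return points

# Synthetic scan: obstacles ahead, to the left, and behind.
scan = [(0.0, 1000.0), (90.0, 500.0), (180.0, 250.0)]
xs, ys = zip(*scan_to_xy(scan))

# Render a scatter map if matplotlib is installed; skip otherwise.
try:
    import matplotlib.pyplot as plt
    plt.scatter(xs, ys, s=2)
    plt.savefig("lidar_map.png")
except Exception:
    pass
```

The real pipeline would feed `iter_scans()`-style output from the LIDAR driver through the same conversion before rendering.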
Fig. 3. Raw data from LIDAR
Fig. 4. Offline Map
For mapping, we have used Navit, an open-source navigation system with GPS tracking. Navit works with GPS and compass values; we used a Ublox NEO-M8N GPS and compass module to obtain latitude, longitude, and heading. It calculates the optimal route to the destination and generates directions using GPS tracking.
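GPS modules like the NEO-M8N stream NMEA 0183 sentences over serial, so latitude and longitude have to be parsed out of GGA sentences before they can be used. A small parser sketch (the example sentence is the standard one from NMEA documentation, not our own log):

```python
def parse_gga(sentence):
    """Extract decimal-degree (lat, lon) from an NMEA GGA sentence,
    where coordinates are encoded as ddmm.mmmm / dddmm.mmmm."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")

    def to_decimal(value, hemisphere, degree_digits):
        degrees = float(value[:degree_digits])
        minutes = float(value[degree_digits:])
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ("S", "W") else decimal

    lat = to_decimal(fields[2], fields[3], 2)  # latitude: ddmm.mmmm
    lon = to_decimal(fields[4], fields[5], 3)  # longitude: dddmm.mmmm
    return lat, lon

gga = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
lat, lon = parse_gga(gga)
print(round(lat, 4), round(lon, 4))
```

In practice a library such as pynmea2 does this parsing, but the degrees-plus-minutes encoding above is the part that most often trips people up.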
VII. ALGORITHM BASED APPROACH
A. Edge detection and vehicle detection
Canny Edge Detection is a method for detecting edges in images: it takes a grayscale image as input and uses an intensity-gradient technique. To create the parking system, OpenCV's Canny edge detector is used to detect the cars' edges. The detector takes raw picture and video input, converts it to grayscale, and extracts the edges. Haar Cascade classifiers are likewise an effective method for object detection. We used Haar Cascade classifiers to detect the vehicles around the parking spot, determine the distance between them, and calculate whether the spot has enough space to park a vehicle. We used detectMultiScale() to detect the objects and measured their distance by extracting the pixel values of the detected bounding boxes, which is not one hundred percent accurate, so we then used corner detection with the Harris Corner Detection method in OpenCV to pinpoint the corners of the vehicles.
Fig. 5. Vehicle detection using Haar Cascade classifiers
The figure shows the output for both the grayscale and raw images after applying the Harris corner detection method. The output shows the corner points of the vehicles, which are nearly consistent across vehicles, so the corners of a vehicle and the distance between vehicles can be extracted easily and more efficiently.
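The distance check described above reduces to comparing the horizontal gap between detected bounding boxes against the space a car needs. A simplified sketch, assuming `detectMultiScale()` has already returned (x, y, w, h) boxes; the pixels-per-metre calibration and the sample boxes are made-up values:

```python
def free_gaps(boxes, min_gap_px):
    """Given (x, y, w, h) vehicle boxes along a kerb, return the
    (start_x, width) of horizontal pixel gaps between consecutive
    cars that are wide enough to park in."""
    boxes = sorted(boxes, key=lambda b: b[0])
    gaps = []
    for left, right in zip(boxes, boxes[1:]):
        gap = right[0] - (left[0] + left[2])  # next x minus this box's right edge
        if gap >= min_gap_px:
            gaps.append((left[0] + left[2], gap))
    return gaps

# Hypothetical detections: two parked cars with a wide gap between them.
boxes = [(10, 40, 120, 80), (400, 42, 110, 78)]
PIXELS_PER_METRE = 50          # made-up calibration constant
CAR_LENGTH_M = 4.5
min_gap = int(CAR_LENGTH_M * PIXELS_PER_METRE)
print(free_gaps(boxes, min_gap))  # one 270 px gap starting at x=130
```

Pixel distances only map to metres under a fixed camera geometry, which is why the Harris corner refinement matters: tighter box edges give a more trustworthy gap estimate.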
B. Road sign detection
In the system used to detect road signs, we first identify Hough circles to locate the sign in the real-time video. After a Hough circle is identified, its region is divided into zones. The initial check identifies the stop sign, which tends to be red: we compute the dominant color, and if its value is more than 100 it is red, which indicates a probable stop sign. If it is not more than 100, we go to the square section to look for direction signs, which we identify using dominant colors below 100. For example, we check the zones for their dominant colors, and according to the combination across the zones we can identify the road sign.
For the go-forward sign, we check whether the dominant color in zone 1 is greater than in zone 0 and zone 2; if so, the output is forward. If that condition is not met, we check whether the dominant color in zone 0 is greater than in zone 2: if it is, the sign is forward-left, and otherwise the sign is forward-right. These are the basic direction road signs to detect, and using OpenCV we run the detection on real-time footage.
In the code, we use cv2 for OpenCV along with numpy and itemfreq. We activate the camera and set it up for real-time streaming. OpenCV is a cross-platform computer vision library used to access a camera and extract data from frames, videos, or pictures.
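The zone comparisons above can be condensed into one small decision function. The threshold of 100 and the zone indices follow the description in the text, but the exact channel and zone extraction in the real system may differ, so treat this as a sketch:

```python
def classify_sign(dominant, zone_values):
    """Classify a detected circular sign from its overall dominant-colour
    value and per-zone dominant colours, following the decision rules
    described in the text (threshold 100 separates red stop signs from
    direction signs)."""
    if dominant > 100:
        return "stop"
    zone0, zone1, zone2 = zone_values
    if zone1 > zone0 and zone1 > zone2:
        return "forward"
    if zone0 > zone2:
        return "forward-left"
    return "forward-right"

print(classify_sign(150, (0, 0, 0)))    # red-dominant -> stop
print(classify_sign(80, (10, 90, 20)))  # zone 1 dominant -> forward
print(classify_sign(80, (70, 30, 20)))  # zone 0 > zone 2 -> forward-left
```

Keeping the rules in a pure function like this makes the logic easy to unit-test separately from the Hough circle and color extraction stages.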
C. Road lanes detection using Keras model
Previously we used OpenCV for road lane detection, but we have now moved from OpenCV to deep learning for several reasons. Foremost, our OpenCV-based approach did not reach the expected accuracy on Bangladeshi road conditions: when we fed it our test data, the road lanes were often not detected properly, as lane markings on Bangladeshi roads are not clearly drawn. Both OpenCV and Keras are open-source Python libraries, each with its own pros and cons. After trying Keras for lane detection we found a significant improvement over OpenCV, so we moved to a CNN-based approach and used TensorFlow and Keras to detect road lanes more precisely. In our task, Keras performs faster and more efficiently than OpenCV, and it offers user-friendliness, extensibility, modularity, and built-in evaluation and prediction features. We have collected both daytime and nighttime road data, and we now get better detection on Bangladeshi roads.
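Given the per-pixel lane mask that a Keras segmentation model predicts, rendering the green lane zone reduces to blending a green layer into the frame wherever the mask is set. A numpy sketch (the blend weight is our choice, and the real pipeline may differ):

```python
import numpy as np

def overlay_green_zone(frame, lane_mask, alpha=0.6):
    """Blend a green zone into an RGB frame wherever the predicted
    lane mask is set. `frame` is HxWx3 uint8, `lane_mask` is HxW bool."""
    green = np.zeros_like(frame)
    green[..., 1] = 255  # pure green layer
    out = frame.copy()
    out[lane_mask] = (
        (1 - alpha) * frame[lane_mask] + alpha * green[lane_mask]
    ).astype(np.uint8)
    return out

# Tiny synthetic example: a 4x4 black frame with a 2x2 lane region.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
result = overlay_green_zone(frame, mask)
print(result[1, 1])  # green channel raised inside the lane zone
```

Because the overlay is independent of the model, the same function works whether the mask comes from the Keras network or from the earlier OpenCV pipeline.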
Fig. 6. Road Lane detection (Night)
Fig. 7. Road Lane detection (Day)
VIII. RESULT ANALYSIS AND DISCUSSION
To discuss the end results, let us talk about the mapping first. The LIDAR, using the laser reflection time interval, gives us a map of the surroundings within a short range. This system was tested for high-accuracy obstacle detection and distance measurement on the road, to avoid dense traffic and collisions. The Navit and Ublox NEO-M8N mapping system, on the other hand, gives us the routing and direction of the route the vehicle travels without any internet connection. The lane detection system with Keras detects the road lanes, identifying them with a green zone. It was tested on both urban (unstructured) and city (structured) roads. On urban roads the lane markings are less visible, so the system sometimes errs by not identifying the correct lane direction because of dead zones and improper lane markers, and sometimes the zone extends outside the lane. But on city roads, where the lane markings are clear and visible, the system identifies the lane more accurately and renders the identified lane zone very well. Road signs are identified almost as soon as they become visible to the camera, thanks to the accuracy of the Hough circles and Hough parameter maxima; the error rate is significantly low and identification is very fast with OpenCV. The parking system helps identify whether the parking area the vehicle is going to park in is vacant or unavailable: it identifies the zone with Canny edge detection and the Haar cascade classifier, checking the distance and empty space between already-parked vehicles, parking spaces, or obstacles in the target zone. The system detects the vehicles and identifies the free zone after extracting continuous single frames from the camera.
IX. CONCLUSION
Sensors, actuators, complicated algorithms, machine learning systems, and powerful computers are all used by autonomous cars to carry out their functions. The electronic control unit processes the signals received from the various sensors to make judgments and drive the algorithms. As AVs run entirely on their own and are nothing more than intelligently constructed systems, security is a major worry for their users: it is possible to take control of such a system remotely, so all autonomous vehicle manufacturers must be vigilant and strict about it. In this modern world, autonomous vehicles are needed for a safer transportation system and to get rid of tedious driving, which costs millions of lives. This is therefore a pressing problem to solve with appropriate cutting-edge technology to make people's lives secure.
REFERENCES
[1] C. Malmborg, “Conceptualizing tools for autonomous vehicle stor-
age and re-trieval systems,”International Journal of Production Re-
search - INT J PRODRES, vol. 40, pp. 1807–1822, May 2002.doi:
10.1080/00207540110118668.
[2] G. Heredia and A. Ollero, “Stability of autonomous vehicle path tracking
withpure delays in the control loop,”Advanced Robotics, vol. 21, pp.
23–50, Jan.2007.doi: 10.1163/156855307779293715.
[3] G. Marchet, M. Melacini, S. Perotti, and E. Tappia, “Development of a
frame-work for the design of autonomous vehicle storage and retrieval
systems,”In-ternational Journal of Production Research, vol. 51, Jul.
2013.doi: 10.1080/00207543.2013.778430.
[4] J. Millar, “An ethics evaluation tool for automating ethical decision-
makingin robots and self-driving cars,”Applied Artificial Intelligence,
vol. 30, no. 8,pp. 787–809, 2016.doi: 10.1080/08839514.2016.1229919.
eprint:https://doi.org/10.1080/08839514.2016.1229919. [Online]. Avail-
able:https://doi.org/10.1080/08839514.2016.1229919.
[5] P. Cao, Y. Hu, T. Miwa, T. Morikawa, and X. Liu, “An optimal
mandatorylane change decision model for autonomous vehicles in urban
arterials,”Jour-nal of Intelligent Transportation Systems, vol. 21, Apr.
2017.doi: 10.1080/15472450.2017.1315805.
[6] Q. Cui, R. Ding, X. Wu, and B. Zhou, “A new strategy for rear-end
col-lision avoidance via autonomous steering and differential braking
in high-way driving,”Vehicle System Dynamics, vol. 58, pp. 1–32, Apr.
2019.doi:10.1080/00423114.2019.1602732.
[7] M. Cunneen, M. Mullins, and F. Murphy, “Autonomous vehi-
cles and embed-ded artificial intelligence: The challenges of fram-
ing machine driving deci-sions,”Applied Artificial Intelligence, vol.
33, no. 8, pp. 706–731, 2019.doi:10.1080/08839514.2019.1600301.
eprint:https://doi.org/10.1080/08839514.2019.1600301. [Online]. Avail-
able:https://doi.org/10.1080/08839514.2019.1600301.
[8] X. He, Y. Liu, C. Lv, X. Ji, and Y. Liu, “Emergency
steering control of au-tonomous vehicle for collision avoidance
and stabilisation,”Vehicle SystemDynamics, vol. 57, no. 8,
pp. 1163–1187, 2019.doi: 10.1080/00423114.2018.1537494.
eprint:https://doi.org/10.1080/00423114.2018.1537494. [On-
line].Available:https://doi.org/10.1080/00423114.2018.1537494.26
[9] A. Koike and Y. Sueda, “Contents delivery for autonomous driving cars
inconjunction with car navigation system, in2019 20th Asia-Pacific
NetworkOperations and Management Symposium (APNOMS), 2019,
pp. 1–4.doi: 10.23919/APNOMS.2019.8893082.
[10] U. Montanaro, S. Dixit, S. Fallah, M. Dianati, A. Stevens, D.
Oxtoby, and A.Mouzakitis, “Towards connected autonomous driv-
ing: Review of use-cases,”Vehicle System Dynamics, vol. 57,
no. 6, pp. 779–814, 2019.doi: 10.1080/00423114.2018.1492142.
eprint:https://doi.org/10.1080/00423114.2018.1492142. [Online]. Avail-
able:https://doi.org/10.1080/00423114.2018.1492142.
[11] A. Bennajeh, S. Bechikh, L. B. Said, and S. Aknine, “Multi-agent
coopera-tion for an active perception based on driving behavior: Appli-
cation in a car-following behavior,”Applied Artificial Intelligence, vol.
34, no. 10, pp. 710–729, 2020.doi: 10.1080/08839514.2020.1771837.
eprint:https://doi.org/10.1080/08839514.2020.1771837. [Online]. Avail-
able:https://doi.org/10.1080/08839514.2020.1771837.
[12] F. Cugurullo, R. A. Acheampong, M. Gu ´
eriau, and I. Dusparic, “The
transitionto autonomous cars, the redesign of cities and the future
of urban sustainabil-ity,”Urban Geography, pp. 1–27, Apr. 2020.doi:
10.1080/02723638.2020.1746096.
[13] A. Faisal, T. Yigitcanlar, M. Kamruzzaman, and A. Paz, “Mapping
two decadesof autonomous vehicle research: A systematic
scientometric analysis,” Journalof Urban Technology, vol. 0,
no. 0, pp. 1–30, 2020.doi: 10.1080/10630732.2020.1780868.
eprint:https://doi.org/10.1080/10630732.2020.1780868. [On-
line].Available:https://doi.org/10.1080/10630732.2020.1780868.
[14] T. Raviteja and R. V. I.S, An introduction of autonomous vehicles and
abrief survey,”Journal of Critical Reviews, vol. 7, pp. 196–202, Jun.
2020.doi:10.31838/jcr.07.13.33.
[15] C. Saghir and G. Sands, “Realizing the potential of autonomous vehicles,” Planning Practice and Research, vol. 35, no. 3, pp. 267–282, 2020. doi: 10.1080/02697459.2020.1737393.
[16] E. Wigley and G. Rose, “Who’s behind the wheel? Visioning the future users and urban contexts of connected and autonomous vehicle technologies,” Geografiska Annaler: Series B, Human Geography, vol. 102, no. 2, pp. 155–171, 2020. doi: 10.1080/04353684.2020.1747943.
... In the last several years, an increase in accessible storage and computation capabilities has enabled deep learning to achieve success in supervised perception tasks, such as image detection. A neural network, after training for days or even weeks on a big data set, can be capable of real-time inference with a model size no greater than a few hundred MB [1]. State-of-the-art neural networks for computer vision require huge training sets paired with extended networks capable of modeling such immense amounts of data. ...
... Our modifications to the network happen on the dense layers, which are changed to convolution, as reported in Sermanet et al. [1]. With our larger picture size of 640 × 480, this transforms the prior final feature response maps of size 1 × 1 × 4096 into maps of size 20 × 15 × 4096. ...
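The dense-to-convolution conversion described in the snippet can be illustrated with a small NumPy sketch (not the paper's code; the layer sizes below are shrunken toy values): a dense layer's weight matrix, applied independently at every spatial position of a feature map, is exactly a 1 × 1 convolution, which is what turns a single 1 × 1 × 4096 response into a full 20 × 15 × 4096 map once the input image is enlarged.

```python
import numpy as np

def dense_as_1x1_conv(feature_map, weights, bias):
    """Apply a dense layer's weights as a 1x1 convolution.

    feature_map: (H, W, C_in) response map
    weights:     (C_out, C_in) dense-layer weight matrix
    bias:        (C_out,)
    Returns an (H, W, C_out) map: the dense layer slides over every
    spatial position instead of collapsing the input to one vector.
    """
    return np.einsum('hwc,oc->hwo', feature_map, weights) + bias

# Toy dimensions standing in for the 20 x 15 x 4096 map in the text
# (channel counts shrunk so the example runs quickly).
fmap = np.random.rand(15, 20, 64)   # H=15, W=20, C_in=64
W = np.random.rand(128, 64)         # dense layer: 64 -> 128 features
b = np.zeros(128)

out = dense_as_1x1_conv(fmap, W, b)
print(out.shape)  # (15, 20, 128)
```

At any single position the result equals the ordinary dense layer applied to that position's feature vector, so the "fully convolutional" network reuses the trained weights unchanged.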
Article
Full-text available
This study investigates the use of deep learning methods, specifically Convolutional Neural Networks (CNNs), for real-time detection of vehicles and lane boundaries in highway driving scenarios. It evaluates the performance of a modified OverFeat CNN architecture on a comprehensive dataset of annotated frames captured by a variety of sensors, including cameras, LIDAR, radar, and GPS. The system proves robust in detecting vehicles and predicting lane shapes in 3D while achieving operating rates of over 10 Hz on various GPU setups. Key findings include highly accurate vehicle bounding-box predictions, resistance to occlusions, and efficient lane-boundary identification. Overall, the research underlines the potential applicability of this system to autonomous driving, presenting a promising avenue for future developments in the field.
Technical Report
Full-text available
An autonomous car is also called a self-driving car, driverless car, or robotic car; whatever the name, the aim of the technology is the same. Historically, autonomous vehicle experiments began as early as the 1920s with radio-controlled cars, and road trials started in the 1950s. Over the past few years automation technology has advanced steadily and now touches every aspect of daily life, including agriculture, medicine, transportation, the automobile and manufacturing industries, and the IT sector. For the last ten years the automobile industry has pursued autonomous vehicle research (Waymo/Google, Uber, Tesla, Renault, Toyota, Audi, Volvo, Mercedes-Benz, General Motors, Nissan, Bosch, Continental, and others), and Level-3 autonomous cars came out in 2020. Researchers are solving the remaining challenges every day. In the future, robots may manufacture autonomous cars using IoT technology based on customer requirements, and these vehicles are expected to be very safe and comfortable in transportation systems, whether carrying people or cargo. Autonomous vehicles need continuous data updates, so IoT and artificial intelligence help share information from device to device. This review paper addresses the technologies and strategies used in autonomous vehicles through literature review and identifies the gaps between them.
Article
Full-text available
Autonomous cars controlled by an artificial intelligence are increasingly being integrated in the transport portfolio of cities, with strong repercussions for the design and sustainability of the built environment. This paper sheds light on the urban transition to autonomous transport, in a threefold manner. First, we advance a theoretical framework to understand the diffusion of autonomous cars in cities, on the basis of three interconnected factors: social attitudes, technological innovation and urban politics. Second, we draw upon an in-depth survey conducted in Dublin (1,233 respondents), to provide empirical evidence of (a) the public interest in autonomous cars and the intention to use them once available, (b) the fears and concerns that individuals have regarding autonomous vehicles and (c) how people intend to employ this new form of transport. Third, we use the empirics generated via the survey as a stepping stone to discuss possible urban futures, focusing on the changes in urban design and sustainability that the transition to autonomous transport is likely to trigger. Interpreting the data through the lens of smart and neoliberal urbanism, we picture a complex urban geography characterized by shared and private autonomous vehicles, human drivers and artificial intelligences overlapping and competing for urban spaces.
Article
Full-text available
This paper proposes a new safety system named emergency steering assist (ESA), which consists of an autonomous steering subsystem and a differential braking subsystem. The control system is developed to mediate the conflicting objectives of vehicle stabilisation and rear-end collision avoidance in highway driving. Instead of predefining a collision-free trajectory for the entire process of collision avoidance, the process is divided into two stages. In stage 1, where the vehicle is guided into the adjacent lane with a constant centripetal acceleration to avoid collision, the steering manoeuvre is determined by a feedforward controller based on the steering dynamics. In stage 2, where the vehicle is guided along the centreline of the adjacent lane, the steering manoeuvre is determined by a controller based on the theory of model predictive control. In addition, the differential braking subsystem is designed with comprehensive treatment of the requirements of collision avoidance and vehicle stabilisation. Finally, the simulation results demonstrate that the proposed ESA system can effectively achieve a better balance between an emergency collision avoidance manoeuvre and vehicle stabilisation at high speeds in different conditions. A hardware-in-the-loop experiment developed by the authors is used to validate the real-time performance and effectiveness of the proposed control scheme.
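For intuition on stage 1, where the vehicle is guided at a constant centripetal acceleration, a kinematic bicycle model gives a feedforward steering angle directly. The sketch below is an illustrative approximation, not the paper's controller; the wheelbase value and the symbols v and a_y are assumptions. The turn radius follows from R = v²/a_y, and the steering angle from δ = atan(L/R).

```python
import math

def feedforward_steer(v, a_y, wheelbase):
    """Steering angle (rad) that yields centripetal acceleration a_y
    at speed v, from the kinematic bicycle model.

    R = v**2 / a_y      turn radius for the commanded acceleration
    delta = atan(wheelbase / R)
    """
    radius = v ** 2 / a_y
    return math.atan(wheelbase / radius)

# Example: 25 m/s (90 km/h), 3 m/s^2 lateral acceleration, 2.7 m wheelbase.
delta = feedforward_steer(25.0, 3.0, 2.7)
print(math.degrees(delta))  # roughly 0.74 degrees
```

At highway speeds the required steering angle is small, which is why the paper can treat stage 1 with a simple feedforward law and reserve model predictive control for the lane-keeping stage.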
Article
Full-text available
With the advent of autonomous vehicles society will need to confront a new set of risks which, for the first time, includes the ability of socially embedded forms of artificial intelligence to make complex risk mitigation decisions: decisions that will ultimately engender tangible life and death consequences. Since AI decisionality is inherently different to human decision-making processes, questions are therefore raised regarding how AI weighs decisions, how we are to mediate these decisions, and what such decisions mean in relation to others. Therefore, society, policy, and end-users, need to fully understand such differences. While AI decisions can be contextualised to specific meanings, significant challenges remain in terms of the technology of AI decisionality, the conceptualisation of AI decisions, and the extent to which various actors understand them. This is particularly acute in terms of analysing the benefits and risks of AI decisions. Due to the potential safety benefits, autonomous vehicles are often presented as significant risk mitigation technologies. There is also a need to understand the potential new risks which autonomous vehicle driving decisions may present. Such new risks are framed as decisional limitations in that artificial driving intelligence will lack certain decisional capacities. This is most evident in the inability to annotate and categorise the driving environment in terms of human values and moral understanding. In both cases there is a need to scrutinise how autonomous vehicle decisional capacity is conceptually framed and how this, in turn, impacts a wider grasp of the technology in terms of risks and benefits. This paper interrogates the significant shortcomings in the current framing of the debate, both in terms of safety discussions and in consideration of AI as a moral actor, and offers a number of ways forward.
Article
Full-text available
Collision avoidance and stabilisation are two of the most crucial concerns when an autonomous vehicle finds itself in emergency situations, which usually occur in a short time horizon and require large actuator inputs, together with highly nonlinear tyre cornering response. In order to avoid collision while stabilising autonomous vehicle under dynamic driving situations at handling limits, this paper proposes a novel emergency steering control strategy based on hierarchical control architecture consisting of decision-making layer and motion control layer. In decision-making layer, a dynamic threat assessment model continuously evaluates the risk associated with collision and destabilisation, and a path planner based on kinematics and dynamics of vehicle system determines a collision-free path when it suddenly encounters emergency scenarios. In motion control layer, a lateral motion controller considering nonlinearity of tyre cornering response and unknown external disturbance is designed using tyre lateral force estimation-based backstepping sliding-mode control to track a collision-free path, and to ensure the robustness and stability of the closed-loop system. Both simulation and experiment results show that the proposed control scheme can effectively perform an emergency collision avoidance manoeuvre while maintaining the stability of autonomous vehicle in different running conditions.
Article
Autonomous vehicles (AV) have become a symbol of futuristic and intelligent transport innovation. This new driving technology has received heightened attention from academic, public, and private sectors. Nonetheless, a big challenge limiting a clear understanding of AV research is its scale. A large volume of literature is produced—covering various fields. This paper aims to map out the research on AV for a better understanding of the trends, patterns, and interconnections, and it critically reflects on their implications for research. A scientometric analysis technique is applied to analyze 4,645 papers published between 1998 and 2017. The findings disclose that (a) 87.7 percent of the AV studies was conducted by educational institutes; (b) Europe is the most productive continent in AV research with a 35.9 percent share of publications; (c) North America is the most influential continent in AV research, receiving 41.1 percent of the citations; (d) Over 50 percent of the studies were conducted during the last three years of the analysis period; (e) Urban and social contexts of AV research are still at their early stage; and (f) Relatively limited collaboration and knowledge sharing between academia and industry exist.
Article
Perception is presented as a predominant concern in the functioning of a driving system, where it is necessary to understand how the information, events, and actions of each agent influence the state of the environment and the objectives of the driver, both immediately and in the near future. In this context, the paper presents a driving model composed of five layers that ensure the autonomy and road safety of a driver agent. In particular, the article focuses on the concept of perception, which corresponds to the first three layers of the driving model: visual perception, comprehension, and projection. The execution of these three layers is based on the driving behavior adopted by the driver agent, which here is car-following behavior. Furthermore, the paper presents two simulation scenarios: the first based on urban-area conditions, and the second conducted using the Next Generation SIMulation (NGSIM) dataset of a highway in Los Angeles, California. The experimental results demonstrate the effectiveness of the driving model, based on the imitation of human behavior, in reducing the duration of perception.
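The abstract does not specify the car-following law used. As a hedged illustration only, the Intelligent Driver Model (IDM), a standard car-following law and not the paper's model, shows how a follower's acceleration can be computed from its speed, the gap to the leader, and the closing speed; all parameter values below are illustrative defaults.

```python
import math

def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model (IDM) car-following acceleration.

    v      : follower speed (m/s)
    v_lead : leader speed (m/s)
    gap    : bumper-to-bumper distance to the leader (m)
    v0, T, a, b, s0 : desired speed, time headway, max acceleration,
                      comfortable deceleration, minimum gap.
    """
    dv = v - v_lead  # closing speed (positive when approaching)
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Follower at 20 m/s closing on a leader at 15 m/s with a 30 m gap:
acc = idm_acceleration(20.0, 15.0, 30.0)
print(acc)  # negative: the follower brakes
```

The desired-gap term s_star grows with speed and closing rate, so the same law yields smooth free-road acceleration and strong braking when the gap shrinks, a behavior consistent with the imitation-of-human-driving framing above.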
Article
Connected and Autonomous Vehicles (CAVs) are promised by their developers to transform mobilities, making travel accessible to all, including those unable to drive due to age, affordability, or disability, and thereby widen the distribution of what Urry calls ‘network capital’. This paper interrogates promotional visualizations about CAVs as they imagine future automated mobilities and the scaling up of the technologies from small trials to mass roll-out. It analyses a wide range of images from a CAV trial in a UK city and demonstrates that these images reinforce rather than disrupt traditional gendered associations of automobility. The study further notes other ways in which visualizations of CAV-enabled network mobility reiterate existing network capital inequalities. It also pays careful attention to the background urban environment in which CAVs are pictured. The paper argues that an absence of people and place specificity enables CAV technologies to be imagined as being used in other locations and contexts. Hence the visualizations of CAVs that picture only specific forms of corporeal mobility also work to envision the mobility of entrepreneurial capital, as the software and hardware behind the driverless vehicle are shown as transferable to, and profitable in, different contexts and situations.
Article
The rapid development of autonomous vehicle technologies has led to widespread interest in how and when they will affect mobility, lifestyles and the built environment. Despite the potential for bringing about significant changes, city and transportation planners have yet to fully incorporate autonomous vehicles in their planning activities. An online survey of planners found considerable awareness of, as well as agreement about, the potential effects of these developing technologies. To fully realize the potential benefits of autonomous vehicles, however, planners will need to be more proactive in including autonomous vehicles in their plans.