Fuzzy-Based Clustering for Larger-Scale Deep Learning in
Autonomous Systems Based on Fusion Data
Ibrahim Najem1,2*, Tabarak Ali Abdulhussein3, M. H. Ali4, Asaad Shakir Hameed5,
Inas Ridha Ali6, M. Altaee7
1Department of Computer Techniques Engineering, Al-turath University College, Baghdad 10021, Iraq,
2MEU Research Unit, Middle East University, Amman 11831, Jordan, 3Department of Accounting, College
of Administrative and Financial Sciences, Imam Ja’afar Al-Sadiq University, Baghdad, Iraq, 4Department
of Medical Device Technology Engineering, National University of Science and Technology, Thi Qar, Iraq,
5Department of Performance Quality, Mazaya University College, Thi-Qar, Iraq, 6Department of Business
Administration, Al-Mustaqbal University College, Babylon, Hilla, 51001, Iraq, 7Department of Medical Device
Technology Engineering, Alfarahidi University, Baghdad, Iraq
Emails: ibrahim.najim@turath.edu.iq; tabarak.ali@sadiq.edu.iq; mohammed.hasan@nust.edu.iq;
asaad.hameed@mpu.edu.iq; inas.ridha@uomus.edu.iq; m.altaee@alfarahidi.edu.iq;
Abstract
Problems in autonomous systems may be tackled with the help of the attention-based siamese fully convolutional
deep learning (AS-FC-DL) approach, which integrates autonomous fuzzy clustering and deep learning methods.
The system can anticipate human behavior on crowded roadways by employing these techniques to recognize
patterns and extract features from complicated unsupervised data. Each image point’s membership value is
associated with the cluster's center using the fuzzy clustering methodology in the AS-FC-DL approach. Using
least-squares methods, this approach finds the optimal position for each data point within a probability space,
which may be anywhere among multiple clusters. Data points from an unlabeled dataset may be organized into
distinct groups using a deep learning technique called cluster analysis. Data fusion from many sources, including
sensor data and video data, can improve the AS-FC-DL method’s precision and performance. The algorithm is able
to deliver an all-encompassing and precise evaluation of human behavior on crowded roadways by fusing data
from many sources. The AS-FC-DL approach may also be employed in autonomous vehicles (AV) to help them
learn from their experiences and improve their performance. Using reinforcement learning, a model for AV driving
may be constructed. The AS-FC-DL approach helps the self-driving car traverse the area with increased precision
and efficiency by allowing it to recognize structures and extract features from complicated unsupervised data.
Journal of Intelligent Systems and Internet of Things (JISIoT)
Vol. 09, No. 01, PP. 69-83, 2023
DOI: https://doi.org/10.54216/JISIoT.090105
Keywords: Autonomous Vehicles; Fuzzy clustering; Deep Learning; Fusion Data; Large-scale DL
Received: January 25, 2023 Revised: April 13, 2023 Accepted: June 09, 2023
1. Introduction
Large-scale deep learning for fusion-based autonomous systems relies heavily on fuzzy-based clustering. It is challenging for autonomous systems to efficiently handle and analyze the massive volumes of data provided by their many sensors. The effective operation of autonomous systems relies heavily on the accurate grouping of data pieces, a task made easier by the use of fuzzy-based clustering. Fuzzy-based clustering is useful in autonomous systems for fusing data from many sources, such as sensors, cameras, and various other inputs, into a unified picture of the surrounding environment. This combined information aids in forming more informed judgments, and the precision with which the data are clustered is increased by the use of fuzzy-based clustering [1].
Autonomous vehicles (AVs) can be used in various ways, including substituting for humans in dangerous locations, executing military operations, and carrying out mundane duties in industry [2]. Human performance is dependable while operating ground vehicles, and sudden changes in a driver's surroundings are usually interpreted quickly [1]. One of the benefits of using fuzzy logic to operate a vehicle over other control methods is that human knowledge and experience can be integrated into the relationships among the relevant variables through linguistic rules [3]. There are four key properties of intelligent autonomous systems: they are self-aware and sensitive to their work environment, they can quickly adapt to new circumstances, and they are dependable, adhering to robust security [4] and privacy principles in their interactions with others and their data [5,6]. It is common to encounter classification issues while working in highly dynamic contexts with many classes [7]. Massive data streams can only be analyzed indirectly, for example by sampling or classification. Sampling can give skewed findings, since the size and number of samples obtained from the data have a large influence on accuracy [8,9]. A good alternative for organizing and learning from massive volumes of data is clustering [10].
One of the most difficult problems in computer vision and big-data processing is identifying thousands of visual data elements [11]. The standard one-versus-all multi-class paradigm is time- and space-intensive because of the large number of classes [12]. Because the time complexity rises linearly with the number of categories, real-time practical applications such as autonomous robots find it difficult to train and test [13]. These self-driving systems need high bandwidth and minimal latency [14].
Although AVs have been used in various settings, their primary function is to travel from one location to another while avoiding collisions and identifying obstacles [15]. Obstacle recognition and collision avoidance remain the main open issues, which is why they are so difficult to solve at this point [16,17]. For an AV to travel from one location to another, it must have a system to recognize its surroundings [18]. This necessitates the use of visual technologies that allow the AV to recognize and avoid obstacles [19,20]. A vision system allows the robot to cope better with an environment marked by uncertainties, an inherent characteristic of urban or rural settings, even though other means that rely on a model representation of the world are available for obstacle-free path planning, as will be seen shortly [21].
AVs have many possible uses, many of them critical. For example, this technology can be used in dangerous procedures or scenarios. Partial or full control of passenger cars is another common use for AV control. AV technology has the potential to fundamentally revolutionize transportation. In cars and light vehicles, this technology will likely reduce crash, pollution, and congestion costs. The ever-increasing number of networked devices necessitates networking technologies in several sectors. AV control can reduce traffic congestion by eliminating the needless stop-and-go behavior caused by dense vehicle traffic or accident scenes. Binary bit vectors of a fixed length are an effective approximation strategy for data items. Based on their Hamming distance from one another, binary vectors are grouped in the FC approach using fuzzy matching, which is achieved by reversing error-correcting codes. The self-driving technology stack includes recurrent neural networks as one of its most important components. An onboard video feed assesses road signs, traffic lights, obstructions, and pedestrians. On the other hand, deep learning can make errors when identifying objects in images.
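To make the binary-vector grouping above concrete, here is a minimal Python sketch (the greedy grouping rule and the distance threshold are illustrative assumptions, not the paper's implementation) that clusters fixed-length bit vectors by Hamming distance:

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two equal-length binary vectors."""
    return int(np.count_nonzero(a != b))

def group_by_hamming(vectors: np.ndarray, threshold: int = 2) -> list[list[int]]:
    """Greedy grouping: a vector joins the first group whose representative
    is within `threshold` bits; otherwise it starts a new group."""
    groups: list[list[int]] = []
    reps: list[np.ndarray] = []
    for i, v in enumerate(vectors):
        for g, r in zip(groups, reps):
            if hamming(v, r) <= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
            reps.append(v)
    return groups

# Toy usage: four 12-bit vectors, two near-duplicate pairs.
data = np.array([
    [1,0,1,1,0,0,1,0,1,1,0,0],
    [1,0,1,1,0,0,1,0,1,0,0,0],   # 1 bit away from the first
    [0,1,0,0,1,1,0,1,0,0,1,1],
    [0,1,0,0,1,1,0,1,0,0,1,0],   # 1 bit away from the third
])
print(group_by_hamming(data))    # e.g. [[0, 1], [2, 3]]
```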
● AVs havetwo primarygoals: Reachingtheir destinationand avoidingcollisions. Inaddition, thecontroller
must adjust the vehicle steering angle and speed to achieve these goals.
● Vehiclenavigation control is a topic thathasdrawnthe attention of several researchers. However,human
performance in-ground vehicle navigation has been demonstrated to be trustworthy, and drivers react fast to
abrupt changes in their surroundings.
● Eventhoughothermethodsexist,fuzzylogichasshowntobeaviableoptionforhumancontrolofavehicle.
Human knowledge and experience can be incorporated into fuzzy control by language to link variables.
● Optimumfuzzy logiccanbenefitcontrolsystems forawiderangeof consumerdevices.Researchershave
recently considered fuzzy logic control as a possible solution to AV navigation.
The remainder of this paper is organized as follows. Related research on AVs is described in Section 2, which also introduces the proposed method. The simulation results are discussed and summarized in Section 3. Section 4 then concludes this investigation with a detailed discussion of the observations and their results.
2. Related Works
Recently, the automobile business has been disrupted by developments in autonomous-system research. According to traffic statistics, 94% of road accidents are caused by driver-related problems, such as improper maneuvers and inattentive driving. Automating automobiles can considerably decrease human errors, since impaired behavior and inattention are avoided. Accidents resulting from a driver's inattention or error can be greatly reduced using AVs. In addition, cars
can have the capability to execute precise maneuvers to avoid a collision. The existing methods related to AVs are
listed below.
2.1. Linear parameter varying (LPV)
As AVs become more common, they are predicted to eliminate human errors and contribute to major advances in safety,
mobility, and environmental impacts. Automation of roadways is possible thanks to advances in technology [22]. This
analysis provided new modeling techniques for AV control system design. LPV was a control-oriented model in a
predefined structure. There were machine-learning approaches used to choose the LPV model’s scheduling variables.
An optimization approach was used to construct the LPV model parameters, resulting in an accurate fit to the dataset.
However, the LPV-based vehicle models used to govern AVs offer limited path-analysis capability.
2.2. Human activity recognition (HAR)
A broad area of computer science research was focused on HAR. Improving HAR could provide breakthroughs in
humanoid robotics, medical robots, and self-driving vehicles [23]. Safe and more sympathetic autonomous systems
might be created by recognizing human actions without mistakes or abnormalities. Several ways of estimating an action's direction were offered to improve the network, but with weaker in-vehicle cybersecurity.
2.3. Deep reinforcement learning (DRL)
The connection component of connected AVs (CAVs) enables vehicle-to-external communication, which makes it easier to provide traffic-related information to cars [24]. This research offered a DRL-based system that fuses data acquired through the sensing and connection capabilities of other vehicles around the CAV, including those situated farther downstream, and then utilizes the fused data to guide lane changes, a unique context of CAV operations. Implementing the algorithm in CAVs was intended to improve the safety and mobility of CAV driving operations and achieved high collision avoidance.
2.4. Winner determination problem (WDP)
As AVs become more widely available, car sharing and other forms of joint ownership will become more popular [25].
An auction market for fractional ownership of AVs that is both unique and combinatorial was examined in this research.
The WDP should use bidders' location information so that a car can be shared for commuting to work. They developed the WDP, a key component of different auction designs and pricing methods, in discrete and continuous-time contexts, but with a lower stability ratio.
2.5. Virtual reality approach (VRA)
Game theory describes and governs AV-pedestrian interactions as they compete for road space while avoiding accidents [26]. VRA scenarios with more realistic pedestrian behavior were used to test the realism of the simulators' pedestrian simulations. According to these results, virtual reality trials could be used to forecast the game-theoretic parameters that AVs will need to interact with pedestrians in the future. However, they were less accurate in estimating images. The suggested solution solves the problems with the existing models. This work is expected to improve vehicle stability, accuracy, path analysis, and collision avoidance.
2.5.1. Proposed method: Autonomous system-fuzzy clustering-deep learning (AS-FC-DL)
Intelligent Transportation Systems (ITS) have recently developed automated systems that have increased safety and
comfort for drivers. Cruise control, adaptive cruise control, emergency braking systems with active suspension assistance, automated parking, and rear vision-based vehicle detection are currently standard features in
commercial vehicles. The ultimate objective of automotive technology is the development of self-driving automobiles.
An experienced driver’s knowledge and skill are essential while operating a vehicle. The benefit of using fuzzy logic-
based methods to handle complicated and nonlinear systems like autonomous driving is that they can represent expert
knowledge. Particularly in ITS, fuzzy logic has become a popular tool. Fuzzy controllers effectively control an AV,
allowing for a comfortable and secure ride.
Figure 1 shows the development and implementation of an AV. The driver does not have to be involved in the operation
of an automated vehicle. A wide range of modern technologies is required to make these vehicles work properly.
Actuators, obstacle detectors, and central control devices are all included in the automation process. A further peek
at the underlying architecture of the self-driving automobile system is provided now. Self-driving cars need to detect danger and maneuver the vehicle accordingly. Road traffic assistance and adaptive cruise control are included in the package, as are lane departure and speed limit warnings. In recent years, fuzzy logic technology has
advanced substantially, allowing self-driving cars in smart cities to become a reality sooner.
At present, self-driving vehicles lack the accuracy and reliability needed to compete with human drivers in these
areas. Due to existing deep learning technology limits, the sensor deployed cannot receive and evaluate information
simultaneously as humans gather data about the vehicle system. It is no surprise that fuzzy logic technology is getting much attention, since it is essential for gathering and analyzing data. With its low latency and large bandwidth, fuzzy logic is suitable for providing people with real-time wireless network data and location identification [27].
Large data streams F can only be processed in indirect ways, such as by sampling N_k and grouping them into clusters E_l, defined as
\[
F\left(N_k, E_l, G_d\right) = \sum_{j=0}^{f} \frac{N_j}{k\,N_k^{2}} \tag{1}
\]
As revealed in equation (1), real-time practical technologies G_d, including autonomous robots f, find it difficult to train and test due to the increasing time complexity j, which rises linearly with the number of classifications.
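Since the paper pairs stream sampling with fuzzy clustering, the sketch below shows a standard fuzzy c-means membership update applied to one sampled batch; it is a generic illustration under assumed parameters (cluster count, fuzzifier m, iteration budget), not the authors' implementation:

```python
import numpy as np

def fuzzy_c_means(X: np.ndarray, n_clusters: int = 3, m: float = 2.0,
                  n_iter: int = 50, seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Standard fuzzy c-means on a sampled batch X of shape (n_samples, n_features).
    Returns (centers, memberships); memberships[i, j] is the degree to which
    sample i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # each row of memberships sums to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return centers, u

# Toy usage on a random "sampled" batch drawn from a larger stream.
batch = np.vstack([np.random.randn(100, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
centers, memberships = fuzzy_c_means(batch, n_clusters=3)
print(centers.round(2))
```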
Data are labeled using a hashing mechanism; the proposed fuzzy-based clustering technique's close-matching process k is described as
\[
R(z) = k_{1} z^{11} + k_{2} z^{22} \tag{2}
\]
As revealed in equation (2), R(z) should be used as the label, and z denotes an expression for the data labeling polynomial. A 12-bit binary vector with 11 parity check bits D(z) should be used to encode the data item, stated as
\[
D(z) = k_{0} + k_{1} z + k_{2} z^{2} + \cdots + k_{10} z^{10} \tag{3}
\]
As revealed in equation (3), polynomials k can be generated rather than parity checks to reduce the complexity of the implementation.
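To illustrate the error-correcting-code style of labeling described around equations (2) and (3), the following Python sketch derives parity bits as the remainder of a GF(2) polynomial division; the generator polynomial and word lengths are assumptions chosen for brevity:

```python
def gf2_remainder(dividend: list[int], generator: list[int]) -> list[int]:
    """Polynomial remainder over GF(2); coefficients are listed MSB first."""
    rem = dividend[:]                      # work on a copy
    for i in range(len(dividend) - len(generator) + 1):
        if rem[i] == 1:                    # leading bit set -> subtract (XOR) generator
            for j, g in enumerate(generator):
                rem[i + j] ^= g
    return rem[-(len(generator) - 1):]     # last deg(generator) bits are the remainder

def encode_with_parity(data_bits: list[int], generator: list[int]) -> list[int]:
    """Append parity bits: shift the data word and take the remainder."""
    shifted = data_bits + [0] * (len(generator) - 1)
    parity = gf2_remainder(shifted, generator)
    return data_bits + parity              # systematic codeword: data followed by parity

# Toy usage: a short generator polynomial (x^3 + x + 1 -> 1011) and 4 data bits.
codeword = encode_with_parity([1, 1, 0, 1], [1, 0, 1, 1])
print(codeword)                            # [1, 1, 0, 1, 0, 0, 1]
```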
Figure 1: Development and implementation of an autonomous vehicle based on fusion data.
Fuzzy matching H(z) is an approximation method, similar to the one suggested for in-vehicle cybersecurity, which gives phonetically comparable words a decreased dimensionality; it is given as
\[
H(z) = \frac{R(z)}{D(z)} \tag{4}
\]
As revealed in equation (4), a data item’s label vector R (z) has to be a multiple of the generating polynomial’s
coefficients D (z).
Neural networks can rapidly and accurately analyze autonomous car sensor data using fuzzy logic. Modern computers
and deep learning can be used long-term to create self-driving automobiles. Driver assistance systems (DAS) aim
to prevent or minimize an accident’s severity even before it occurs. These gadgets can alert the driver by sound or
light if a collision is detected. For humans and AVs, self-localization is essential since it provides the basis for spatial
configuration and route planning.
Figure 2 shows the fundamentals of the Steering Fuzzy Controller. Fuzzy controllers are built for each driving duty to
integrate human skills and experience into the controller design as quickly as possible. The primary responsibilities
are to steer the vehicle in the desired direction while avoiding collisions with other objects. They have an advanced
Collision Avoidance System with Fuzzy Target Steering. The Fuzzy Steering module is designed to meet these two
objectives. The next section uses two more modules to accommodate more complex setups or needs: Two fuzzy
modules, one for target steering and the other for collision avoidance. The sum of two fuzzy module outputs gives
the overall steering angle. Because a higher weight is given to the output of the collision avoidance steering fuzzy module, this collision avoidance steering fuzzy controller can have a significant impact on the vehicle's behavior within a very short period.
The fuzzy modules in this work use a mixture of sigmoid membership functions. The sigmoid membership h(z) is stated as follows:
\[
h(z) = \frac{1}{1 + e^{-d(z - b)}} \tag{5}
\]
Figure 2: Fundamentals of steering fuzzy controller.
As revealed in equation (5), simple logic, observations of drivers' behavior, and a description of the factors in play inform the choice of the membership-function parameters d and b and the controlling module's rules.
The combined membership ϑ(z) is obtained by multiplying two sigmoid membership functions together, with parameters d_K, b_K and d_T, b_T:
\[
\vartheta(z) = \frac{1}{1 + e^{-d_{K}(z - b_{K})}} \cdot \frac{1}{1 + e^{-d_{T}(z - b_{T})}} \tag{6}
\]
As revealed in equation (6), the intention of this module is to direct the vehicle, through the parameters d_K and b_K, in the direction b_T of a certain destination z.
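A short Python sketch of the sigmoid membership of equation (5) and the combined membership of equation (6); the parameter values are illustrative assumptions:

```python
import numpy as np

def sigmoid_membership(z: np.ndarray, d: float, b: float) -> np.ndarray:
    """Sigmoid membership function of equation (5): 1 / (1 + exp(-d (z - b)))."""
    return 1.0 / (1.0 + np.exp(-d * (z - b)))

def combined_membership(z: np.ndarray, d_k: float, b_k: float,
                        d_t: float, b_t: float) -> np.ndarray:
    """Product of two sigmoids, as in equation (6); with slopes of opposite
    sign this yields a band-shaped fuzzy set."""
    return sigmoid_membership(z, d_k, b_k) * sigmoid_membership(z, d_t, b_t)

# Toy usage: membership of a heading error (degrees) in a "roughly on target" set.
heading_error = np.linspace(-30, 30, 7)
mu = combined_membership(heading_error, d_k=0.5, b_k=-10, d_t=-0.5, b_t=10)
print(mu.round(3))
```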
Figure 3 shows the fuzzy logic clustering in an autonomous system. Driving issues fall into a category in which the underlying system must make sense of and cope with ambiguity. All human cognition and behavior components must be integrated into environmental perception and driving duties if a car is to drive like a human. Physical components can be used to alter the attributes of another physical system using a fuzzy logic control system. Both open- and closed-loop control systems are used in various applications. In open-loop control, the efficiency of the physical system has no bearing on the input control impact.
In contrast, in a closed-loop control system, the impact of input control relies on the physical system’s performance.
Measurement is the initial step in controlling physical variables, and the sensor detects the regulated signal. Plant
physical systems are under the direct supervision of the system’s controllers. Consequently, a system’s output
characteristics define its input power signal in closed-loop control. The physical system in question can produce a
certain output, and the discrepancy between the anticipated and actual responses indicates a problem using an error
signal. Closed-loop control can benefit from additional positive feedback and performance from a compensator or
regulator system. Fuzzy rules have been used to keep the car in the middle of the right lane using AVs and Real-time
lane detection and tracking. According to vehicle speed and road lane centering, a fuzzy controller produces a steering
angle in response to the vehicle’s direction of travel.
2.5.2. Path analysis
An AV’s control settings are first determined by processing a local reference route. The lateral offset, heading angle
error, and distance to the first curve are all retrieved from the route data.
Figure 3: Fuzzy logic clustering in an autonomous system.
It is required that the angle J_B between the existing vehicle heading and the near-matching direction of a reference path is smaller than a predetermined threshold:
\[
J_{B} = \arctan\!\left(\frac{z_{j+1} - z_{j}}{x_{j+1} - x_{j}}\right), \quad j \in R_{fg} \tag{7}
\]
As revealed in equation (7), z_j and x_j are the coordinates of a location on the reference path, specified in the R_fg coordinate frame of the vehicle body.
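As a concrete reading of equation (7), the following Python sketch (variable names and the threshold are assumptions) computes the heading of the nearest reference-path segment in the vehicle body frame and checks it against a threshold:

```python
import numpy as np

def segment_headings(path_xz: np.ndarray) -> np.ndarray:
    """Heading angle (radians) of each segment of a reference path given as
    an (N, 2) array of (x, z) points in the vehicle body frame."""
    dx = np.diff(path_xz[:, 0])
    dz = np.diff(path_xz[:, 1])
    return np.arctan2(dz, dx)          # atan2 avoids division-by-zero issues

def heading_ok(vehicle_heading: float, path_xz: np.ndarray,
               threshold: float = np.radians(15)) -> bool:
    """True if the nearest path segment's direction is within `threshold`
    of the current vehicle heading."""
    j_b = segment_headings(path_xz)[0]                 # nearest segment first
    err = np.arctan2(np.sin(j_b - vehicle_heading),    # wrap the angle difference
                     np.cos(j_b - vehicle_heading))
    return abs(err) < threshold

# Toy usage: a gently curving local path ahead of the vehicle.
path = np.array([[0.0, 0.0], [5.0, 0.5], [10.0, 1.5], [15.0, 3.0]])
print(heading_ok(vehicle_heading=0.05, path_xz=path))
```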
2.5.3. Conditions of stability
The fuzzy inference surface cot_v can be designed under the system stability criteria. Because they have distinct fuzzy inference surfaces, the stability of the heading angle error and the lateral offset h is examined independently for the given system dynamics K:
\[
\begin{cases}
\dot{z}_{1} = z_{2} \\
\dot{z}_{2} = \mathrm{cot}_{v}\, h(z_{1}, z_{2}) = 0
\end{cases} \tag{8}
\]
As revealed in equation (8), feedback linearization theory in z is used for the lateral control system stability study.
There are three types of steering modules: the driver’s task, which includes target steering and obstacle detection
guiding; the bug steering module, which deals with issues like trapping and drifting; and the vehicle orientation
controller, which uses appropriate inputs and fuzzifies them to produce a rectified steering angle. The Fuzzifier’s job
is to smooth down the sharp input data, and the input specifies the input and output variables for the fuzzy rule base
and the plant under management. Defuzzification is the process of converting a fuzzy set into a single crisp value. The center-of-gravity defuzzification approach defuzzifies the output of each behavioral controller. The center of area (COA) approach, also called the centroid method, is the most often used defuzzification technique. This function returns the crisp value corresponding to the center of the fuzzy set's area.
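A minimal Python sketch of the centroid (COA) defuzzification just described, over an assumed discretized steering-angle universe:

```python
import numpy as np

def centroid_defuzzify(universe: np.ndarray, membership: np.ndarray) -> float:
    """Center-of-area (centroid) defuzzification: the crisp output is the
    membership-weighted average of the discretized output universe."""
    total = membership.sum()
    if total == 0.0:            # no rule fired -> return a neutral steering angle
        return 0.0
    return float((universe * membership).sum() / total)

# Toy usage: steering-angle universe in degrees and an aggregated fuzzy output.
angles = np.linspace(-30, 30, 61)
aggregated = np.maximum(0.0, 1.0 - np.abs(angles - 8.0) / 10.0)   # peak near +8 deg
print(round(centroid_defuzzify(angles, aggregated), 2))            # ~8.0
```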
Figure 4 shows the strategy of recurrent neural networks. There are a variety of onboard sensors, including cameras,
radars, LiDARs, ultrasonic sensors, and GPS units in self-driving vehicles. They use this information to make
autonomous decisions based on a data stream. The computer in the vehicle makes driving decisions based on these observations. Perception, planning, and action can all be accomplished through a single pipeline. The modular pipeline's constituents can potentially be created utilizing deep learning technologies. A safety monitor monitors each module's safety.
Figure 4: Strategy of recurrent neural networks.
One option is to use an action-oriented pipeline of perception-planning-action to implement the architecture. Deep
learning can be used to create the components of a sequential pipeline. A safety monitor normally ensures each
module’s safety. Behavior arbitration, high-level path planning, perception and localization, and motion controllers
are four high-level components to consider when grouping deep learning articles covering AV system methodologies.
RNN addressing safety, data sources, and hardware considerations for creating deep learning modules for self-driving
vehicles are included in this collection.
It is possible to approximate the temporal dynamics of sequence data with accuracy d_v using the classifier W_v and the input layer v_r of the RNN, described as
\[
U_{t}^{v_{r}} = W_{g}^{v_{r}}\, d_{v}^{r-1} \tag{9}
\]
As revealed in equation (9), the gradient signal U_v can be amplified τ times over the period covered by U_t^{v_r}.
Neural architectures d(r) and the hidden layer v_r are learned using recurrent layers, which exploit the temporal correlations of sequence data through the activation secg, given as
\[
d(r) = \mathrm{secg}\!\left(U_{t}^{d_{r}} * v_{r} + W_{g}^{d_{r-1}} * e_{d_{g}}\right) \tag{10}
\]
As revealed in equation (10), the activation weights are denoted by U_t^{d_r}, while the bias values are represented by W_g^{d_{r-1}}; g and e_d stand for element-by-element multiplication.
Gradient vanishing leads the weights in the network to stop updating properly, resulting in a very low weighting factor; the output layer of the RNN, h(r), is defined as
\[
h(r) = \mathrm{secg}\!\left(d(r)\right) * v_{r} \tag{11}
\]
As revealed in equation (11), the weights of the network gates and memory cells are combined by the input layer v_r and the hidden layer secg(d(r)).
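Equations (9)-(11) follow the usual recurrent-cell pattern of input, hidden, and output layers; the sketch below is a generic Elman-style cell in Python, with names chosen for illustration rather than taken from the paper:

```python
import numpy as np

class SimpleRNNCell:
    """Generic Elman-style recurrent cell: h_t = tanh(W_x x_t + W_h h_{t-1} + b),
    y_t = W_y h_t. Illustrates the input/hidden/output structure of eqs. (9)-(11)."""

    def __init__(self, n_in: int, n_hidden: int, n_out: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W_x = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_h = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.W_y = rng.normal(0, 0.1, (n_out, n_hidden))
        self.b = np.zeros(n_hidden)

    def forward(self, sequence: np.ndarray) -> np.ndarray:
        """Run the cell over a (T, n_in) sequence and return (T, n_out) outputs."""
        h = np.zeros(self.W_h.shape[0])
        outputs = []
        for x_t in sequence:
            h = np.tanh(self.W_x @ x_t + self.W_h @ h + self.b)   # hidden-state update
            outputs.append(self.W_y @ h)                          # output layer
        return np.array(outputs)

# Toy usage: a 10-step sequence of 4-dimensional sensor features.
cell = SimpleRNNCell(n_in=4, n_hidden=8, n_out=2)
print(cell.forward(np.random.randn(10, 4)).shape)   # (10, 2)
```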
An autonomous car’s first challenge is comprehending and locating itself in the surrounding environment, given a
predetermined path through the road network. The vehicle’s further activities are guided by the arbitration mechanism
shown in this illustration. An automated motion control device responds to errors that occur during movement. An
overview of deep learning for AVs will be provided in the following sections, along with a look at the various design
techniques used to create this hierarchical decision-making process.
Figure 5 shows ensuring safety and security using deep learning. To determine the distance between nearby objects,
an AVhi is expected to include a camera, a smart radar scanner, and a transmitter for communicating vital acceleration
speed position (AVP) beacon with closer AVhi, among other sensors, all built into the device. Sensor data and beacon
data can be used to better control AVhi in the Intelligent Transportation System (ITS) to ensure the best possible traffic
flow. If an attacker is present, it is possible that their actions can cause the system to malfunction, increasing the risk of
an accident or a decrease in traffic flow. A malicious opponent might inject inaccurate data into the AVhi system using
sensor readings and beacons; therefore, the system needs more than just improved road control and administration.
They want to provide a dynamic system that can accurately estimate how far apart AVhi are in connection to each other.
Acquiring and processing information about the vehicle’s velocity, location, and, most critically, its distance from the
surrounding vehicles or objects and their velocities is essential for any AV (AVhi). This project focuses on the car-following model. AVhi must be familiar with its essential parameters to function properly. It is possible to describe the updated speed of AVhi as an expression of how fast its lead vehicle moves without collision avoidance:
\[
\mu_{i}'(r) = \mu_{i}(r) + \rho\,\hat{\mu}_{i-1}(r) \tag{12}
\]
As revealed in equation (12), AVhi indicates an AV, μ'_i(r) its predicted velocity, and ρ the response factor. AVhi must constantly monitor and adjust its spacing to prevent an accident μ_i(r) from occurring. As the next AV moves forward, each AVhi can use its onboard sensors, including cameras and radar, as well as beacon signals from other AVhi and data from Smart Roadside Units (sRSU), to estimate the lead vehicle's speed μ̂_{i-1}(r). Autonomous decision-making systems
are at the core of self-driving vehicles. They can handle data streams from various cameras or inertia sensors. Deep
learning algorithms are used to model this data, and the resulting models make decisions that are appropriate to the
current situation the vehicle is in. Vehicles must adapt to the ever-changing behavior of other vehicles around them. AVs with deep learning can make judgments in the moment. This improves the safety and dependability
of AVs.
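One hedged reading of equation (12) is a simple car-following speed update driven by a fused estimate of the lead vehicle's speed; the Python sketch below uses an assumed response factor and fusion weight and is an interpretation, not the paper's exact formula:

```python
def updated_speed(own_speed: float, lead_speed_estimate: float, rho: float = 0.3) -> float:
    """Car-following style update in the spirit of equation (12):
    the ego vehicle nudges its speed toward the estimated lead-vehicle speed."""
    return own_speed + rho * (lead_speed_estimate - own_speed)

def fuse_lead_speed(radar_speed: float, beacon_speed: float, w_radar: float = 0.6) -> float:
    """Weighted fusion of two independent estimates of the lead vehicle's speed
    (onboard radar and the lead vehicle's AVP beacon)."""
    return w_radar * radar_speed + (1.0 - w_radar) * beacon_speed

# Toy usage: ego at 22 m/s, radar says the lead drives 19.5 m/s, beacon says 20.1 m/s.
lead = fuse_lead_speed(19.5, 20.1)
print(round(updated_speed(22.0, lead), 2))   # eases from 22 toward the lead: ~21.32
```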
Figure 6 shows the predictive control block diagram for an AV. In recent decades, the fast advancement of technology
has resulted in a wide range of new solutions that make life easier and more secure for users. The long-term aim of the
robotics industry is to reduce the amount of manual labor performed daily by humans while increasing the precision,
speed, and power required for any operation. The main areas of robotics applications that have advanced the most
are automated systems and small autonomous robotic vehicles that move autonomously and communicate data to a
central control station over a local or global network. Recent advancements in autonomous robotic vehicle remote
control and monitoring have been remarkable. User mechatronics, deep learning, and multi-agent systems are a few
technologies used in the robotic vehicle sector to help with the automotive operation. A driver’s state is a classification
of the driver’s conduct into a set number of states. A vehicle must have some traits to qualify as intelligent or smart.
An action engine can characterize a semi-automatic vehicle as utilizing automation for challenging activities, such
as navigation.
Autonomous or robotic vehicles, on the other hand, rely only on automation technologies. Since the invention of the
integrated circuit, the complexity of automation systems has increased. Following that, researchers and automakers
created a wide range of automated functions, mostly for use in autos. V2V and V2I protocols enable communication
between infrastructure and vehicles to support services such as navigation, entertainment, and traffic safety. Autonomous
operation is commonly defined as the ability of a vehicle to regulate its environment and maneuver around it without
human intervention. Radar, LIDAR (Light Detection and Ranging), GPS, Odometry, and 3D scanning are technologies
used by AVs to perceive their surroundings. Advanced control systems collect and interpret data from the sensors to
determine acceptable routes, impediments, and important markers and then use this knowledge to make decisions.
Figure 6: A predictive control block diagram for an autonomous vehicle.
Figure 5: Ensuring safety and security using deep learning.
Multiple radars, lidars, and cameras can be combined into a single model or image to perform sensor fusion. The
balanced strengths of the various sensors result in a more accurate model.
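The sensor fusion described above can be illustrated with inverse-variance weighting, a standard technique used here purely as an example; the noise figures are assumptions:

```python
import numpy as np

def fuse_measurements(values: np.ndarray, variances: np.ndarray) -> tuple[float, float]:
    """Inverse-variance weighted fusion of independent distance measurements.
    Returns the fused estimate and its variance; less noisy sensors get more weight."""
    weights = 1.0 / variances
    fused = float((weights * values).sum() / weights.sum())
    fused_var = float(1.0 / weights.sum())
    return fused, fused_var

# Toy usage: radar, lidar, and camera distance estimates (meters) to the same object.
distances = np.array([24.8, 25.1, 26.0])
noise_var = np.array([0.50, 0.05, 1.00])      # lidar assumed the most precise
print(fuse_measurements(distances, noise_var))
```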
Autonomous automobiles include control systems that can evaluate sensor data to discriminate between various
vehicles on the road and to design a route to the intended destination. In this phase, the vehicle is able to control itself autonomously. Many historical research initiatives on vehicle autonomy depend largely on artificial parts of
their surroundings, such as magnetic tapes, for their automation. There must be a long-term capacity to function well
in severe environmental uncertainty and compensate for possible system damage without external intervention. As can
be observed from several initiatives, it is regularly suggested to enhance the capabilities of the AV by implementing
communication networks with both nearby vehicles (Collision Avoidance) and long-distance vehicles. Some now
consider the vehicle’s behavior or potential autonomy with two more outside inputs. The proposed method enhances
vehicle cybersecurity, stability, accuracy, path analysis, and collision avoidance.
3. Results and Discussion
When AVs are used, human error will be eliminated, allowing greater mobility, safety, and sustainability. AVs have
already started to emerge on highways across the globe. Diverse public and commercial institutions have already put the technology through its paces. The three essential components of the self-driving system are orientation, visibility, and decision-making. Each element is responsible for several duties, carried out using a variety of technologies.
This technology’s reliability and accuracy are improved due to the inclusion of cameras and internal measurement
equipment. Perception is another word for AV’s visibility, used to identify infrastructure and traffic. These maps allow
the vehicle to comprehend its immediate surroundings and prepare for turns and junctions outside the range of view
of its sensor systems. The proposed method analyzes vehicle cybersecurity, stability, accuracy, path analysis, and
collision avoidance.
Table 1 shows the vehicle cybersecurity ratio. AVs can use deep learning to identify aberrant network traffic patterns
caused by malware. The intrusion detection module necessitates using a recognition architecture to identify and halt malware attacks on the vehicle through a smartphone. AV decision-making can be improved using an RNN. Initial forecasting of nearby vehicle movements is made probabilistically, based on assessing the potential danger. For each state, a quantitative and accurate risk assessment function synthesizes the current Time-To-Collision (TTC) and Time-Headway (TH), and the outputs of the coherent threat estimation tool are applied to analyze the danger in every
Figure6:Apredicvecontrolblockdiagramforanautonomousvehicle.
79
state quantitatively and properly. Thus, the safest route can be constructed by the best search. AVs are more vulnerable
to hacks because of the increased reliance on digital technology. Malicious hackers can access automated vehicles and
infrastructure remotely through the communication networks installed in such infrastructure elements. As a result,
a model for classifying cyber threats can be utilized that considers the vulnerabilities found in software. Cyber-risk
can be represented using an RNN model, prefaced by the variables. The proposed method enhances cybersecurity by
95.2% compared to the existing methods.
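For reference, the two risk indicators mentioned above can be computed from relative kinematics using their standard definitions (not taken verbatim from the paper), as in this short Python sketch:

```python
def time_to_collision(gap_m: float, ego_speed: float, lead_speed: float) -> float:
    """Time-To-Collision: gap divided by closing speed; infinite if not closing."""
    closing = ego_speed - lead_speed
    return gap_m / closing if closing > 0 else float("inf")

def time_headway(gap_m: float, ego_speed: float) -> float:
    """Time-Headway: time for the ego vehicle to cover the current gap."""
    return gap_m / ego_speed if ego_speed > 0 else float("inf")

# Toy usage: 30 m gap, ego at 25 m/s, lead at 20 m/s.
print(time_to_collision(30.0, 25.0, 20.0))   # 6.0 s
print(time_headway(30.0, 25.0))              # 1.2 s
```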
Table 2 shows the stability ratio. Stability is the capacity to absorb external forces, which applies to the vehicle. If
the vehicle travels in a straight path at an even speed, it is at its most stable point. Vehicle speed, lateral forces, tyre
cornering characteristics, steering system stiffness, steering transmission ratio, and the vertical axis of inertia moments,
to name a few, are additional factors that affect vehicle handling and driving stability. The advantages of self-driving
cars are clearly exciting, but the path to complete transportation autonomy is likely to be difficult and uncertain. When
managing an AV, these systems add complexity because various control systems are engaged based on the FC. They
all have to operate together, offering stability guarantees and withstanding structural and environmental changes in a
dynamic environment. The proposed method enhances the stability ratio by 96.8% compared to the existing methods.
Table 3 shows the accuracy ratio. A car must constantly learn and adapt to keep up with the ever-changing driving habits of other vehicles. Self-driving cars can make choices in real time due to deep learning. This improves the safety and trustworthiness of self-driving vehicles.
Table 2: Stability ratio
Number of vehicles LPV VRA AS-FC-DL
10 57 72 79
20 51.3 61.6 85
30 45 73 76.3
40 56 75 91
50 48.4 65 83
60 54 67 94.4
70 52 69.1 77
80 46 63 93
90 53.9 68 88.9
100 49 70.5 96.8
VRA: Virtual reality approach, LPV: Linear parameter varying
Table 1: Vehicle cybersecurity ratio
Number of vehicles LPV VRA AS-FC-DL
10 44.9 64 78.5
20 54.5 67 84
30 55 64 79
40 58 70.8 91
50 50.1 61.3 76
60 56 63.7 85.7
70 59 66 79
80 57 68.4 80
90 53 59 85
100 52.8 69 95.2
VRA: Virtual reality approach, LPV: Linear parameter varying
DL helps AVs in several ways, including processing and interpreting vast
volumes of data supplied by the vehicle’s cameras and sensors fast and helping to enhance vehicle fuel economy and
safety. Images or objects can be classified using RNNs because the network predicts a label or number for each image
(the input and the output are both known). The network is utilized to lower the error rate since the images’ labels
are known. It is utilized by a huge number of self-driving cars to navigate in real time. With astonishingly precise
depth perception, LiDAR can measure an object's distance from as far away as 60 m. The suggested technique
improves accuracy by 98.4% over the currently used approaches.
Figure 7 shows the path analysis ratio. Path planning allows an AV to identify the safest, most feasible, and cost-effective
route between two sites. The predictive control method solves a nonlinear optimization problem for a finite-time and
time-constrained optimum control issue. Route planning and control are utilized for overtaking on two-lane roadways using a nonlinear PCM. There are two techniques to map out a route (hierarchical and parallel). Designing long-term missions
for AVs takes a fraction of the time and effort it used to. The higher-level objective is broken down into smaller tasks
and reassembled at each subsequent level. Complex problems are easier to solve when using the hierarchical approach;
however, fine motor skills are hampered because of their impact on vehicle performance. The parallel technique allows
many tasks to be accomplished at the same time. Each controller has its unique sensors and actuators. Due to the high
frequency of their operation, control systems provide a high degree of fluidity and performance. Complex motion
planning technologies are not required. Path planning is improved by 91.8% using the AS-FC-DL approach instead of
the previous methods.
Figure 8 shows the collision avoidance ratio. Collision avoidance systems warn, alert, or otherwise assist drivers to help them avoid crashes. Different technologies and sensors are used in collision avoidance systems, including radar, lasers, cameras, GPS, and deep learning.
Table 3: Accuracy ratio
Number of vehicles LPV VRA AS-FC-DL
10 49.7 64 76.5
20 48 63.5 82
30 57 67 78.9
40 60.5 72 84
50 47 63 77
60 52 61.7 92
70 55 65.5 90
80 45 73 94
90 51 71 89
100 53 74 98.4
VRA: Virtual reality approach, LPV: Linear parameter varying
Figure7:Pathanalysisrao.
81
control issue. Interested readers should consult to learn more about the FC-based approach to collision avoidance
methods. Fuzzy logic and reactive RNNs have been coupled to provide fine, resilient, and adaptable control; however, fuzzy logic can also be used alone to provide quick and noticeably smooth control. It is not easy to classify mobile robots' navigation process into clear subsections, although a categorization of autonomous robots is still useful. The
robot keeps a copy of the world’s model for obstacle-free route planning in a DL-based approach. This technique
focuses solely on using sensor data from the surroundings to make navigational decisions. Systems employ sensor data
for autonomous navigation. This idea expands the paradigm to encompass both goal- and target-oriented approaches
instead of just the environment's map. Compared with the current approaches, the proposed strategy reduces collisions by 44.1%. The proposed method assessed the vehicle's stability, accuracy, path analysis, and collision avoidance.
Figure 8: Collision avoidance ratio.
4. Conclusion
Fuzzy-based clustering for large-scale deep learning has been shown in this study employing autonomous systems.
The steering angle and vehicle velocity are controlled by both controllers, which receive as input the longitudinal, lateral, and direction errors. Current deep-learning approaches are either being utilized or considered to create several
components for self-driving vehicles. The game-theoretic technique assesses the interaction between the attacker and the
AV. Using deep learning to build fuzzy rules and fine-tune the variables of a vehicle's fuzzy logic controllers reduces the implementation time in various systems. The effectiveness of AV methods will likely be shown through simulations.
The underlying environmental components should form the basis for all viable combinations of autonomous and
human-driven vehicles. The strategy becomes more complicated because the AVs will operate on an unstable road
full of human-driven vehicles. Multiple models with force computations for each tyre are needed to calculate the
steering angle while driving. Future research is anticipated to employ the fuzzy logic controllers suggested in this
study to control autonomous underwater and aerial vehicles. The proposed approach boosts vehicle cybersecurity
(95.2%), stability (96.8%), accuracy (98.4%), path analysis (91.8%), and collision avoidance (44.1%), according to
the numerical results.
5. Funding
“This research received no external funding.”
6. Conflicts of Interest
“The authors declare no conflicts of interest.”
References
[1] M. Carranza-García, J. Torres-Mateo, P. Lara-Benítez, and J. García-Gutiérrez, “On the Performance of
One-Stage and Two-Stage Object Detectors in Autonomous Vehicles Using Camera Data,” Remote Sensing,
vol. 13, no. 1, p. 89, 2021.
[2] G. Li, Y. Yang, T. Zhang, X. Qu, D. Cao, B. Cheng, and K. Li, “Risk Assessment Based Collision Avoidance
Decision-Making for Autonomous Vehicles in Multi-scenarios,” Transportation Research Part C: Emerging
Technologies, vol. 122, p. 102820, Jan. 2021.
[3] M. Deveci, D. Pamucar, and I. Gokasar, “Fuzzy Power Heronian Function Based CoCoSo Method for the
Figure8:Collisionavoidancerao.
82
Advantage Prioritization of Autonomous Vehicles in Real-Time Traffic Management”, Sustainable Cities
and Society, vol. 69, p. 102846, Jun. 2021.
[4] M. H. Ali, and M. F. Zolkipli, “Intrusion-Detection System Based on Fast Learning Network in Cloud
Computing,” Advanced Science Letters, vol. 24, no. 10, pp. 7360-7363, 2018.
[5] A. Alsalman, L. N. Assi, S. Ghotbi, S. Ghahari, and A. Shubbar, “Users, Planners, and Governments
Perspectives: A Public Survey on Autonomous Vehicles Future Advancements,” Transportation
Engineering, vol. 3, p. 100044, Jan. 2021.
[6] W. Ma, and S. Qian, “High-resolution Traffic Sensing with Probe Autonomous Vehicles: A Data-Driven
Approach,” Sensors, vol. 21, no. 2, p. 464, Jan. 2021.
[7] N. Tork, A. Amirkhani, and S. B. Shokouhi, “An Adaptive Modified Neural Lateral-Longitudinal Control
System for Path Following of Autonomous Vehicles,” Engineering Science and Technology, an International
Journal, vol. 24, no. 1, pp. 126-137, Feb. 2021.
[8] Q. Meng, and L. T. Hsu, “Integrity for Autonomous Vehicles and Towards a Novel Alert Limit Determination
Method.” in Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile
Engineering, vol. 235, no. 4, pp. 996-1006, 2021.
[9] D. J. Yeong, G. Velasco-Hernandez, J. Barry, and J. Walsh, “Sensor and Sensor Fusion Technology in
Autonomous Vehicles: A Review,” Sensors, vol. 21, no. 6, p. 2140, Mar. 2021, doi: 10.3390/s21062140.
[10] S. Szénási, G. Kertész, I. Felde, and L. Nádai, “Statistical Accident Analysis Supporting the Control of
Autonomous Vehicles,” Journal of Computational Methods in Sciences and Engineering, vol. 21, no. 1,
pp. 85-97, Jan. 2021.
[11] R. Bridgelall, and E. Stubbing, “Forecasting the Effects of Autonomous Vehicles on Land Use,”
Technological Forecasting and Social Change, vol. 163, p. 120444, 2021.
[12] M. H. Hasan, and P. Van Hentenryck, “The Benefits of Autonomous Vehicles for Community-Based Trip
Sharing,” Transportation Research Part C: Emerging Technologies, vol. 124, p. 102929, Mar. 2021.
[13] S. Feng, X. Yan, H. Sun, Y. Feng, and H. X. Liu, “Intelligent Driving Intelligence Test for Autonomous
Vehicles with Naturalistic and Adversarial Environment,” Nature Communications, vol. 12, no. 1, p. 748,
Feb. 2021, doi: 10.1038/s41467-021-21007-8.
[14] X. Sun, S. Cao, and P. Tang, “Shaping Driver-Vehicle Interaction in Autonomous Vehicles: How the New
in-vehicle Systems Match the Human Needs,” Applied Ergonomics, vol. 90, p. 103238, Jan. 2021.
[15] J. Godoy, V. Jiménez, A. Artuñedo, and J. Villagra, “A Grid-Based Framework for Collective Perception in
Autonomous Vehicles,” Sensors (Basel), vol. 21, no. 3, p. 744, Jan. 2021, doi: 10.3390/s21030744.
[16] A. Khadka, P. Karypidis, A. Lytos, and G. Efstathopoulos, “A Benchmarking Framework for Cyber-Attacks
on Autonomous Vehicles,” Transportation Research Procedia, vol. 52, pp. 323-330, 2021.
[17] Q. D. Tran, and S. H. Bae, “An Efficiency Enhancing Methodology for Multiple Autonomous Vehicles in
an Urban Network Adopting Deep Reinforcement Learning,” Applied Sciences, vol. 11, no. 4, p. 1514, Feb.
2021.
[18] T. Gruden, N. B. Popović, K. Stojmenova, G. Jakus, N. Miljković, S. Tomažič, and J. Sodnik,
“Electrogastrography in Autonomous Vehicles--an Objective Method for Assessment of Motion Sickness
in Simulated Driving Environments,” Sensors (Basel), vol. 21, no. 2, p. 550, Jan. 2021, doi: 10.3390/
s21020550.
[19] A. B. Parsa, R. Shabanpour, A. Mohammadian, J. Auld, and T. Stephens, “A Data-Driven Approach to
Characterize the Impact of Connected and Autonomous Vehicles on Traffic Flow,” Transportation Letters,
vol. 13, no. 10, pp. 687-695, 2021.
[20] A. Bogyrbayeva, M. Takalloo, H. Charkhgard, and C. Kwon, “An Iterative Combinatorial Auction Design
for Fractional Ownership of Autonomous Vehicles,” International Transactions in Operational Research,
vol. 28, no. 4, pp. 1681-1705, 2021.
[21] P. Stasinopoulos, N. Shiwakoti, and M. Beining, “Use-stage Life Cycle Greenhouse Gas Emissions of
the Transition to an Autonomous Vehicle Fleet: A System Dynamics Approach,” Journal of Cleaner
Production, vol. 278, p. 123447, 2021.
[22] D. Fényes, B. Németh, and P. Gáspár, “A Novel Data-Driven Modeling and Control Design Method for
Autonomous Vehicles,” Energies, vol. 14, no. 2, p. 517, Jan. 2021.
[23] M. Tammvee, and G. Anbarjafari, “Human Activity Recognition-Based Path Planning for Autonomous
Vehicles,” Signal, Image and Video Processing, vol. 15, no. 4, pp. 809-816, Jan. 2021.
[24] A. O. Salman, and O. Geman, “Evaluating Three Machine Learning Classification Methods for Effective
COVID-19 Diagnosis," International Journal of Mathematics, Statistics, and Computer Science, vol. 1,
pp. 1-14, 2022, doi: 10.59543/ijmscs.v1i.7693.
[25] M. Takalloo, A. Bogyrbayeva, H. Charkhgard, and C. Kwon, “Solving the Winner Determination Problem
in Combinatorial Auctions for Fractional Ownership of Autonomous Vehicles,” International Transactions
in Operational Research, vol. 28, no. 4, pp. 1658-1680, 2021.
[26] F. Camara, P. Dickinson, and C. Fox, “Evaluating Pedestrian Interaction Preferences with a Game
Theoretic Autonomous Vehicle in Virtual Reality,” Transportation Research Part F: Traffic Psychology
and Behaviour, vol. 78, pp. 410-423, 2021.
[27] A. A. Salamai, “An Approach Based on Decision-Making Algorithms for Qos-Aware Iot Services
Composition,” Journal of Intelligent Systems and Internet of Things, vol. 8, no. 1, pp. 8-16, 2023.