International Journal of Mathematical, Engineering and Management Sciences
Vol. 8, No. 5, 943-965, 2023
https://doi.org/10.33889/IJMEMS.2023.8.5.054
Reliability Analysis of the Functional Capabilities of an Autonomous Vehicle
Brain Ndumiso Ndlovu
Department of Industrial and Systems Engineering,
University of Pretoria, Cnr Lynnwood Road and Roper St, Hatfield, 0083, Pretoria, South Africa.
Email: bn.ndlovu@up.ac.za
Michael Ayomoh
Department of Industrial and Systems Engineering,
University of Pretoria, Cnr Lynnwood Road and Roper St, Hatfield, 0083, Pretoria, South Africa.
Corresponding author: michael.ayomoh@up.ac.za
(Received on March 1, 2023, Accepted on June 13, 2023)
Abstract
The reliability of autonomous vehicles (AVs) is a research domain of high interest, covering a diverse pool of researchers,
captains of smart auto industries, government agencies, and technology enthusiasts. The reliability of AVs is not extensively
explored in the literature, despite the apprehension due to fatal accidents recorded in the past. Despite being in existence for over
a decade, AVs have yet to reach a certified commercial-level deployment. Due to the complexity that comes with the self-
operation of an AV, the issue of trustworthiness, which signifies reliability, becomes inevitable. The identification, analysis, and
categorization of functional elements using systems engineering conceptual design principles and the linkage of these to the road
traffic rules were conducted to address this. Also, the evaluation of the reliability of AVs using various developed vehicles from
selected industries was addressed by integrating the traffic rules. The analysis of reliability was carried out using life-to-failure
data premised on the probability plotting approach. It was found that there is a 99.94% chance that an autonomous vehicle will
fail at least one of the traffic rules within 20 minutes of driving. Furthermore, the hazard rate of AVs was found to be increasing, indicating a heightened likelihood of accidents.
Keywords- Autonomous vehicles, Functional capability, Physical embodiment, Reliability analysis, Systems engineering
conceptual design.
1. Introduction
Autonomous vehicle (AV) performance in respect of adherence to traffic rules is still a major concern in
the literature on self-driving robotic systems (Uzair, 2021). This is impacted by the functional capabilities
of an AV, which mostly extend to questions related to its reliability. Grigorescu et al. (2020) and Chy et
al. (2021) stated that the capabilities of Autonomous Vehicles (AVs) have over the years transformed and
improved how scenarios along a driving route should be predicted, including reactions to complex,
unknown, or unforeseen situations. Automotive industries involved with the planning, design, and
manufacture of AVs, members of academia, researchers, and students in the fields of robotics, smart
systems, reliability engineering, and government agencies, amongst others, are the main targets of this
research. The concept of AVs and their development gradually gained prominence more than two decades ago. This advancement was driven by progress in artificial intelligence and its subfields, such as deep learning. The emerging technology that comes with AVs is complex and risky, spanning
across geopolitical and socio-economic domains (Tan and Taeihagh, 2021). With this in consideration,
any nation that chooses to introduce AVs may experience difficulties, especially among developing
economies. This is based on the fact that most developing economies lack the Fourth Industrial
Revolution (4IR) technology while the technology associated with AVs is getting much more advanced
beyond the 4IR technology (Ndung’u and Signé, 2020). Hence, if there is a lack of technological
competency, there will be issues with any AV developed in such societies. The inherent complexity of an
AV makes it difficult to comply with certain conditions within a certain period. This inherent complexity
also makes it difficult for an AV to be deployed in certain societies.
One of the most notable problems facing today’s AVs is the inherent operational and functional complexity introduced by the underlying technology, which revolves around multiple diverse interacting sensors and the need for real-time decision-making to avert rough rides and, above all, to minimize or eliminate accidents. AVs undertake activities regarded as complex because they attempt to imitate human dexterity in driving. These actions require an AV to initiate and make quick, smart, and reliable decisions accurately and precisely.
According to Grigorescu et al. (2020) and Chy et al. (2021), AV research has grown over the years, especially with respect to how AVs predict situations and how they should react to complex, unknown, or unforeseen situations. The improvements recorded were made possible by deep learning and the advancement of Artificial Intelligence (AI) technology and its applications. Despite this good progress in the field of AVs, their reliability remains a problem, and there is little research on their reliability on the road.
Despite the recorded advances with AVs, situation prediction still poses a significant challenge in real-life
scenarios. Decision-making by AVs depends on the method used to formulate their navigation strategies
for various situations (Schwarting et al., 2018). By extension, the reliability of an AV’s performance with
respect to traffic rules is largely unknown. This makes it difficult to trust the technology that these
vehicles rely on. No substantial evidence of reliability analysis is provided in the literature on AVs with
respect to on-road autonomous navigation. Due to the questionable reliability of this emerging knowledge
domain, the delay in commercializing AVs has been upheld as the issue of trustworthiness has remained
in the spotlight.
The literature in the field of AVs has grown significantly over the years. For object detection, Shelhamer et al. (2017) and Wu (2017) discussed the Convolutional Neural Network (CNN) approach as a promising real-time object detection tool. A CNN is a multilayer neural network, also referred to as a deep learning architecture, that applies Artificial Intelligence (AI) principles to detection tasks. CNNs were inspired by the visual system of living beings and are commonly used to analyze images (Ghosh et al., 2020). Hnewa and Radha (2021) reviewed CNN-based approaches and found that they performed well for object detection in clear weather conditions. Rainy weather, however, yielded less accurate detection than clear conditions, although this does not mean the method fails outright in rain. The reduced accuracy stems from the inability to detect and locate objects as expected at some points, caused by rain cover obscuring important details of the objects (Hnewa and Radha, 2021). The main performance metric used in the tests was the Mean Average Precision (MAP), reported to be the most popular performance measure since 2012.
Vaicenavicius et al. (2021) highlighted one important functional requirement, i.e., the ability of an AV to stop in order to avoid harm or danger; this requirement subsumes several others. Badue et al. (2021) surveyed research on self-driving vehicles with autonomous driving capability of level 3 and above. The study identified two main functional requirement categories: the perception system and the decision-making system of an autonomous vehicle. The perception system of a
self-driving vehicle consists of the following functional requirements: (1) the vehicle should support different methods of localization, i.e., Light Detection and Ranging (LIDAR)-based, LIDAR-plus-camera-based, and camera-based localization; (2) the vehicle should be able to map obstacles offline via regular-spacing and varied-spacing metric representations; (3) an AV should be able to conduct road mapping using metric and topological representations; (4) an AV should be able to track moving obstacles using traditional, model-based, stereo-vision-based, grid-map-based, sensor-fusion-based, or deep-learning-based Moving Objects Tracking (MOT); and (5) an AV should be able to detect and recognize traffic signalization. The latter can be achieved by traffic light detection and recognition, traffic sign detection and recognition, and pavement marking detection and recognition.
General decision-making by AVs is still a source of concern and can be occasionally immature. AVs can
plan how to behave or react in normal and sometimes complex situations. However, complex situations
are usually more difficult to handle; hence, mistakes occur. Complex situations in decision-making come
with a dynamic environment. The dynamic environment brings uncertainty to data acquisition handling.
Data acquisition helps in understanding an environment in real-time to plan appropriately, to avoid
dangerous situations. Data acquisition and analysis in real-time are still challenges (González et al.,
2015).
In respect of decision-making for a self-driving vehicle, Badue et al. (2021) described the following functional requirements that a vehicle should be able to meet: (1) conduct route planning: the vehicle can achieve this using goal-directed, separator-based, hierarchical, and bounded-hop techniques, or a combination of these; (2) select its expected behavior: the techniques that can be adopted for this requirement are Finite State Machines (FSM) (Jo et al., 2015), ontologies (Zhao et al., 2015; Zhao et al., 2017), and Markov decision processes; (3) plan its motion: motion planning consists of graph search, sampling, interpolating curves such as clothoid curves (González et al., 2015), and numerical optimization techniques; and (4) control its systems: the methods used for this are direct hardware actuation control and path tracking.
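As an illustration of the behavior-selection idea referenced above, the sketch below implements a minimal finite-state behavior selector in Python. The states, perception inputs, and transition rules are hypothetical and chosen only for clarity; they are not taken from Badue et al. (2021) or from any specific AV stack.

```python
# Illustrative sketch only: a minimal finite-state behavior selector of the kind
# referenced above (FSM-based behavior selection). States, inputs, and transition
# rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead: bool      # e.g., a pedestrian or stopped vehicle in the lane
    traffic_light_red: bool
    lane_blocked: bool

def next_behavior(state: str, p: Perception) -> str:
    """Return the next high-level behavior given the current state and perception."""
    if state == "CRUISE":
        if p.traffic_light_red or p.obstacle_ahead:
            return "STOP"
        if p.lane_blocked:
            return "CHANGE_LANE"
        return "CRUISE"
    if state == "STOP":
        # Resume only when the path is clear and the light is not red.
        return "CRUISE" if not (p.traffic_light_red or p.obstacle_ahead) else "STOP"
    if state == "CHANGE_LANE":
        # Assume the lane-change manoeuvre completes in one step for simplicity.
        return "CRUISE"
    return "STOP"  # fail-safe default

# Example: a cruising vehicle encounters a red light, then the light turns green.
state = "CRUISE"
state = next_behavior(state, Perception(False, True, False))   # -> STOP
state = next_behavior(state, Perception(False, False, False))  # -> CRUISE
print(state)
```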
The identification of functional capabilities for AVs was discussed by Matthaei and Maurer (2015),
Vaicenavicius et al. (2021) and Badue et al. (2021). The functional requirements are needed to make sure
that the capabilities are met. These requirements speak to the AVs’ intelligence; therefore, they are
categorized into two main systems: the perception system (the vehicle should always know how to identify its environment) and the cognition system for decision-making (determining what the AV should do and how it should do it). Sviatov et al. (2021) described a structural and functional model of an
autonomous vehicle control system, intending to generate several mathematical problems. Their study
identified the control system of an autonomous vehicle as a functional requirement. This requirement
refers to the vehicle being able to control itself. Other functional design structures include those
developed by Badue et al. (2021), Guanetti et al. (2018) and Sell et al. (2018).
The study conducted by Guanetti et al. (2018) highlighted that decision-making and motion planning of
Connected and Automated Vehicles (CAV) generated a reference trajectory for longitudinal and lateral
motion. Therefore, the trajectory is expected to follow traffic rules, be feasible for lower-level controllers,
and be comfortable for the passengers. Furthermore, it should be capable of accurately following high-
level directions (Paden et al., 2016; Guanetti et al., 2018). The ultimate goal in decision-making is for the autonomous vehicle to move from a given point A to another point B without accidents. However, this has proven difficult to achieve. Consequently, problem formulation related to decision-making had to be conducted to minimize the number of hazardous situations and ultimately reduce the number of accidents (Paden et al., 2016; Guanetti et al., 2018).
Sensors play one of the most important roles in the performance of an AV. They provide data for the
perception system to make the vehicle act or react as expected. Therefore, sensors' performance must be
very accurate to avoid mistakes that would injure passengers, pedestrians, and the environment (Yan et
al., 2016; Vargas et al., 2021; Yeong et al., 2021). Furthermore, the type and placement of sensors that
allow an autonomous vehicle to perceive its surroundings are crucial to the performance of an AV. With
these sensor placements, the vehicle is expected to provide its best performance (Vargas et al., 2021).
An AV’s performance depends on several aspects embedded in the vehicle. In terms of sensor functionality, a functional error that leads to an incorrect vehicle decision due to incorrect sensor readings poses a significant risk. Such an error could cause a fatal accident, for example, if the autonomous vehicle classifies a moving pedestrian as stationary and therefore fails to stop when it should. The sensors (and especially the quality of data obtained through sensor fusion) therefore play a very important role in AV performance. However, there are other very important aspects, such as the functional design structure, that analyze the data collected by the sensors. The performance of AVs thus proves to be a complex aspect to measure, which is why it is crucial to assess how capable they are of obeying traffic rules.
According to Yeong et al. (2021), sensors are tools that translate environmental events or changes into
quantitative measurements that can be processed further. Typically, sensors are divided into two kinds
based on their core principles of operation. Firstly, proprioceptive sensors, also known as internal state sensors, record a dynamic system’s state and internal values. These include encoders, inertial measurement units, inertial sensors (magnetometers and gyroscopes), and location sensors such as Global Navigation Satellite System (GNSS) receivers, including the Global Positioning System (GPS). Secondly, exteroceptive sensors, also known as external state sensors, sense and gather data such as distance measurements or light intensity from the system’s surroundings. These include cameras, ultrasonic sensors, Radio Detection and Ranging (RADAR), and LIDAR onboard the AV. The sensing of the environment,
tracking, and localization of the AVs for trajectory planning and decision-making is a prerequisite for
directing the navigation of the vehicle.
Six different driving automation levels were presented by Singh and Saini (2021). The six different levels
of driving automation are grouped into two main categories: human drivers and automated driving
systems. The descriptions of the different levels are further described as follows:
Level 0: There is no automation of any sort, the driver performs all the tasks.
Level 1: There are at least stand-alone vehicle components, such as automated braking; here, the driver
assists in many operations.
Level 2: There is partial automation such that the vehicle is capable of steering and accelerating by
itself to keep the vehicle accurately in the lane(s) and adaptively moving around other vehicles.
However, the human driver should always be there to monitor the operation.
Level 3: There is conditional automation such that the human driver can take total control in certain
complex situations; that is, the vehicle can drive itself in less complex situations until there is a need for
human intervention.
Level 4: There is high automation control in the vehicle, such that it can perform all needed driving
functions by itself. Such vehicles might provide options for human intervention or might not provide it.
Level 5: There is full automation such that the vehicle can perform all driving functionalities in any
given situation (complex or easy) and condition.
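For clarity, the six levels can be captured compactly as a data structure. The sketch below is illustrative only; the level names and the monitoring flag follow the descriptions above and are not an official SAE artifact.

```python
# A compact, illustrative encoding of the six driving automation levels described above.
# The "human monitoring" helper follows the level descriptions in the text; this is a
# sketch for clarity, not an official SAE definition.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0          # driver performs all tasks
    DRIVER_ASSISTANCE = 1      # stand-alone aids, e.g., automated braking
    PARTIAL_AUTOMATION = 2     # steering/acceleration automated, driver monitors
    CONDITIONAL_AUTOMATION = 3 # self-drives until human intervention is needed
    HIGH_AUTOMATION = 4        # performs all driving functions; intervention optional
    FULL_AUTOMATION = 5        # all driving functions in any situation and condition

def human_monitoring_required(level: AutomationLevel) -> bool:
    """Levels 0-2 require continuous human monitoring; level 3 only requires availability."""
    return level <= AutomationLevel.PARTIAL_AUTOMATION

print(human_monitoring_required(AutomationLevel.HIGH_AUTOMATION))  # False
```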
According to Denoël (2007), the reliability index of an AV is calculated as the difference between the
mean failure condition and its standard deviation. Numerous analyses used to assess and enhance the
quality of goods, services, and systems are together referred to as reliability analysis. Although the term
reliability can refer to a product's or a system's overall performance in a generic sense, in engineering disciplines it is a particular measure that can be quantitatively evaluated.
According to Uzair (2021), early accidents have been recorded for AVs. There have been more than 30
accidents recorded since 2014, and about five AV passengers and pedestrians were killed by the AVs.
This raised concerns among the public as an outstanding reliability challenge associated with AVs.
However, these fatal accidents buttress the need for AV reliability studies for the ultimate reduction in
accidents. All the decisions made by a full AV are directly based on the data gathered by the sensors and
analyzed. Therefore, the sensors must function as expected to avoid obvious disasters (accidents). If any of the sensors in an AV fail or provide noisy or unclear data, this constitutes a significant problem.
Unfortunately, the sensors gather dirty data when there is a bad or abnormal weather condition (such as
snow, heavy rain, etc.); this can occur in any sensor category (self-sensing, localization, and surrounding
sensing). Human drivers also experience similar problems when bad or abnormal weather conditions
occur (Ma et al., 2020). In their assessment of the resilience of AV onboard sensors, Yan et al. (2016) examined some of the sensors by analyzing jamming and spoofing attacks on their physical channels. In the jamming attack, the sensors are made to withstand the environmental noise that occurs during typical working circumstances. In the spoofing attack, when sensors are positioned incorrectly, it is possible for them to receive real physical signals from the incorrect source. Consequently, the discussion provided hardware and software countermeasures that may strengthen sensor resilience against such attacks.
Even though AVs aim to be accident-free in their navigation, this goal is still unrealistic. Accountability for accidents has therefore remained a major issue for AVs, and the integration of a legal framework and regulations is one of the most important requirements for autonomous vehicle deployment (Singh and Saini, 2021). The main question in this situation is who should be held liable for either fatal or
nonfatal accidents. This question does not have a straightforward answer. According to Borenstein et al. (2019), if an accident involves an AV, it does not make sense to hold the technology itself responsible; rather, the designers, car dealer(s), manufacturers, and other identifiable parties may be at fault. This claim supports Mackie (2018), who stated that human drivers should remain liable for an accident depending on the level of automation installed in the vehicle and the degree to which the human had control over the events that led to the accident. If the automation is at level 4 or 5 (highly or fully automated), the plaintiffs are responsible for identifying who should be held accountable, which could be the manufacturers, maintainers, or others who contributed to the AV's development.
Despite having six different levels of automation for AVs according to Casado-Herráez (2020) and Singh
and Saini (2021), this paper has focused on the top two levels, i.e., levels 4 and 5, of the automation
hierarchical order of intelligence. Levels 4 and 5 possess the latest technological capabilities (i.e., the
ability to self-drive from point A to B), including the technologies found in the lower levels (i.e., the
ability to self-park). Therefore, the need to analyze the sensors that bring about the automation of the
vehicles was conducted. Considering levels 4 and 5, it is crucial to look at how such systems (in terms of
the functional capabilities that speak to intelligence) are designed. With a focus on how levels 4 and 5
AVs are designed, the needs analysis theory of systems engineering, was adapted to explore the
conceptual design of systems with a specific focus on functional capabilities. In a nutshell, this study aims
to objectively address the reliability of the intelligence of the AV with respect to traffic rules during
navigation. Given the research aim, the outlined objectives include the delineation of the functional
capabilities of AVs with respect to the intelligence of an AV, the modeling and analysis of the reliability
of the intelligence of autonomous vehicles with respect to identified traffic rules, and the identification
and analysis of the inherent complexity drivers that cause unreliability in AVs. Consequently, the
reliability of the intelligence of AVs was conducted with a focus on some of the available vehicle brands
in the AV industry. The analysis of these brands is provided in subsequent sections. Even though two
types of K53 traffic rules were described by Hoole (2013), there are three types of traffic rules. The third
type focuses on knowing the controls of a vehicle, and an AV is assumed to know what controls it has and
how to use them.
The advancement of AVs provides good news for the smart automotive industry; however, there are some
potential impacts on factors such as employment, privacy, equity, etc. Considering the employment
factor, the automotive industry might need more employees that understand the development of
autonomous vehicles; in this case, the employment rate might increase. When considering employment
related to driving, the employment rate might drop significantly. This is because most AVs are developed
to provide services like that of Uber’s (transporting passengers). It seems the technology will be applied
to trucks as well; therefore, Uber drivers (or drivers with similar services) and truck drivers will be
replaced by AVs. While the employment outlook thus seems to lean negative, there is also the issue of privacy. Although services provided by companies such as Uber already require passengers to have existing profiles, AVs in addition have several cameras watching the passengers, and payment may have to be made online at all times (a lack of flexibility). Some individuals might prefer not to have their credit or debit card details linked to online profiles, but that is not possible with AVs since they are void of human
drivers to assist with cash transactions at the end of a trip. Nevertheless, the highlighted socio-centric
factors are not considered to have any impact on the reliability of AVs based on the approach adopted in
this paper for the computation of an AV's reliability. However, these factors only highlight the possibility
of new policies being introduced to the social system of human endeavor to keep a good balance in the
economy.
In light of the identified research gap, this paper has focused on addressing the reliability analysis of AVs
by first exploring some functional attributes deemed significant for traffic rule adherence. The functional
attributes or capabilities of an AV are the basis for the display of its intelligent behavior. The
identification process of the functional attributes was facilitated by first creating a graphic scenario that
depicts a typical outdoor road navigation situation, as presented in Figure 1 of the following section.
Hence, the identification and analysis of the functional attributes of AVs were addressed prior to
conducting a reliability analysis of AVs with respect to road traffic rules. The motivation for this study is premised on the scarcity of literature in this problem domain. This study aims to objectively examine the reliability of the intelligence of AVs amidst the inter- and intra-complexities associated with autonomous ground vehicle navigation. The associated complexities are orchestrated by
the diversity of navigation requirements on the road, intelligent interactions (inter- and intra-interaction),
and a need for swift decision-making.
The remainder of the paper is organized as follows. Section 2 discusses and analyzes the functional attributes and physical embodiment identification of AVs. Section 3 presents the reliability analysis of an AV premised on traffic rules, while the concluding remarks and future work are presented in Section 4.
2. Functional Attributes and Physical Embodiment Identification and Analysis
This section discusses the research approach deployed to address the identified problem. It begins with the needs and requirements analysis, the core phase of systems engineering conceptual design from which functions originate, and continues with the identification and enumeration of functions, the corresponding physical embodiments, and the fusion of these embodiments.
2.1 Needs and Requirement Analysis Concept
In a bid to address the reliability of AVs, the functional capabilities of AVs were first itemized and
analyzed. This section presents an exploration of the systems engineering approach for functional element
identification and analysis premised on the life cycle of a system. The life cycle of a system has four
phases, and the first phase, which is the needs analysis phase, is concerned with the formulation of
functional attributes and capabilities of a system. Kossiakoff et al. (2020) described how systems
engineering theory and method can be applied to needs and requirements analysis. Needs and
requirements analysis is the first phase in the origination of a new system that is either driven by
technological opportunity or identified by new societal needs. This study is focused on addressing a
problem driven by technological opportunities, i.e., AV born out of the emergence of Artificial
Intelligence (AI). The needs and requirements analysis has two inputs, viz., operational deficiencies and
technological opportunities, resulting in two corresponding outputs, viz., the system’s operational
effectiveness and system capabilities. One of these outputs, i.e., system capabilities, was explored to provide functional capabilities for AVs in this research.
The needs and requirements analysis phase has four activities that should be considered during its
execution (Kossiakoff et al., 2020); these activities are briefly discussed as follows.
Operations Analysis: This activity is also known as requirement analysis in the needs analysis phase.
It involves the identification of both the operational objectives and the system’s capabilities.
Functional Analysis: This activity is also known as functional definition. Herein, the operational
objectives are translated into functions, while the functions are categorized and allotted to subsystems.
The results of this activity include a listing of functional requirements for the system under
investigation. The functions allotted to subsystems are subsequently assigned to physical components.
Feasibility Definition: This activity is also known as physical definition. Herein, the physical nature
of the subsystems is visualized to check if they can perform the required functions. Furthermore, a
feasibility concept is defined with consideration given to the costs and capabilities of the
system/subsystem/component. This activity’s results are a list of initial physical embodiments that can
facilitate the actualization of the identified and listed system functions.
Needs Validation: This activity is also known as design validation. It is concerned with the setting up
of a model or some form of a validation criterion capable of checking the validity of the suggested
solution(s) and the relationship between the objectives and functions, functions and sub-systems,
functions and physical elements, etc.
The above-stated activity-centric steps have been utilized to create the operational objectives and system
functional capabilities and identify the corresponding physical embodiments for an AV. Furthermore, three research questions were formulated herein to guide the function formation exercise and keep it within the desired scope. The questions include:
Research question 1: What are the functional capabilities of AVs that are related to the intelligence of
an AV?
Research question 2: What are the physical embodiments required to facilitate the actualization of these
functional attributes?
Research question 3: What is the degree of success recorded by an AV while exhibiting driving
intelligence on a busy road via its functional attributes?
2.2 Autonomous Vehicle’s Functional and Physical Requirements
Functional requirements, also known as functional capabilities, refer to attributes a system should exhibit
based on the tasks or activities it is expected to perform during its operations (Kossiakoff et al., 2020).
Matthaei and Maurer (2015) conducted a study to present a functional system architecture for an
autonomous vehicle. The study was developed in a top-down approach based on the functional
requirements of autonomous vehicles, and these requirements are described as follows:
Operating: The vehicle needs instructions (these refer to the mission of the vehicle), and usually,
human beings write out these instructions.
Mission accomplishment: Now that the mission has been described, the vehicle should be able to
accomplish the mission or the instructions, which include behavior, navigation, and control of the
actuators.
Map data: This data is required for route planning.
Localization: The vehicle should know its location or position on a global scale for purposes such as map data usage, navigation, and vehicle-to-vehicle or vehicle-to-infrastructure communication.
Environmental perception: The vehicle should know its environment, whether stationary or moving, and it is expected to know the dynamics of the movable elements.
Cooperation: The vehicle is expected to respond as required in such a way that it reacts accordingly
based on other traffic participants. The vehicle should also communicate its intentions to those other
traffic participants.
Safety: The vehicle is expected to cause no harm or danger to its environment.
Self-perception: The vehicle is expected to always know its state in terms of its motion and functional capabilities.
In light of the above, and based on the architectural principles premised on the structure of systems as depicted herein, {System => Sub-system(1), Sub-system(2), …, Sub-system(n-1), Sub-system(n) => Component(1), Component(2), …, Component(n-1), Component(n) => Sub-component(1), Sub-component(2), …, Sub-component(n-1), Sub-component(n) => Part(1), Part(2), …, Part(n-1), Part(n)}, as anchored in the systems engineering approach, the physical embodiment responsible for the actualization
of the functions would form the next item of discussion. The physical embodiments with functional
capability are often seated at the component and/or sub-component levels, while the functions are seated
at the sub-systemic level.
Furthermore, it should be noted that the hardware or physical embodiments responsible for the display of
intelligence in AVs are mostly sensors with diverse sensing capabilities. These often vary from
proximity to ranging sensors. Some commonly used sensors on AVs include the LIDAR sensor, RADAR
sensor, camera or vision sensor, and ultrasonic sensors. The need to integrate two or more of these
sensing devices, also known as sensor fusion, will be discussed subsequently. The system capabilities (the
capacity of a system to carry out a specific action or produce a desired result under a specific set of
circumstances or conditions) of an AV refer to the functional capabilities. These capabilities are identified
when systems engineering theory is applied. The identified functional capabilities are presented below.
Functions are often represented using action phrases.
(i) Ability to combine a range of sensors, including the Global Positioning System (GPS), Odometer,
Radar, LIDAR, Sound Navigation Ranging (SONAR), thermographic cameras, and inertial
measurement units, to sense their environment. This functional capability is aimed at effectively
gathering data.
(ii) Ability to control systems and analyze sensory data to determine the best routes to take, as well as
barriers and essential signage, in a more advanced manner.
(iii) Ability to detect lanes using a camera system to read the markings on the road and keep the vehicle
within its right (or safe) lane.
(iv) Ability to make safe decisions based on how other vehicles surrounding the AV are behaving, using a vehicle-to-vehicle communication technique whereby the AV must be aware of the position, velocity, and trajectory of any nearby vehicles.
(v) Ability to use a decision-making system built into it (the AV) to make informed decisions, such as
reacting when other vehicles behave abnormally, to prevent accidents.
To provide more perspective on the circumstances or conditions AVs are expected to adhere to or would
likely be exposed to while on the road, two scenarios were created using Any Logic software, as shown in
Figure 1. These representative road scenarios present a mixed driving scenario from other road users,
covering good and bad road usage.
Figure 1. Two scenarios depicting general road signs, signals, and hazards AVs would interact with on the road.
Considering Figure 1, scenario A represents a dual carriageway whereby vehicles on a particular road
carriage only move in one direction. For example, the vehicles on the bottom road (in Scenario A) are
expected to navigate from the right end of the sketch to the left, and the vehicles on the top road are
expected to only navigate from the left end of the sketch towards the right direction. Furthermore,
scenario B represents a single carriageway. A single carriageway is a road with one, two, or more lanes and no central reservation to divide opposing traffic flows other than the road line markings.
In scenario A, the AV is expected to stay in its lane before notifying other road users that it intends to
change lanes; hence, doing that intelligently is a requirement to avoid accidents. Furthermore, the AV
should be able to read traffic light warning signs, drive within the speed limit, not mistake the pedestrian zebra crossing, billboards, or even roadside trees for something they are not, and also obey the traffic light rules as prompted. In this scenario, the AV does not have to worry about other vehicles that move in
the opposite direction. Regarding Scenario B, the situation could get trickier as the AV is expected to
watch out for vehicles that move in the opposite direction. If, for example, the black vehicle close to the
AV (the AV in red) decides to turn left for some odd reason, the AV is expected to react accordingly.
However, the AV in Scenario B should never cross the white solid line unless it has lost control. In
essence, AVs are expected to obey traffic rules; hence, 30 traffic rules were extracted from the created
scenario in Figure 1 in conjunction with the information obtained (Hoole, 2013). There are two types of
traffic rules. Firstly, there are the road signs, signals, and markings (Table 1), and secondly, there are the
rules of the road (Table 2).
Table 1. Road signs, signals, and markings rules utilized to assess the reliability of AVs.
Road signs, signals, and markings: the purpose is to safely regulate traffic flow, warn drivers or motorists of the circumstances on the road ahead, provide useful and necessary information, and provide guidance on routes and destinations.
Rule 1: Regulatory signs—must obey.
Rule 2: Traffic signals—must obey.
Rule 3: Warning signs—must heed to avoid potential danger.
Rule 4: Hazard marker plates—must heed to avoid potential danger.
Rule 5: Information signs—must understand to react appropriately.
Rule 6: Guidance signs—must be built into the AV, for instance, using a Global Positioning System (GPS).
Rule 7: Tourism signs—not important, simply because AVs must have a built-in GPS which they can use to navigate to a desired tourist destination.
Rule 8: Diagrammatic signs—must heed to select an appropriate lane.
Rule 9: Road surface markings—must obey.
Rule 10: Hand signals—must obey if from a traffic officer and must heed if from other motorists.
Table 2. Rules of the road utilized to assess the reliability of AVs.
Rules of the road: the purpose of the rules of the road is to control traffic, provide safety, and safeguard everyone’s right to use the road. Speed restrictions, lane discipline, parking, and lighting all have regulations that must be adhered to. Adhering to the following traffic rules will significantly lower the likelihood of roadway accidents, injuries, and fatalities.
Rule 1: The vehicle must drive on the correct side (left or right) of a two-way road.
Rule 2: The vehicle must travel on the right or left side of a one-way road if it is safe.
Rule 3: The vehicle must obey a traffic officer’s instructions over the rules of the road and road signs.
Rule 4: The vehicle must keep a following distance that is appropriate and prudent, considering the speed of the vehicle being followed, the amount of traffic, and the state of the road.
Rule 5: Speed limit (in km per hour) of 60, 100, and 120 when the vehicle is in an urban area, outside an urban area, and on a freeway, respectively.
Rule 6: The vehicle should not cross over a solid driving-line marking (yellow or white).
Rule 7: The vehicle should move over to the left lane and not accelerate when overtaken.
Rule 8: The vehicle should always signal its intentions in time before executing them, and it should execute them only when it is safe to do so.
Rule 9: The vehicle should not stop on the road unless an accident had to be avoided, a traffic officer or road sign(s) instructed it, or it was caused by an unavoidable cause (such as mechanical problems).
Rule 10: At a roundabout or mini circle, the vehicle must give way to other vehicles that approach from the right (the other vehicle(s) should already be approaching from the right or have stopped at the yield sign first). The vehicle should also know when to yield at other intersections (such as four-way, three-way, etc.).
Rule 11: The vehicle may not enter a traffic lane or cross it if doing so is likely to cause a dangerous situation or disrupt traffic flow.
Rule 12: The vehicle should not turn if doing so will obstruct or endanger other traffic. Therefore, before turning, the vehicle must move to the right lane, indicate its intentions, and turn when it is safe to do so.
Rule 13: The vehicle should never park on the sidewalk or the verge. Therefore, it should park within a designated parking space.
Rule 14: The vehicle should always give way to emergency vehicles, rescue vehicles, traffic officers’ vehicles, etc. when they signal with a siren.
Rule 15: The vehicle must stop for pedestrians on, or about to enter, a pedestrian crossing on its side of the road, or if it is involved in an accident.
Rule 16: The vehicle must use the hooter for safety reasons only, and the hooter must be audible over a distance of at least 90 meters. Furthermore, the tone of the pitch should not vary for any reason.
Rule 17: The vehicle must have white headlights; they should be switched on between sunset and sunrise and whenever clear visibility is not possible at 150 meters or more.
Rule 18: The vehicle may not drive in a way that endangers the lives of other drivers or pedestrians (the vehicle will always be liable if it hits a pedestrian regardless of who had the right of way) or damages any property.
Rule 19: The vehicle should ensure that the passenger(s) fasten their seatbelts before it starts moving.
Rule 20: The vehicle should stop immediately after an accident. If someone dies or gets injured, the vehicle should not move without a traffic officer’s authorization.
Given the scenarios in Figure 1, and the traffic rules created in Table 1 and Table 2, the functional
elements of AVs, their corresponding physical embodiments, and the targeted traffic rules were identified
and matched as shown in Table 3. As earlier mentioned, functions are capabilities that AVs must possess
to facilitate their intelligent adherence to road traffic rules.
Table 3. The functional elements of AVs matched with their physical embodiments and traffic rules.

Functional elements | Physical embodiments | Targeted traffic rules
Visualization of road signs | Camera sensor | Table 1: Rule 1, Rule 3, Rule 4, Rule 5, and Rule 8
Object detection, such as other vehicles, pedestrians, etc. | LIDAR, RADAR, and camera sensors | Table 2: Rule 15
Visualization of hand signals by an officer | Camera sensor | Table 1: Rule 10; Table 2: Rule 3
Visualization of road surface markings | Camera sensor | Table 1: Rule 9, Rule 6; Table 2: Rule 7, Rule 11
Visualization of objects in 360 degrees | LIDAR sensor | Table 2: Rule 18
Speed detection of other vehicles | RADAR and camera sensors | Table 2: Rule 4, Rule 11
Distance detection between the AV and other vehicle(s) | RADAR, LIDAR, and camera sensors | Table 2: Rule 11
Distance detection between the AV and pedestrian(s) | RADAR, LIDAR, and camera sensors | Table 2: Rule 15
Lane detection | Camera sensor | Table 2: Rule 1, Rule 2
Emergency stop due to dangerous or potentially dangerous situations | Ultrasonic sensor | Table 2: Rule 15 and Rule 20
Interpretation of road signs, signals, and markings | Camera sensor | Table 1: all rules
Object classification | Camera and LIDAR sensors | Table 2: Rule 11, Rule 14
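To make the mapping in Table 3 concrete, the illustrative Python structure below encodes each functional element together with the sensors that realize it and the traffic rules it targets. Rule identifiers such as "T1-R1" are shorthand introduced here for Table 1, Rule 1; they are not notation used elsewhere in the paper.

```python
# An illustrative data structure mirroring Table 3: each functional element is mapped
# to the sensors (physical embodiments) that realize it and the traffic rules it targets.
# "T1"/"T2" refer to Table 1 and Table 2 of this paper; the structure itself is a sketch.
FUNCTION_TO_EMBODIMENT = {
    "visualize road signs":             {"sensors": ["camera"],                   "rules": ["T1-R1", "T1-R3", "T1-R4", "T1-R5", "T1-R8"]},
    "detect objects":                   {"sensors": ["lidar", "radar", "camera"], "rules": ["T2-R15"]},
    "visualize officer hand signals":   {"sensors": ["camera"],                   "rules": ["T1-R10", "T2-R3"]},
    "visualize road surface markings":  {"sensors": ["camera"],                   "rules": ["T1-R9", "T1-R6", "T2-R7", "T2-R11"]},
    "visualize objects in 360 degrees": {"sensors": ["lidar"],                    "rules": ["T2-R18"]},
    "detect speed of other vehicles":   {"sensors": ["radar", "camera"],          "rules": ["T2-R4", "T2-R11"]},
    "detect distance to vehicles":      {"sensors": ["radar", "lidar", "camera"], "rules": ["T2-R11"]},
    "detect distance to pedestrians":   {"sensors": ["radar", "lidar", "camera"], "rules": ["T2-R15"]},
    "detect lanes":                     {"sensors": ["camera"],                   "rules": ["T2-R1", "T2-R2"]},
    "emergency stop":                   {"sensors": ["ultrasonic"],               "rules": ["T2-R15", "T2-R20"]},
    "interpret signs/signals/markings": {"sensors": ["camera"],                   "rules": ["T1-all"]},
    "classify objects":                 {"sensors": ["camera", "lidar"],          "rules": ["T2-R11", "T2-R14"]},
}

# Example query: which functional elements depend on the camera sensor?
camera_dependent = [f for f, m in FUNCTION_TO_EMBODIMENT.items() if "camera" in m["sensors"]]
print(len(camera_dependent), "of", len(FUNCTION_TO_EMBODIMENT), "functional elements rely on the camera")
```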
2.3 Sensor Fusion Analysis
With the functional requirements outlined, the physical requirements of AVs can also be provided in the form of a feasible functional structure. However, a sensor fusion analysis had to be conducted first to redesign a feasible structure. When considering the nature of AVs, more specifically what makes them autonomous, the primary aspect is the components that gather data, i.e., the sensors. Consequently, the analysis of sensors was utilized to reconfigure the raw data layer. The sensors of the vehicles are chosen
in such a way that optimal performance is achieved. Therefore, sensors that were specified by Yeong et
al. (2021) and Ignatious and Khan (2022) were used to select the best three for camera and LIDAR, and
the best two for RADAR (Tables 4, 5, and 6).
Table 4. The top three best-performing LIDAR sensors in terms of vertical Field-of-View (FOV), horizontal FOV, and range.

Vertical FOV (°) | Horizontal FOV (°) | Range (m)
40 | 360 | 245
40 | 360 | 200
40 | 360 | 200
Table 5. The top three best-performing camera sensors in terms of lens baseline (which provides the optimal view range), range, and lens resolution.

Camera sensor | Baseline (mm) | Range (m) | Resolution (MP)
Intel D415 | 55 | 10 | 3
RealSense D435 | 50 | 10 | 3
Framos D435e | 55 | 0.2 to 10 | 2
Table 6. The top two best-performing RADAR sensors in terms of overall frequency.

RADAR sensor | Overall frequency (GHz)
Smartmicro UMRR-96 T-153 | 79 (usually within 77 to 81)
Continental ARS 408-21 | 76 to 77
Additionally, the fusion of sensors was analyzed to identify the best fusion option for the LIDAR, camera,
and RADAR sensors. Vargas et al. (2021) and Yeong et al. (2021) have already provided an analysis of
these sensors. However, this study also analyzed the fusion of fewer sensors (a fusion of only two sensors). This was necessary since these sensors are expensive (especially the LIDAR). According to the Neuvition website, a Velodyne 64-line LIDAR costs $80,000 (about R1.5 million). The Smartmicro RADAR sensor costs £2,725.00 to £2,995.00 (about R55,712.92 to R61,233.10) according to the Level Five Suppliers website. The Continental ARS 408 costs between R729.06 and R13,155.87 according to the AliExpress website. Finally, the Intel D415 costs $317.95 (about R5,750.39), the RealSense D435 costs $317.50 (about R5,742.25), and the Framos D435e costs €945.10 (about R16,861.39) according to the SparkFun, B&H Photo Video, and Mouser Electronics websites.
Table 7. The comparison of AV sensor fusion based on the comparison provided by Yeong et al. (2021).

Factors | Camera | LIDAR | RADAR | 2-Fusion | 3-Fusion
Range | 0.5 | 0.5 | 1 | 1.5 | 2
Resolution | 1 | 0.5 | 0 | 1 | 1.5
Distance Accuracy | 0.5 | 1 | 1 | 1.5 | 2.5
Velocity | 0.5 | 0 | 1 | 1.5 | 1.5
Color Perception (traffic lights etc.) | 1 | 0 | 0 | 1 | 1
Object Detection | 0.5 | 1 | 1 | 1.5 | 1.5
Object Classification | 1 | 0.5 | 0 | 1 | 1.5
Lane Detection | 1 | 0 | 0 | 1 | 1
Object Edge Detection | 1 | 1 | 0 | 1 | 2
Illumination Conditions | 0 | 1 | 1 | 1 | 2
Weather Conditions | 0 | 0.5 | 1 | 1 | 1.5
Total | | | | 13 | 19
Good fusion (≥ 11)? | | | | Yes | Yes
Therefore, the analysis of two sensor fusions (camera and RADAR) and three sensor fusions (camera,
LIDAR, and RADAR) was conducted, as seen in Tables 7 and 8. The goal was to check if the two-sensor
fusion would meet the minimum requirement of fusing all factors or features of each sensor so that they
produce optimal results. Furthermore, the comparison rates (0, 0.5, 1) in Table 7 were described as
follows.
0: The sensor does not operate well in respect of the specified functional attribute.
0.5: Sensor performs reasonably well in respect of the specified functional attribute.
1: Sensor operates perfectly well in respect of the mentioned functional attribute.
It can be noted that the last row in Table 7 assesses whether the fusion of two or three sensors is good or not. A criterion of ≥ 11 was used on the grounds that 11 factors were assessed and all values in the 2-Fusion and 3-Fusion columns are ≥ 1. Furthermore, for the comparison seen in Table 8, the comparison rates (0, 0.25, 0.5, 1) are described as follows:
Poor, Yes: 0
Average: 0.25
Good, 200m: 0.5
Very good, No, 250m: 1.
Table 8. The comparison of AV sensor fusion based on the comparison provided by Yeong et al. (2021).

Factors | Camera | LIDAR | RADAR | 2-Fusion | 3-Fusion
Range | 0.5 | 0.5 | 1 | 1.5 | 2
Resolution | 1 | 0.5 | 0.25 | 1.25 | 1.75
Affected by weather conditions | 0 | 0 | 0 | 0 | 0
Affected by lighting conditions | 0 | 1 | 1 | 1 | 2
Detects speed | 0 | 0.5 | 1 | 1 | 1.5
Detects distance | 0 | 0.5 | 1 | 1 | 1.5
Interference susceptibility | 1 | 0.5 | 0 | 1 | 1.5
Total | | | | 6.75 | 10.25
Good fusion (≥ 6)? | | | | Yes | Yes
The criteria used in the last row of Table 8 are the same as those used in Table 7. However, a threshold of 6 was used instead of 7 (even though seven factors were assessed) because the third factor consists entirely of zeros; hence, both sensor fusions will always score zero on that factor. With the sensors analyzed, a feasible functional design structure is in place.
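The additive scoring behind Tables 7 and 8 can be reproduced with a few lines of code. The sketch below uses the Table 7 ratings and recovers the reported column totals (13 for the two-sensor fusion and 19 for the three-sensor fusion); the helper function itself is an illustration and not the authors' software.

```python
# A minimal sketch of the additive sensor-fusion scoring used in Tables 7 and 8:
# per-factor ratings for each sensor are summed to give the 2-fusion (camera + RADAR)
# and 3-fusion (camera + LIDAR + RADAR) scores, which are compared with a threshold.
TABLE7 = {
    # factor: (camera, lidar, radar)
    "range":                 (0.5, 0.5, 1.0),
    "resolution":            (1.0, 0.5, 0.0),
    "distance accuracy":     (0.5, 1.0, 1.0),
    "velocity":              (0.5, 0.0, 1.0),
    "color perception":      (1.0, 0.0, 0.0),
    "object detection":      (0.5, 1.0, 1.0),
    "object classification": (1.0, 0.5, 0.0),
    "lane detection":        (1.0, 0.0, 0.0),
    "object edge detection": (1.0, 1.0, 0.0),
    "illumination":          (0.0, 1.0, 1.0),
    "weather":               (0.0, 0.5, 1.0),
}

def fusion_scores(ratings):
    two = sum(cam + rad for cam, _, rad in ratings.values())     # camera + RADAR
    three = sum(sum(triple) for triple in ratings.values())      # camera + LIDAR + RADAR
    return two, three

two, three = fusion_scores(TABLE7)
threshold = 11  # one point per factor assessed, as used in Table 7
print(f"2-fusion = {two}, 3-fusion = {three}, "
      f"good fusion: {two >= threshold}, {three >= threshold}")
```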
3. Reliability Analysis of AVs and Traffic Rules (Contribution)
This section presents the reliability analysis of the functional embodiments associated with the exhibition
of intelligent functional capabilities in AVs. The embodiments considered in this paper are predominantly
sensor-based. The reliability analysis and assessment were with respect to AVs obeying traffic rules and
road signs while in transit.
3.1 Data Gathering and Reliability Analysis
The performance assessment herein focuses on the operational objectives. Therefore, if an AV obeys the traffic rules, it can be said to meet the operational objectives discussed in the operations analysis phase. For example, suppose an AV can transport a passenger from point A to B without any
harm to anyone or anything (successfully protecting both its passenger(s) and its external environment)
and does that consistently, it can be deduced that it obeyed all the traffic rules. These kinds of measures
will allow AVs to be commercialized in cities so that they provide the required services. Further, this
provides the opportunity for cities to have more advanced vehicles and move closer to a smart city era,
depending on the technological state of that city.
The videos in Table 9 were selected from top AV companies, while the traffic rules earlier provided in
Tables 1 and 2 were analyzed with respect to the videos to create Tables 10 and 11 for the assessment of
the reliability of the AVs. Furthermore, these rules can also be used for AVs that are not yet manufactured
(i.e., assessment based on these rules can be deployed via simulation on an ongoing AV design process or
experimental prototypes). With these rules in place, the analysis indicates which rule(s) were passed,
failed, or not tested during the validation process of the AVs. The AV results presented and analyzed here
came from different companies that made their vehicles available for public testing. The tests were
recorded and made available online in video clips. The links to the videos are provided in Table 9. The
video links can be opened with a simple click.
Table 9. Video links related to AVs of different companies.

Vehicle brand | Video links
Tesla models | Video Test 1; Video Test 2; Video Test 3; Video Test 4; Video Test 5; Video Test 6; Video Test 7
Deeproute | Video Test 1
Cruise | Video Test 1
Waymo | Video Test 1; Video Test 2; Video Test 3; Video Test 4
AutoX | Video Test 1; Video Test 2; Video Test 3; Video Test 4
Pony AI | Video Test 1; Video Test 2; Video Test 3; Video Test 4
Yandex | Video Test 1; Video Test 2; Video Test 3
3.1.1 Reliability Analysis
The purpose of this section is to provide the reliability of AVs with respect to obeying traffic rules by conducting a reliability analysis using reliability engineering and statistics theories. The rules outlined in Tables 1 and 2 were used to assess the AVs seen in the videos provided in Table 9, first obtaining the time-to-failure data prior to addressing the AVs' reliability. It was necessary to record the time stamps at which the AVs failed to adhere to the traffic rules so that statistical analysis could be applied to the time data. The first step was to assess which rule was obeyed, disobeyed, or not tested, taking into consideration which AV company the test relates to. Furthermore, seven AV companies were identified (Tesla, AutoX, Waymo, Deeproute, Yandex, Pony AI, and Cruise) and analyzed, and the outcomes can be seen in Tables 10 and 11.
Table 10. The reliability analysis of AVs with respect to traffic rules, part A (road signs, signals, and markings rules).

Road signs, signals, and markings rule | Tesla Model 3 | AutoX (robotaxi) | Waymo (by Google) | Deeproute | Yandex | Pony AI | Cruise
Rule 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Rule 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Rule 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Rule 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Rule 5 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Rule 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Rule 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Rule 8 | 0 | 1 | 1 | 1 | 1 | 1 | 1
Rule 9 | 0 | 1 | 1 | 1 | 1 | 1 | 1
Rule 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Passed | 4 | 6 | 6 | 6 | 5 | 6 | 6
Table 10 presents the analysis outcome of the performance of the AVs from all seven AV companies with respect to the road signs, signals, and markings rule type. Table 11 presents the analysis outcome of the performance of the AVs from all seven AV companies with respect to the rules-of-the-road rule type. The entries in Tables 10 and 11 (shown in the original as green ones, red zeros, and black zeros) are interpreted as follows.
1: Rule tested and passed.
0 (red in the original): Rule tested and failed.
0 (black in the original): Rule not tested.
Table 11. The reliability analysis of AVs with respect to traffic rules, part B (rules of the road).

Rules of the road | Tesla Model 3 | AutoX (robotaxi) | Waymo (by Google) | Deeproute | Yandex | Pony AI | Cruise
Rule 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Rule 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Rule 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Rule 4 | 1 | 1 | 1 | 0 | 1 | 1 | 1
Rule 5 | 0 | 1 | 1 | 0 | 0 | 0 | 0
Rule 6 | 0 | 1 | 1 | 1 | 0 | 1 | 1
Rule 7 | 0 | 0 | 0 | 1 | 0 | 1 | 1
Rule 8 | 1 | 1 | 1 | 0 | 1 | 1 | 1
Rule 9 | 0 | 1 | 1 | 1 | 0 | 1 | 1
Rule 10 | 0 | 1 | 1 | 1 | 0 | 1 | 1
Rule 11 | 0 | 1 | 1 | 1 | 1 | 1 | 1
Rule 12 | 0 | 1 | 1 | 1 | 1 | 1 | 1
Rule 13 | 1 | 0 | 0 | 0 | 0 | 0 | 0
Rule 14 | 0 | 0 | 1 | 0 | 0 | 0 | 0
Rule 15 | 0 | 1 | 0 | 1 | 0 | 1 | 1
Rule 16 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Rule 17 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Rule 18 | 0 | 1 | 0 | 1 | 0 | 1 | 1
Rule 19 | 1 | 0 | 0 | 0 | 0 | 0 | 0
Rule 20 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Passed | 6 | 12 | 11 | 10 | 6 | 12 | 12
Table 12. Total passed traffic rules by the different autonomous vehicles (AVs).

Vehicle brand | Tesla Model 3 | AutoX (robotaxi) | Waymo (by Google) | Deeproute | Yandex | Pony AI | Cruise
Total passed | 10 | 18 | 17 | 16 | 11 | 18 | 18
Proportion of the 30 rules | 0.3333 | 0.6000 | 0.5667 | 0.5333 | 0.3667 | 0.6000 | 0.6000
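Table 12 follows directly from the "Passed" rows of Tables 10 and 11; the short sketch below reproduces the totals and the proportions of the 30 assessed rules.

```python
# A short sketch reproducing Table 12 from the "Passed" rows of Tables 10 and 11:
# total rules passed per brand and the proportion of the 30 assessed traffic rules.
part_a = {"Tesla Model 3": 4, "AutoX": 6, "Waymo": 6, "Deeproute": 6, "Yandex": 5, "Pony AI": 6, "Cruise": 6}
part_b = {"Tesla Model 3": 6, "AutoX": 12, "Waymo": 11, "Deeproute": 10, "Yandex": 6, "Pony AI": 12, "Cruise": 12}

TOTAL_RULES = 30  # 10 road-sign rules (Table 1) + 20 rules of the road (Table 2)
for brand in part_a:
    total = part_a[brand] + part_b[brand]
    print(f"{brand}: passed {total}/{TOTAL_RULES} = {total / TOTAL_RULES:.4f}")
# e.g. Tesla Model 3: 10/30 = 0.3333, AutoX: 18/30 = 0.6000, Waymo: 17/30 = 0.5667
```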
As observed in Table 12, AutoX (robotaxi), Pony AI, and Cruise passed the same number of traffic rules, which is the highest, i.e., they are the top three AV companies currently doing well. The Tesla Model 3 was found to be the worst-performing AV. Therefore, Tesla was excluded from further reliability analysis as it was concluded to have
an autonomous level of less than four. The Tesla AVs tested did not make use of LIDAR sensors, which provide a
360-degree view. That could be one of the factors that contributed to its poor performance. Additionally, the time
stamps were recorded every time the vehicle disobeyed any traffic rules as earlier outlined in Tables 10 and 11. The
time-to-failure dataset gathered is presented in Table 13. As shown, Table 13 provides the sample size (n=33) of the
time-to-failure, as one AV could fail at least one traffic rule more than once. To gather the time-to-failure in Table
13, the following assumptions were made.
The maximum timestamp considered from the videos was 20 minutes, and there was no minimum
timestamp considered. This assumption was made to place a limit on how long each test was considered, so that a hypothesis could be formulated. Although no formal hypothesis test was conducted, it should be noted that the reliability analysis focused on establishing whether the AVs would fail at least one of the traffic rules within 20 minutes.
The AV companies are disregarded, i.e., all AV companies’ vehicles are regarded as different AVs
tested. This assumption was created since the reliability of AVs (not AV companies) was to be
addressed.
All AVs tested have all the important sensors onboard, i.e., LIDAR, RADAR, cameras, and ultrasonic
sensors.
Table 13. Time-to-failure of AVs observed in the analyzed videos.

Observation (n) | Time-to-failure (ti, in minutes)
1 | 0.19
2 | 0.32
3 | 0.32
4 | 0.40
5 | 1.04
6 | 1.14
7 | 1.50
8 | 1.54
9 | 2.08
10 | 2.10
11 | 2.13
12 | 2.15
13 | 2.21
14 | 2.29
15 | 2.46
16 | 2.52
17 | 2.56
18 | 2.59
19 | 3.14
20 | 3.37
21 | 3.47
22 | 4.41
23 | 5.03
24 | 5.29
25 | 5.47
26 | 5.57
27 | 6.24
28 | 6.55
29 | 7.22
30 | 7.39
31 | 11.04
32 | 13.37
33 | 17.07
It is important to note that some of the parties who provided the videos were biased towards their own products (presenting them as if they made no mistakes), the Deeproute company being one example. However, the bias observed in the videos concerned how well the AVs appeared to be making successful decisions rather than whether they obeyed traffic rules. Assessing adherence to traffic rules is therefore a more robust way to evaluate the reliability of an AV, as it reflects the vehicle's overall performance and safety in city driving. Nevertheless, because some bias was present in the testing of the AVs, the reliability calculated later in this paper is not a perfect reflection of AV performance; it is, however, a close approximation, since much of the bias was countered by the nature of the assessment applied to the videos.
Prior to conducting the reliability analysis, the mathematical symbols and notations used are introduced in Table 14. Some of these symbols are variables while others are parameters; each is further described as it is used in the modeling process.
Table 14. Definitions of mathematical symbols used in the reliability analysis.
Symbol | Definition
τ  | Location parameter (t-zero, t0); describes the shift of the distribution away from the origin.
β  | Shape parameter; governs the shape of the distribution (the slope of the linearized Weibull plot).
α  | Alternative notation for the Weibull scale parameter; used interchangeably with η in this paper.
η  | Scale parameter of the Weibull distribution; describes how the time variable t is scaled (the characteristic life).
λ  | Failure rate, calculated as 1/mean.
ln | Natural logarithm.
n  | Sample size (number of observations).
t  | Time at which an item (an AV) failed to adhere to one or more traffic rules.
i  | Order number (rank) of the failed items.
To conduct the reliability analysis of the AVs, the distribution of the time-to-failure dataset had to be
evaluated so that a reliability analysis technique could be selected. The distribution of the time-to-failure
dataset was found to follow a Weibull distribution with shape parameter β < 1, with a mean value of 4.066
minutes (Figure 2).
Figure 2. Time-to-failure distribution.
The Weibull distribution is widely used in the reliability analysis of a wide variety of systems because of its shape-adaptable nature. When a particular choice of parameters makes the Weibull take the form of another distribution, that form is referred to as a special case. The Weibull distribution appears in five different forms, with the three-parameter and two-parameter versions being the two most common (Hallinan Jr, 1993; Lai et al., 2006). The three-parameter form has the parameters τ, β, and η (also written α); when τ = 0, it reduces to the two-parameter form. The three-parameter Weibull distribution was chosen for this study because the location parameter τ is meaningful here: a vehicle cannot fail one of the traffic rules at zero minutes of driving.
The reliability model was created using a probability plotting approach premised on parameter estimation. Three simple ideas underpin this method:
• A visual representation of the data is produced on specialized probability plotting paper (a different paper is used for each statistical distribution).
• The plotting paper has transformed axes so that a genuine cumulative distribution function (CDF) plots as a straight line (linearization).
• If a straight line can be fitted to the plotted data, the data are deemed to follow the corresponding distribution; this fit can be interpreted as an assumption.
To implement the method, the time-to-failure dataset should be linearized by calculating the median rank
of the data. The median rank is the cumulative percentage of a population in a given data sample with a
50% confidence level. To calculate the median rank, Bernard’s approximation was utilized, and the ranks
are calculated using Equation (1) (Lai et al., 2006; Firdos et al., 2020),

$$r(t_i) \approx \frac{i - 0.3}{n + 0.4} \times 100\% \tag{1}$$

where the failure times are arranged in ascending order,

$$t_1 \le t_2 \le \dots \le t_n \tag{2}$$

and each ordered time is assigned its rank,

$$i = 1, 2, \dots, n \tag{3}$$

with i = order number of the failed items, 1 ≤ i ≤ n, and n = sample size.
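As a brief worked example of Equation (1), the Python snippet below (a sketch, not the authors' Anaconda script) evaluates Bernard's approximation for the first, second, and last ranks of the n = 33 observations in Table 13.

```python
# Worked example of Equation (1): Bernard's approximation for the median
# rank of the i-th ordered failure out of n = 33 observations (Table 13).
n = 33

def median_rank(i: int, n: int) -> float:
    """Bernard's approximation, returned as a percentage."""
    return (i - 0.3) / (n + 0.4) * 100.0

for i in (1, 2, 33):
    print(f"i = {i:2d}  ->  {median_rank(i, n):5.2f} %")
# i =  1  ->   2.10 %
# i =  2  ->   5.09 %
# i = 33  ->  97.90 %
```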
It should be noted that the order number i is paired with the i-th smallest failure time ti when conducting the calculations. The values produced by r(ti) (see Equation (1)) are percentages, which are further used to calculate the y-axis of the Weibull probability plot. The unreliability function of the three-parameter Weibull distribution is provided in Equation (4); applying a double natural logarithm linearizes it into Equation (5),

$$Q(t) = 1 - e^{-\left(\frac{t-\tau}{\eta}\right)^{\beta}} \tag{4}$$

$$\ln\bigl[-\ln\bigl(1 - Q(t)\bigr)\bigr] = \beta\,\ln(t-\tau) - \beta\,\ln(\eta) \tag{5}$$

where

$$\lambda = \frac{1}{\text{mean}} \tag{6}$$
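The linearization in Equation (5) can also be carried out numerically rather than on plotting paper. The sketch below is an illustration under stated assumptions, not the authors' WeibullR workflow: it computes the median ranks, transforms the data onto the linearized axes, and fits a straight line with numpy. The location parameter is fixed at the Table 15 value of 0.9675 minutes purely for illustration, and observations at or below τ are excluded because ln(t − τ) is undefined for them.

```python
import numpy as np

# Rank-regression sketch of Equations (1)-(5): transform the ordered failure
# times and median ranks onto the linearized Weibull axes and fit a line.
# The slope estimates beta; the intercept gives eta via intercept = -beta*ln(eta).
times = np.sort(np.array([
    0.19, 0.32, 0.32, 0.40, 1.04, 1.14, 1.50, 1.54, 2.08, 2.10, 2.13,
    2.15, 2.21, 2.29, 2.46, 2.52, 2.56, 2.59, 3.14, 3.37, 3.47, 4.41,
    5.03, 5.29, 5.47, 5.57, 6.24, 6.55, 7.22, 7.39, 11.04, 13.37, 17.07,
]))
n = times.size
tau = 0.9675  # location parameter from Table 15 (minutes), fixed for illustration

i = np.arange(1, n + 1)
mr = (i - 0.3) / (n + 0.4)            # Bernard's approximation (as a fraction)
mask = times > tau                    # ln(t - tau) is defined only for t > tau
x = np.log(times[mask] - tau)         # x-axis of the linearized plot
y = np.log(-np.log(1.0 - mr[mask]))   # y-axis of the linearized plot

slope, intercept = np.polyfit(x, y, 1)
beta_hat = slope
eta_hat = np.exp(-intercept / slope)
print(round(beta_hat, 3), round(eta_hat, 3))
```

The slope of the fitted line estimates β and the intercept yields η through η = exp(−intercept/β), mirroring how the parameters are read off the plot in the text.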
On the linearized plot, τ (which can be viewed as t0) is the value at which the fitted curve cuts the x-axis; η is the time value at which the fitted straight line crosses the horizontal level ln(−ln(1 − 0.6320)), i.e., the time by which roughly 63.20% of failures have accumulated; and β is the slope of the fitted straight line. The value of λ in a Weibull analysis is interpreted as the failure rate and is calculated as shown in Equation (6). Since λ is also important for evaluating the reliability of the AVs, and the dataset is not large, a bootstrap method was adopted to recalculate it. Bootstrapping generates a large number of phantom samples, known as bootstrap samples, by resampling (with replacement) from the sample data at hand; the summary statistic of interest is then calculated for each bootstrap sample, typically a few thousand times or more. The main idea of bootstrapping is to mimic repeating the experiment many times without expending additional time and resources. Here, bootstrapping was carried out to estimate a mean value that would be representative had the time-to-failure dataset been observed about 10,000 times (similar to a simulation run), i.e., by resampling from the originally observed dataset 10,000 times. Each resampled dataset must have the same number of observations (n) as the original, with duplication allowed; for example, if the original dataset has n = 30, each resampled dataset also has n = 30. Any statistic can then be computed from the resampled datasets; in this study, the mean value is the target.
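The bootstrap step described above can be sketched in a few lines of Python. The authors used R's boot(); the plain-Python version below is an equivalent illustration, so its exact output depends on the random seed, though it should land close to the mean of 4.058 minutes and the 95% interval of [2.87, 5.45] reported later in the text.

```python
import random
import statistics

# Bootstrap of the sample mean: resample the n = 33 time-to-failure values
# with replacement 10,000 times, record each resample's mean, and take the
# 2.5th and 97.5th percentiles as an approximate 95% confidence interval.
times = [
    0.19, 0.32, 0.32, 0.40, 1.04, 1.14, 1.50, 1.54, 2.08, 2.10, 2.13,
    2.15, 2.21, 2.29, 2.46, 2.52, 2.56, 2.59, 3.14, 3.37, 3.47, 4.41,
    5.03, 5.29, 5.47, 5.57, 6.24, 6.55, 7.22, 7.39, 11.04, 13.37, 17.07,
]
random.seed(1)
n = len(times)
boot_means = sorted(
    statistics.mean(random.choices(times, k=n)) for _ in range(10_000)
)
mean_new = statistics.mean(boot_means)
ci_low, ci_high = boot_means[249], boot_means[9749]  # approximate 95% percentile CI
print(round(mean_new, 3), round(ci_low, 2), round(ci_high, 2))
```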
The time-to-failure data set and the Weibull plot were analyzed and plotted in RStudio software using the
R programming language. The median ranks and confidence interval calculations were conducted in
Anaconda software using the Python programming language. In consideration of the analysis of the
linearized Weibull distribution, the RStudio software was utilized to conduct the plotting by making use
of the wblr() and wblr.fit() functions, which required an R-Package called WeibullR (see the graph in
Figure 3). Figure 3 shows the linearized Weibull plot of the time-to-failure dataset drawn on a specialized plotting sheet. It is important to note that the legend section titled 'censored dataset' provides the key details of the fit. The three parameters of the Weibull distribution extracted from Figure 3 are listed in Table 15.
Figure 3. The linearized fitted Weibull distribution plot.
Table 15. The three-parameter Weibull distribution parameters estimated from the time-to-failure data.
Parameter        | Value
Shape (β)        | 1.1550 (dimensionless)
Scale (η)        | 4.2400 minutes
Location (τ = t0) | 0.9675 minutes
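For readers who prefer a single-call cross-check, scipy can fit a three-parameter Weibull by maximum likelihood. This is not the rank-regression procedure used in the paper, and MLE estimates need not coincide with the values in Table 15; the snippet is offered only as an independent sanity check on the same data.

```python
from scipy.stats import weibull_min

# Maximum-likelihood fit of a three-parameter Weibull to the Table 13 data.
# MLE and median-rank regression are different estimators, so the fitted
# shape, location, and scale may differ from the Table 15 values.
times = [
    0.19, 0.32, 0.32, 0.40, 1.04, 1.14, 1.50, 1.54, 2.08, 2.10, 2.13,
    2.15, 2.21, 2.29, 2.46, 2.52, 2.56, 2.59, 3.14, 3.37, 3.47, 4.41,
    5.03, 5.29, 5.47, 5.57, 6.24, 6.55, 7.22, 7.39, 11.04, 13.37, 17.07,
]
shape, loc, scale = weibull_min.fit(times)  # returns (beta, tau, eta)
print(round(shape, 4), round(loc, 4), round(scale, 4))
```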
Given the three parameters shown in Table 15, one more parameter is required to calculate the reliability, namely the mean. To determine the mean that best describes the time-to-failure dataset, a bootstrap method was applied. The bootstrap was performed using the boot() function in RStudio, premised on the R programming language. The boot() function, in this case, requires three arguments, namely: the time-to-failure dataset, the sampling function (for which the R sample() function was utilized), and the number of replications or resamples, which is 10,000 in this research. The distribution of the bootstrapped values of time-to-failure was plotted and matched against the one in Figure 2. The new mean value (mean_new) from the bootstrap was calculated to be 4.058 minutes, with a 95% confidence interval of [2.87, 5.45]; a mean of 4.058 minutes can therefore be utilized since it lies within this interval. In this study, mean_new = 4.058 minutes was adopted because it was obtained from the bootstrap, which is generally more robust for a small sample; hence λ = 1/4.058 = 0.2464 per minute. With all these parameters in place, Equations (4) and (5) can be expressed in terms of the failure rate λ, the location τ, and the shape β, as shown in Equations (7) and (8),

$$Q(t) = 1 - e^{-\lambda\,(t-\tau)^{\beta}} = 1 - e^{-0.2464\,(t - 0.9675)^{1.1550}} \tag{7}$$

$$\ln\bigl[-\ln\bigl(1 - Q(t)\bigr)\bigr] = 1.1550\,\ln(t - 0.9675) + \ln(0.2464) \tag{8}$$
Letting x = ln(t − τ) and y = ln[−ln(1 − Q(t))], Equation (8) can be rewritten in straight-line form; therefore,

$$y = \beta x + \ln(\lambda) = 1.1550\,x - 1.4009 \tag{9}$$
To answer the question, "What is the reliability of an AV with respect to traffic rules?", Equation (7) is expected to provide the solution. The assumption that AVs are tested for 20 minutes was utilized to define a finite time frame for the tests. The reliability of an AV at 20 minutes is therefore calculated as follows,

$$R(20) = 1 - Q(20) = e^{-0.2464\,(20 - 0.9675)^{1.1550}} = e^{-7.405} \approx 0.0006,$$

i.e., the AV is only about 0.06% reliable over a 20-minute drive.
Consequently, there is a 1 − R(20) = 99.94% chance that an AV will fail at least one of the traffic rules within 20 minutes of driving. This poor reliability is aggravated by an increasing hazard rate h(t): the Weibull hazard rate increases whenever β > 1, and here β = 1.155, which implies a rising hazard. Hence, the goal is to decrease the hazard rate, i.e., to bring the shape parameter down to β ≤ 1.
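The headline numbers can be reproduced directly from the reported parameters. The sketch below assumes the parameterization reconstructed in Equation (7), i.e., R(t) = exp(−λ(t − τ)^β), together with the hazard rate implied by that form; β and τ are taken from Table 15 and λ = 1/4.058.

```python
import math

# Reliability and hazard under the Equation (7) parameterization:
#   R(t) = exp(-lam * (t - tau)**beta),  h(t) = lam * beta * (t - tau)**(beta - 1).
beta = 1.1550
tau = 0.9675        # minutes (Table 15)
lam = 1 / 4.058     # failure rate per minute (approx. 0.2464)

def reliability(t: float) -> float:
    return math.exp(-lam * (t - tau) ** beta)

def hazard(t: float) -> float:
    return lam * beta * (t - tau) ** (beta - 1)

R20 = reliability(20.0)
print(round(R20, 4), round(1 - R20, 4))              # ~0.0006 and ~0.9994
print(round(hazard(5.0), 3), round(hazard(20.0), 3))  # hazard grows with t because beta > 1
```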
4. Conclusion
This research will form a useful rallying point for all AV manufacturing industries, especially those
whose AV performance was analyzed herein, including captains of industries, researchers in the field of
smart systems, and policymakers on autonomous systems, amongst others. The research has focused on
providing more insight into the reliability of AVs. The reliability assessment measures addressed in this
paper focused on adherence to traffic rules. Traffic rules have been established to protect the environment,
pedestrians, and drivers or passengers in other vehicles. Therefore, if the AVs can adhere to traffic rules,
they will be seen as reliable. Even though the sub-systems that make up an AV system are not
operationally perfect, just like the humans who designed and manufactured them, the reliability of an AV
is strongly linked to the functional and operational success of its member elements. The reliability
analysis of AVs is a challenging task; in this work, every traffic rule had to be obeyed and passed. If an AV adheres to a traffic rule with a success rate of less than 100%, that AV is considered to have failed such a rule. Hence, finding common and consistent ground in computing a feasible reliability
measurement for an AV is crucial.
In this research, some core functional capabilities of an AV were first identified and outlined by following
the systems engineering conceptual design principle. These were in turn linked to the created and adapted
traffic rules for further analysis, leading to the reliability analysis. Considering the findings herein, the
literature has pointed out that functional requirement identification is a core requirement when designing
an AV. Following that, there is a clear integration between functional requirements, the corresponding
physical elements, and the expected performance of an AV. It is significant to proceed in this sequence
when designing systems in general. When considering the design of an AV, the activities can vary among
different design processes with sensor fusion creating an important variant. This research addressed the
problem of sensor mix, through different kinds of sensor combinations. It is important to identify and
select sensor types that yield the best performance for more efficient data gathering. As a way of
addressing future work, the identified capabilities would require some real-life AV validation since there
are no confirmed data available in the literature to do this.
Furthermore, considering the reliability analysis conducted in this research, the time-to-failure data set
was gathered and analyzed to aid in the identification of a reliability analysis method, which in this paper,
resulted in the linearized Weibull plotting method. The time-to-failure results were validated through
bootstrapping, and the lambda parameter was extracted from the bootstrap results for more accuracy. The
method was applied mostly by using the R and Python programming languages. The results showed that
there is a 99.94% chance that an AV will fail at least one of the traffic rules. Furthermore, the hazard rate
was found to be increasing. Considering these findings, it can be deduced that the AVs can still not be
fully commercialized, but they are very close to getting there. One major issue of concern is that AVs
have demonstrated a high level of uncertainty when it comes to multi-tasking with decision-making on
the road in a bid to avert accidents.
For future work in a bid to extend this research, the following would be considered: Firstly, data gathering
would be improved. One way to do this is to access real AVs from different AV companies and test them
while recording the data. Also, several AVs from the same company would be assessed for more accuracy
and consistency. Even though this will be a lengthy task, it is considered significant for more effective
reliability analysis. Secondly, recalculating the reliability using different alternate methods apart from the
linearized Weibull plotting method premised on probability plotting can be an additional step in the right
direction. Finally, the need to reduce the hazard rate of AVs through the improvement of their reliability
is quite significant. To improve the reliability of AVs, it is recommended that developers and manufacturers design their AVs to meet the traffic rules' requirements. One key capability observed to be absent from the sampled AVs is the ability to warn pedestrians or human drivers using the hooter (horn), which ought to be a safety requirement. Meeting these requirements will surely be
challenging, as the overall performance of the AVs does depend on the integration of systems that have
their own individual, different, and specific tasks to be executed, even though they remain linked as a
whole system. Therefore, the decision-making system of AVs needs to be improved. It is strongly
recommended that AVs be developed to perform satisfactorily in level 4 of automation, i.e., validated via
testing and computation of their reliabilities with respect to traffic rules and deemed satisfactory prior to
moving to the next level of automation, level 5.
Conflict of Interest
The authors of this research wish to state that there is no conflict of interest regarding this research.
Acknowledgments
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The
authors would like to thank the editor and the anonymous reviewers for their comments, which helped improve the quality of this work.
References
Badue, C., Guidolini, R., Carneiro, R.V., Azevedo, P., Cardoso, V.B., Forechi, A., Jesus, L., Berriel, R., Paixão, T.
M., Mutz, F., de Paula Veronese, L., Oliveira-Santos, T., & De Souza, A.F. (2021). Self-driving cars: A survey.
Expert Systems with Applications, 165, 113816. https://doi.org/10.1016/j.eswa.2020.113816.
Borenstein, J., Herkert, J.R., & Miller, K.W. (2019). Self-driving cars and engineering ethics: The need for a system
level analysis. Science and Engineering Ethics, 25(2), 383-398.
Casado-Herráez, D. (2020). Self-driving car autonomous system overview. Industrial Electronics Engineering, Bachelor's thesis.
Chy, M.K.A., Masum, A.K.M., Sayeed, K.A.M., & Uddin, M.Z. (2021). Delicar: A smart deep learning based self
driving product delivery car in perspective of Bangladesh. Sensors (Basel), 22(1), 126.
Denoël, V. (2007). An introduction to reliability analysis (pp. 1-82). Université de Liège, Belgium.
Firdos, M., Abbas, K., Abbasi, S.A., & Abbasi, N.Y. (2020). Modeling and comparison of maximum likelihood and
median rank regression methods with Fréchet distribution. Proceedings on Engineering Sciences, 2(2), 159-168.
Ghosh, A., Sufian, A., Sultana, F., Chakrabarti, A., De, D. (2020). Fundamental concepts of convolutional neural
network. In: Balas, V., Kumar, R., Srivastava, R. (eds) Recent Trends and Advances in Artificial Intelligence
and Internet of Things. Intelligent Systems Reference Library (Vol, 172, pp. 519-567). Springer, Cham.
https://doi.org/10.1007/978-3-030-32644-9_36.
González, D., Pérez, J., Milanés, V., & Nashashibi, F. (2015). A review of motion planning techniques for
automated vehicles. IEEE Transactions on Intelligent Transportation Systems, 17(4), 1135-1145.
Grigorescu, S., Trasnea, B., Cocias, T., & Macesanu, G. (2020). A survey of deep learning techniques for
autonomous driving. Journal of Field Robotics, 37(3), 362-386. https://doi.org/10.1002/rob.21918.
Guanetti, J., Kim, Y., & Borrelli, F. (2018). Control of connected and automated vehicles: State of the art and future
challenges. Annual Reviews in Control, 45, 18-40. https://doi.org/10.1016/j.arcontrol.2018.04.011.
Hallinan Jr, A.J. (1993). A review of the Weibull distribution. Journal of Quality Technology, 25(2), 85-93.
Hnewa, M., & Radha, H. (2021). Object detection under rainy conditions for autonomous vehicles: A review of
state-of-the-art and emerging techniques. IEEE Signal Processing Magazine, 38(1), 53-67.
https://doi.org/10.1109/msp.2020.2984801.
Hoole, G. (2013). The new official k53 manual: Motorcycles, light and heavy vehicles. Penguin Random House
South Africa.
Ignatious, H.A., & Khan, M. (2022). An overview of sensors in autonomous vehicles. Procedia Computer Science,
198, 736-741.
Jo, K., Kim, J., Kim, D., Jang, C., & Sunwoo, M. (2015). Development of autonomous car, Part II: A case study on
the implementation of an autonomous driving system based on distributed architecture. IEEE Transactions on
Industrial Electronics, 62(8), 5119-5132.
Kossiakoff, A., Biemer, S.M., Seymour, S.J., & Flanigan, D.A. (2020). Systems engineering principles and practice.
John Wiley & Sons.
Lai, C.D., Murthy, D., & Xie, M. (2006). Weibull distributions and their applications. In Springer Handbooks (pp.
63-78). Springer.
Ma, Y., Wang, Z., Yang, H., & Yang, L. (2020). Artificial intelligence applications in the development of
autonomous vehicles: A survey. IEEE/CAA Journal of Automatica Sinica, 7(2), 315-329.
https://doi.org/10.1109/jas.2020.1003021.
Mackie, T. (2018). Proving liability for highly and fully automated vehicle accidents in Australia. Computer Law &
Security Review, 34(6), 1314-1332. https://doi.org/10.1016/j.clsr.2018.09.002.
Matthaei, R., & Maurer, M. (2015). Autonomous driving: A top-down approach. at - Automatisierungstechnik,
63(3), 155-167. https://doi.org/10.1515/auto-2014-1136.
Ndung’u, N., & Signé, L. (2020). The fourth industrial revolution and digitization will transform Africa into a global
powerhouse. Foresight Africa Report, 5(1), 1-5.
Paden, B., Cap, M., Yong, S.Z., Yershov, D., & Frazzoli, E. (2016). A survey of motion planning and control
techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles, 1(1), 33-55.
https://doi.org/10.1109/tiv.2016.2578706.
Schwarting, W., Alonso-Mora, J., & Rus, D. (2018). Planning and decision-making for autonomous vehicles.
Annual Review of Control, Robotics, and Autonomous Systems, 1(1), 187-210. https://doi.org/10.1146/annurev-
control-060117-105157.
Sell, R., Leier, M., Rassõlkin, A., & Ernits, J.P. (2018). Self-driving car ISEAUTO for research and education. In
2018 19th International Conference on Research and Education in Mechatronics (REM) (pp. 111-116). IEEE.
Delft, Netherlands.
Shelhamer, E., Long, J., & Darrell, T. (2017). Fully convolutional networks for semantic segmentation. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 39(4), 640-651.
Singh, S., & Saini, B.S. (2021). Autonomous cars: Recent developments, challenges, and possible solutions. In IOP
Conference Series: Materials Science and Engineering (Vol. 1022, No. 1, p. 012028). IOP Publishing. Rajpura,
India.
Sviatov, K., Yarushkina, N., Kanin, D., Rubtcov, I., Jitkov, R., Mikhailov, V., & Kanin, P. (2021). Functional model
of a self-driving car control system. Technologies, 9(4), 100. https://doi.org/10.3390/technologies9040100.
Tan, S.Y., & Taeihagh, A. (2021). Adaptive governance of autonomous vehicles: Accelerating the adoption of
disruptive technologies in Singapore. Government Information Quarterly, 38(2), 101546.
Uzair, M. (2021). Who is liable when a driverless car crashes? World Electric Vehicle Journal, 12(2), 62.
https://doi.org/10.3390/wevj12020062.
Vaicenavicius, J., Wiklund, T., Grigaite, A., Kalkauskas, A., Vysniauskas, I., & Keen, S.D. (2021). Self-driving car
safety quantification via component-level analysis. SAE International Journal of Connected and Automated
Vehicles, 4(1), 35-45. https://doi.org/10.4271/12-04-01-0004.
Vargas, J., Alsweiss, S., Toker, O., Razdan, R., & Santos, J. (2021). An overview of autonomous vehicles sensors
and their vulnerability to weather conditions. Sensors (Basel), 21(16), 5397. https://doi.org/10.3390/s21165397.
Wu, J. (2017). Introduction to convolutional neural networks. National Key Lab for Novel Software Technology.
Nanjing University. China, 5(23), 1-31.
Yan, C., Xu, W., & Liu, J. (2016). Can you trust autonomous vehicles: Contactless attacks against sensors of self-
driving vehicle. Def Con, 24(8), 109. https://doi.org/10.1145/1235.
Yeong, J., Velasco-Hernandez, G., Barry, J., & Walsh, J. (2021). Sensor and sensor fusion technology in
autonomous vehicles: A review. Sensors (Basel), 21(6), 2140. https://doi.org/10.3390/s21062140.
Zhao, L., Ichise, R., Liu, Z., Mita, S., & Sasaki, Y. (2017). Ontology-based driving decision making: A feasibility
study at uncontrolled intersections. IEICE TRANSACTIONS on Information and Systems, 100(7), 1425-1439.
Zhao, L., Ichise, R., Yoshikawa, T., Naito, T., Kakinami, T., & Sasaki, Y. (2015). Ontology-based decision making
on uncontrolled intersections and narrow roads. In 2015 IEEE Intelligent Vehicles Symposium (IV) (pp. 83-88).
IEEE. Seoul, South Korea.
Original content of this work is copyright © International Journal of Mathematical, Engineering and Management Sciences. Uses under the
Creative Commons Attribution 4.0 International (CC BY 4.0) license at https://creativecommons.org/licenses/by/4.0/
Publisher’s Note- Ram Arti Publishers remains neutral regarding jurisdictional claims in published maps
and institutional affiliations.