Figure 4. Autonomous vehicles sensor ecosystem (Image source: The Economist).

Source publication
Article
Full-text available
Autonomous vehicles (AVs) rely on various types of sensor technologies to perceive the environment and to make logical decisions based on the gathered information, much as humans do. Under ideal operating conditions, the perception systems (sensors onboard AVs) provide enough information to enable autonomous transportation and mobility. In practice,...

Context in source publication

Context 1
... Figure 3 depicts the electromagnetic spectrum and the spectral ranges used by the AV sensors investigated in this work. Furthermore, Figure 4 shows a top-level overview of the AV sensor suite with a brief description of each sensor. ...

Similar publications

Article
Full-text available
Autonomous driving can make traffic safer by reducing human errors. However, the different sensor types in autonomous vehicles could introduce additional technical failures. We offer a target simulator for testing LiDAR systems under automotive conditions: data are projected over the air by laser signals onto the LiDAR detector. This work presents a concep...
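Such over-the-air target simulation hinges on round-trip timing: to make the LiDAR under test perceive a virtual target at range d, the simulator must inject its return pulse 2d/c after the outgoing pulse. A minimal sketch of that arithmetic in Python (a generic illustration, not the authors' implementation):

```python
# Round-trip delay for a simulated LiDAR target (illustrative sketch).
C = 299_792_458.0  # speed of light in vacuum, m/s

def return_delay_s(target_range_m: float) -> float:
    """Delay after the outgoing pulse at which to inject the echo so the
    detector perceives a target at the given range."""
    return 2.0 * target_range_m / C

for d in (1.0, 10.0, 100.0):
    print(f"virtual target at {d:6.1f} m -> inject echo after "
          f"{return_delay_s(d) * 1e9:7.2f} ns")
```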

Citations

... In contrast, human drivers may only be able to perceive objects up to approximately 10 meters away under similar circumstances [35]. Although adverse weather can increase the likelihood of potential failures or loss of sensors [36][37][38], recent innovations in visual algorithms, coupled with the combined use of cameras, LiDAR, GNSS, and RADAR sensors [39,40], are designed to recognize pedestrians and vehicles under varying weather scenarios, such as cloudiness, snow, rain, and darkness [41,42]. This offers solutions to the challenges associated with driving in less-than-ideal conditions. ...
... Interestingly, the dawn/dusk odds ratio indicates a 5.250-times higher probability of an ADS accident than an HDV accident. This could be attributed to the fact that the sensors and cameras used by AVs may not be able to adapt quickly to changes in lighting conditions, which could affect their ability to detect obstacles, pedestrians, and other vehicles [39,43]. At dawn and dusk, for instance, the sun's shadows and reflections may confuse sensors, making it hard for them to distinguish between objects and identify potential hazards. ...
Article
Full-text available
Despite the recent advancements that Autonomous Vehicles have shown in their potential to improve safety and operation, the differences between Autonomous Vehicles and Human-Driven Vehicles in accidents remain unidentified due to the scarcity of real-world Autonomous Vehicle accident data. We investigated the difference in accident occurrence between Autonomous Vehicle levels and Human-Driven Vehicles by utilizing accident data from 2,100 Advanced Driving System and Advanced Driver Assistance System vehicles and 35,113 Human-Driven Vehicles. A matched case-control design was used to investigate the differential characteristics of Autonomous versus Human-Driven Vehicle accidents. The analysis suggests that accidents involving vehicles equipped with Advanced Driving Systems generally have a lower chance of occurring than Human-Driven Vehicle accidents in most similar accident scenarios. However, accidents involving Advanced Driving Systems occur more frequently than Human-Driven Vehicle accidents under dawn/dusk or turning conditions, 5.25 and 1.98 times more often, respectively. Our research reveals the accident risk disparities between Autonomous Vehicles and Human-Driven Vehicles, informing future development in autonomous technology and safety enhancements.
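The matched case-control comparison behind figures such as the 5.25 dawn/dusk odds ratio boils down to a 2x2 contingency table. A minimal sketch of the computation (the counts below are made-up placeholders, not the study's data):

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 table:
    a = ADS accidents under the condition,  b = ADS accidents otherwise,
    c = HDV accidents under the condition,  d = HDV accidents otherwise."""
    return (a / b) / (c / d)

# Placeholder counts chosen only to reproduce an odds ratio of 5.25.
print(odds_ratio(a=21, b=100, c=4, d=100))  # -> 5.25
```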
... These vehicles also have advanced computation and control systems for interpreting the data to make driving decisions, such as steering, acceleration, and braking, all without human intervention. The primary uses of the data coming from the vehicle's various IoT devices include the following: (i) perception and action: receiving information, planning, and responding based on the collected data; (ii) precise mapping of the surroundings; (iii) identifying speed, range (in EVs), and distance using cameras and LiDAR (or radars); and (iv) communication with other vehicles and sharing of information [5][6][7][8]. ...
... As the driver's involvement in tasks like driving, steering, and braking decreases, especially in Level 3 and above, ADAS becomes more complicated and expensive. Figure 3 shows the typical location of the various sensors [8]. ...
Article
Full-text available
The deployment of intelligent transportation is still in its early stages, and many challenges need to be addressed before it can be widely adopted. Autonomous vehicles are a rapidly developing class of intelligent transportation, and they are being deployed in selected cities. A combination of advanced sensors, machine learning algorithms, and artificial intelligence is used in these vehicles to perceive their environment, navigate, and make the right decisions. These vehicles leverage extensive data sourced from the various sensors and computers integrated into the vehicle. Hence, massive computational power is required to process the information from the various built-in sensors in milliseconds to make the right decision. The power required by the sensors and the additional computational load increase the energy consumption and could therefore reduce the range of an autonomous electric vehicle relative to a standard electric car and lead to additional emissions. A number of review papers have highlighted the environmental benefits of autonomous vehicles, focusing on aspects like optimized driving, improved route selection, fewer stops, and platooning. However, these reviews often overlook the significant energy demands of the hardware systems, such as sensors, computers, and cameras, necessary for full autonomy, which can decrease the driving range of electric autonomous vehicles. Additionally, previous studies have not thoroughly examined the data processing requirements in these vehicles. This paper provides a more detailed review of the volume of data and energy usage by the various sensors and computers integral to autonomous features in electric vehicles. It also discusses the effects of these factors on vehicle range and emissions. Furthermore, the paper explores advanced technologies currently being developed by various industries to enhance processing speeds and reduce energy consumption in autonomous vehicles.
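The range penalty described here follows from simple energy arithmetic: a constant auxiliary load divided by average speed is an extra per-distance consumption. A back-of-the-envelope sketch (all numbers are illustrative assumptions, not figures from the paper):

```python
# Illustrative range-penalty estimate for an autonomous EV (assumed numbers).
battery_kwh = 75.0        # usable battery capacity
base_kwh_per_km = 0.15    # baseline consumption (15 kWh / 100 km)
autonomy_load_kw = 1.0    # assumed sensor + compute power draw
avg_speed_kmh = 50.0      # assumed average speed

extra_kwh_per_km = autonomy_load_kw / avg_speed_kmh            # 0.02 kWh/km
base_range = battery_kwh / base_kwh_per_km                     # 500 km
av_range = battery_kwh / (base_kwh_per_km + extra_kwh_per_km)  # ~441 km

print(f"range without autonomy load: {base_range:.0f} km")
print(f"range with autonomy load:    {av_range:.0f} km "
      f"({100 * (1 - av_range / base_range):.1f}% penalty)")
```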
... In particular, LiDAR sensors have received the most attention owing to their high resolution, three-dimensional mapping capability, and positional accuracy [2]. They operate through a time-of-flight technique to detect the surrounding environment using a reflected near-infrared (NIR) pulsed laser in the range of 780-1550 nm [3]. However, their ability to recognize black and dark-colored objects is limited, as these colors strongly absorb light at those wavelengths [4]. ...
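Why dark targets are hard to see follows from a simplified LiDAR link budget: for a diffuse (Lambertian) target, received power scales roughly with target reflectivity and falls off with the square of range, so the maximum detectable range scales with the square root of reflectivity. A rough sketch under those simplifying assumptions (the reference specification is invented for illustration):

```python
import math

def max_range_m(reference_range_m: float, reference_reflectivity: float,
                target_reflectivity: float) -> float:
    """Scale a reference detection range to another target reflectivity,
    assuming received power ~ reflectivity / range**2 (Lambertian target)."""
    return reference_range_m * math.sqrt(target_reflectivity / reference_reflectivity)

# If a sensor detects a 90%-reflective white target at 200 m (assumed spec),
# a 5%-reflective black-painted target drops to roughly:
print(f"{max_range_m(200.0, 0.90, 0.05):.0f} m")  # ~47 m
```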
Article
Full-text available
Novel LiDAR-detectable black double-shell hollow nanoparticles (BDS-HNPs) with an internal white shell are successfully utilized as materials for autonomous vehicle paint for the first time. These BDS-HNPs are carefully designed to achieve excellent near-infrared (NIR) reflectance, blackness, hydrophilicity, and applicability as monolayer coatings. An emphasis is placed on the NIR reflectance achieved by forming double-shell hollow morphologies embracing the internal white shell and multiple interfaces within the nanoparticles. Accordingly, the BDS-HNPs exhibit NIR reflectance of ca. 33.2, 36.9, and 40.9 R% at wavelengths of 793, 850, and 905 nm, respectively, comparable to the NIR reflectance of a commercially available NIR-reflective bilayer dark-tone coating. For practical LiDAR visualization, BDS-HNPs mixed with hydrophilic varnish are spray-coated onto various objects. As a result, the BDS-HNP-painted objects are clearly recognized by three different types of LiDAR sensors (robot, rotating, and MEMS mirror) under various indoor and outdoor conditions. These results clearly demonstrate the great potential of BDS-HNPs as a new type of LiDAR-detectable black material for future autonomous driving environments.
... It has other parts inside. The sensors measure the amount of particle flow, the speed of the filter, the speed of the drive, and the particles themselves (Vargas et al., 2021). As the grain stream hits the sensitive plates of the combine's clean grain elevator, the impact force is measured to keep track of the yield. ...
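Impact-plate yield monitors of the kind described here can be understood through the impulse-momentum relation: a steady grain stream whose velocity changes by Δv on striking the plate exerts a force F = ṁ·Δv, so the mass flow is ṁ = F/Δv. A minimal sketch under that idealized assumption (the values are illustrative):

```python
def mass_flow_kg_s(impact_force_n: float, delta_v_m_s: float) -> float:
    """Grain mass flow inferred from impact-plate force, F = mdot * delta_v."""
    return impact_force_n / delta_v_m_s

# Assumed: 12 N measured on the plate, grain slowed by 4 m/s on impact.
print(f"{mass_flow_kg_s(12.0, 4.0):.1f} kg/s")  # -> 3.0 kg/s
```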
Chapter
Precision farming, often known as digital agriculture, minimizes production costs by identifying both temporal and geographical variation within fields to achieve the highest levels of productivity, profitability, sustainability, and land resource conservation. The need to alter agricultural management techniques to sustainably safeguard natural resources, including water, air, and soil quality, while retaining economic advantages is being driven by the general public's growing environmental consciousness. It involves applying inputs (such as insecticides and fertilizers) in the proper amounts, at the appropriate times, and in the appropriate locations; "site-specific management" is the term used to describe this kind of management. With almost a third of the world's food requiring irrigation for production, the productivity of the global food supply has increased in recent decades, mostly due to the expansion of irrigation systems. Global market rivalry for agricultural commodities puts traditional agricultural systems' economic sustainability in jeopardy and necessitates the creation of innovative, flexible production systems. With a focus on the methodical operationalization process, the goal of this chapter is to propose technology-driven farming. The chapter intends to highlight how precision agriculture (PA) can enable farmers to use new technologies to make their farms more productive without hurting the quality of their crops or land. The chapter discusses the current status of and opportunities for digitalizing the agriculture sector.
... In contrast, Feng et al. [84] summarize methodologies for deep multi-modal object detection and data fusion, presenting the main datasets released between 2013 and 2019. Similarly, Micko et al. [186] investigate sensors for monitoring tasks in road transportation infrastructure, and Vargas et al. [274] review sensors for AVs, considering their vulnerability to weather conditions. This paper provides a comprehensive review of recent studies related to VRUs, addressing critical gaps identified in previous works. ...
... These sensors capture crucial data at multiple stages of vehicle-VRU interaction, encompassing object detection, classification, intention prediction, and trajectory prediction [225]. Other devices like GPS, IMUs (inertial measurement units), odometers, inertial navigation systems (INS), and communication technologies (DSRC, Wi-Fi, RFID) provide critical data on vehicle positioning and dynamics, as well as the proximity of objects [186,274]. However, this study ...
Preprint
Full-text available
Traffic incidents involving vulnerable road users (VRUs) constitute a significant proportion of global road accidents. Advances in traffic communication ecosystems, coupled with sophisticated signal processing and machine learning techniques, have facilitated the utilization of data from diverse sensors. Despite these advancements and the availability of extensive datasets, substantial progress is required to mitigate traffic casualties. This paper provides a comprehensive survey of state-of-the-art technologies and methodologies to enhance the safety of VRUs. The study delves into the communication networks between vehicles and VRUs, emphasizing the integration of advanced sensors and the availability of relevant datasets. It explores preprocessing techniques and data fusion methods to enhance sensor data quality. Furthermore, our study assesses critical simulation environments essential for developing and testing VRU safety systems. Our research also highlights recent advances in VRU detection and classification algorithms, addressing challenges such as variable environmental conditions. Additionally, we cover cutting-edge research in predicting VRU intentions and behaviors, which is crucial for proactive collision avoidance strategies. Through this survey, we aim to provide a comprehensive understanding of the current landscape of VRU safety technologies, identifying areas of progress and areas needing further research and development.
... Hemmati & Rahmani, 2022). Weather conditions are also changeable and uncertain though there is a lot of work done in overcoming these ill effects of weather on the performance of AVs still there is a huge gap in achieving an ideal performance of AV in all weather conditions (Vargas et al., 2021). Traffic conditions are unpredictable because of the presence of driverless vehicles, vehicles with driver assistance, a lot of pedestrians cycling, walking, etc. (H. ...
Article
Full-text available
Intelligent Transportation Systems (ITS) are gaining momentum due to the advantages they offer: congestion-free traffic, a reduced probability of accidents, and economical transit, thereby reducing transit time, saving human lives, and helping the economy grow. Autonomous vehicles (AVs) are an important part of ITS, as they are the actual actuators of the ITS. In AVs, the perception system is particularly important, as it provides vital information to the motion planning system. Any error in the AV's perception of the environment will lead to the failure of the entire ITS. This article provides a thorough review of recent technological advancements in perception systems. In this work, we have considered peer-reviewed journals and conference proceedings on AVs from 2019 onwards. The primary focus is on motion prediction models, object detection, localization, sensor data fusion, sensors, autonomous driving (AD), communication technology, and AI. This article also discusses the various challenges faced by perception systems in AD and how communication technology and Artificial Intelligence (AI) can help the perception system overcome the existing challenges.
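Among the topics this review covers, sensor data fusion is the most readily illustrated: a common baseline is inverse-variance weighting, where two measurements of the same quantity (say, radar and LiDAR range) are combined so that the less noisy sensor gets more weight. A minimal sketch of that baseline (not a method from the article; the noise figures are assumed):

```python
def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Inverse-variance fusion of two measurements of the same quantity;
    returns the fused estimate and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Assumed: radar reads 49.2 m (sigma = 0.5 m), LiDAR reads 50.1 m (sigma = 0.1 m).
est, var = fuse(49.2, 0.5**2, 50.1, 0.1**2)
print(f"fused range: {est:.2f} m (variance {var:.4f})")  # close to the LiDAR value
```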
... Cutting-edge sensor technologies are currently advancing at a quick pace to develop transportation that is safer for riders and pedestrians. This has prompted more academics and engineers from a wide range of subjects and backgrounds to participate in the process and tackle all of the related difficulties [4]. (Figure 1: SAE automation levels [4].) As indicated in Fig. 2, an autonomous vehicle system is classified into four primary groups. Many different sensors mounted on the vehicle are used to sense the world. ...
... LiDAR stands for Light Detection and Ranging, a technique developed during the 1970s for use on space and aerial platforms. LiDAR systems work similarly to RADARs in that they measure the time a light pulse, emitted in the infrared region by a laser diode, takes to return and be collected by the LiDAR's receiver [4]. (Figure 10: LiDAR system block diagram [4].) LiDAR measures the distance between the object and the sensor using an infrared laser beam. ...
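That time-of-flight measurement converts directly to range: the pulse travels out and back, so distance is c·Δt/2, the inverse of the echo-delay calculation sketched earlier. A minimal illustration (the echo delay is an assumed value):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Target range from a LiDAR pulse's measured round-trip time."""
    return C * round_trip_s / 2.0

# An assumed echo delay of 333 ns corresponds to a target at ~50 m.
print(f"{tof_range_m(333e-9):.1f} m")  # -> 49.9 m
```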
Article
Full-text available
This paper focuses on autonomous vehicle sensors. Sensors such as cameras, radar, and LiDAR are the main components when it comes to designing autonomous vehicles. These sensors are important because they allow the system to measure the surrounding environment and to verify that what the car detects is accurate, for further analysis to plan, control, and make decisions. This article discusses three critical sensors that are a crucial part of the overall autonomous system: radar, LiDAR, and camera vision.
... While traditional datasets cover normal driving situations, our focus is on "corner-case" scenarios -uncommon and challenging situations that rarely occur but pose significant challenges or anomalies to the perception system of an autonomous vehicle [1]. These scenarios involve various factors, including environmental factors such as weather or light conditions [28], object variability with unusual characteristics like pedestrians in disguises or partially obstructed objects [27,26], unpredictable situations requiring effective perception system responses [22], scenarios exploiting sensor limitations such as poor visibility or noise [25], and failure modes related to sensor failures, system malfunctions, or degraded performance [6]. ...
Conference Paper
Autonomous driving development requires rigorous testing in real-world scenarios, including adverse weather, unpredictable events, object variations, and sensor limitations. However, these challenging "corner cases" are elusive in conventional datasets due to their unpredictability, high costs, and inherent risks. Recognizing the critical role of ground truth data in autonomous driving, the demand for synthetic data becomes evident. Contemporary machine learning-based algorithms essential to autonomous vehicles heavily depend on labeled data for training and validation. Simulation of scenarios not only mitigates the scarcity of real-world data but also facilitates controlled experimentation in situations that are challenging to replicate physically. The challenge extends beyond data scarcity, encompassing the impediment posed by the inability to systematically control and manipulate specific scenarios, hindering progress. To overcome these challenges, we present CornerSim, a dynamic virtualization framework simplifying the creation and modification of diverse driving scenarios. Leveraging simulation, CornerSim generates synthetic environments for comprehensive testing, providing essential outputs like raw sensor data (cameras, LiDAR, etc.) and labeled data (object detection bounding boxes, classes, semantic segmentation). The unpredictable nature of real-world corner cases complicates obtaining a sufficiently large and diverse annotated dataset. CornerSim addresses this challenge by not only generating synthetic data but also supplying the necessary ground truth for training and evaluating machine learning models. This paper introduces CornerSim and its ability to address challenges related to testing autonomous vehicles in realistic scenarios. It focuses on the framework's capabilities, design principles, and integration, with the goal of enhancing thorough testing and validation of autonomous driving systems in a simulated environment, improving their robustness and safety. Our approach involves running simulations to generate datasets, which are statistically studied and compared with real data. Furthermore, we apply state-of-the-art detection algorithms to assess whether data generated by CornerSim is suitable for both the training and validation stages.
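CornerSim's own API is not documented here, but the kind of scripted corner-case generation it describes can be sketched with the open-source CARLA simulator's Python API (which a study later in these citations also uses). In this hedged sketch, the server address, weather values, output path, and sensor placement are all assumptions:

```python
import carla

# Connect to a locally running CARLA server (assumed default host/port).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Force an adverse-weather "corner case": heavy rain, fog, low sun.
world.set_weather(carla.WeatherParameters(
    cloudiness=90.0, precipitation=80.0, precipitation_deposits=60.0,
    fog_density=40.0, sun_altitude_angle=5.0))  # dawn/dusk-like lighting

# Spawn an ego vehicle with an RGB camera and save frames to disk.
bp_lib = world.get_blueprint_library()
vehicle = world.spawn_actor(bp_lib.filter("vehicle.*")[0],
                            world.get_map().get_spawn_points()[0])
camera = world.spawn_actor(bp_lib.find("sensor.camera.rgb"),
                           carla.Transform(carla.Location(x=1.5, z=2.4)),
                           attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```

Forcing the weather and lighting programmatically is what makes otherwise rare dawn/dusk or heavy-fog frames cheap to collect at scale.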
... There is some disagreement among researchers about how to categorise the core competencies of different subsystems when defining the functional perspective of AVs. However, in general, AV systems are made up of three to five primary functions: perception, localisation, planning, control and navigation, and system management [9], [10], as illustrated in Figure 2. ...
Preprint
Autonomous systems are becoming increasingly prevalent in new vehicles. Due to their environmental friendliness and their remarkable capability to significantly enhance road safety, these vehicles have gained widespread recognition and acceptance in recent years. Automated Driving Systems (ADS) are intricate systems that incorporate a multitude of sensors and actuators to interact with the environment autonomously, pervasively, and interactively. Consequently, numerous studies are currently underway to keep abreast of these rapid developments. This paper aims to provide a comprehensive overview of recent advancements in ADS technologies. It provides in-depth insights into how data and information flow in the distributed system, including autonomous vehicles and various other supporting services and entities. Data validation and system requirements, such as security, privacy, scalability, and data ownership, are emphasised in accordance with regulatory standards. Finally, several current research directions in the AV field will be discussed.
... Sensors such as cameras [9], radio detection and ranging (radar), and light detection and ranging (LiDAR) work with technologies such as the global positioning system (GPS) and high-definition (HD) maps [10] to improve the overall accuracy and soundness of the AV perception subsystem. These sensors have been proven effective for use in self-driving systems [11] but come at their own cost, such as not performing well in adverse weather conditions [12] and a high computational load, which can lead to decreased energy efficiency due to the large amounts of data these sensors gather. This data often includes redundant critical information, needed for safety in case of sensor failure or Operational Design Domain (ODD) limitations. ...
Conference Paper
Full-text available
div class="section abstract"> Traditional autonomous vehicle perception subsystems that use onboard sensors have the drawbacks of high computational load and data duplication. Infrastructure-based sensors, which can provide high quality information without the computational burden and data duplication, are an alternative to traditional autonomous vehicle perception subsystems. However, these technologies are still in the early stages of development and have not been extensively evaluated for lane detection system performance. Therefore, there is a lack of quantitative data on their performance relative to traditional perception methods, especially during hazardous scenarios, such as lane line occlusion, sensor failure, and environmental obstructions. We address this need by evaluating the influence of hazards on the resilience of three different lane detection methods in simulation: (1) traditional camera detection using a U-Net algorithm, (2) radar detections using infrastructure-based radar retro-reflectors (RRs), and (3) direct communication of lane line information using chip-enabled raised pavement markers (CERPMs). The performance of each of these methods is assessed using resilience engineering metrics by simulating the individual methods for each sensor technology’s response to related hazards in the CARLA simulator. Using simulation techniques to replicate these methods and hazards acquires extensive datasets without lengthy time investments. Specifically, the resilience triangle was used to quantitatively measure the resilience of the lane detection system to obtain unique insights into each of the three lane detection methods; notably the infrastructure-based CERPMs and RRs had high resistance to hazards and were not as easily affected as the vision-based U-Net. However, while U-Net was able to recover the fastest from the disruption as compared to the other two methods, it also had the most performance loss. Overall, this study demonstrates that while infrastructure-based lane keeping technologies are still in early development, they have great potential as alternatives to traditional ones. </div