SAE levels of Driving Automation.

Source publication
Article
Full-text available
With the emerging interest in autonomous driving at SAE levels 4 and 5 comes the necessity to provide accurate and versatile frameworks for evaluating the algorithms used in autonomous vehicles. There is a clear gap in the field of autonomous driving simulators: testing and parameter tuning of a key component of autonomous driving systems, SLAM...

Contexts in source publication

Context 1
... The biggest technology companies, such as Nvidia, Intel and Google, and new mobility startups (Uber, Aurora and Cruise) are putting their effort into creating solutions for the autonomous car industry [1]. Autonomous driving at SAE levels 4 and 5 (see Figure 1) has been classified as one of the biggest emerging technology trends in the Gartner Hype Cycle for Emerging Technologies 2019 [2]. Besides this glimpse of the future, in which we no longer need to drive our personal cars, there are many more critical areas in which reliable autonomous vehicles are vital. ...
Context 2
... The level of driving automation can be classified using the levels defined by SAE International (a standards-developing organization specializing in the automotive industry). Using the SAE levels, we can distinguish six levels of automation, which are illustrated in Figure 1. SAE level 4 and 5 vehicles are highly and fully automated cars, and they need a precise understanding of the surrounding environment in real time. ...
Context 3
... In our experiments, we decided to use manual alignment. In Figure 10, we can see that the error values are lower than in the case without alignment, because we reduced the error caused by small shifts between the real and simulated environments. ...
Context 4
... In both Figures 9 and 10, we can notice that the error bars for checkpoints 4 and 5 were not as affected by this operation, because these checkpoints were placed at the end of the labyrinth. ...
Context 5
... To add the proper amount of noise to the data, we based the amount on the measurement error specified for the Velodyne sensor. The results can be observed in Figures 11 and 12: creating a realistic and detailed setup makes the simulated data similar to the corresponding real data. ...
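As an illustration of this step, the sketch below injects Gaussian range noise into a simulated point cloud. The function name and the 0.03 m default are illustrative assumptions (in the spirit of the VLP-16's typical ±3 cm range accuracy), not values taken from the article:

```python
import random

def add_range_noise(points, sigma=0.03, seed=None):
    """Perturb each 3D point along its ray from the sensor origin,
    emulating Gaussian range-measurement noise (sigma in metres)."""
    rng = random.Random(seed)
    noisy = []
    for x, y, z in points:
        r = (x * x + y * y + z * z) ** 0.5  # measured range
        if r == 0.0:
            noisy.append((x, y, z))
            continue
        # Scale the point so only the range changes, not the ray direction.
        scale = (r + rng.gauss(0.0, sigma)) / r
        noisy.append((x * scale, y * scale, z * scale))
    return noisy
```

Perturbing along the ray (rather than adding independent noise per axis) matches how a LiDAR actually errs: the beam direction is known precisely, while the time-of-flight range is noisy.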
Context 6
... In Figure 11 we can observe the error results achieved using Track no. 1. This track was relatively simple (a straight line with five measuring points), and the error results of the real and simulated data obtained on it are similar (both for the simulated data with and without noise). ...
Context 7
... The error results of the real and simulated data obtained using this relatively simple track (a straight line with five measuring points) are similar, both with and without noise in the simulated data. In Figure 11a, we can notice that for the simulated data without added noise, it is the error on the real data that increases faster, while in Figure 11b it is the opposite: the error grows faster for the simulated data. However, this is only visible for the last two points. ...
Context 9
... Both point clouds are suited to evaluating a SLAM algorithm. Figure 11. SLAM accuracy metric for real data and simulated data on the first track: (a) without noise and with the rolling shutter effect, (b) with added noise and the rolling shutter effect; distance between the checkpoints: 1 m. ...
Context 10
... In Figure 12 we can observe the error results achieved using Track no. 2. In this case, the errors obtained from the simulated point clouds (both with and without noise) differ significantly from those obtained from the real data. Here, we examine a more complicated track. ...
Context 11
... Here we examine a more complicated track. In Figure 12a we can clearly see that the difference between the trends of the two errors increases with time. If we wanted to use a longer track, the generated point clouds might not be accurate enough to evaluate the performance of the SLAM algorithm. ...
Context 12
... If we wanted to use a longer track, the generated point clouds might not be accurate enough to evaluate the performance of the SLAM algorithm. Additionally, Figure 13 shows that the relationship between the accuracy metric and the distance from the starting point persists. Moreover, the difference between the errors obtained using simulated data with and without the rolling shutter effect increases significantly with time. ...
Context 13
... The difference between the errors obtained using simulated data with and without the rolling shutter effect increases significantly with time, as can be observed in Figure 14. It is not possible to match a linear trend well to the bars representing the data without the rolling shutter effect, as the error growth is close to exponential. ...
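To make the linear-versus-exponential distinction concrete, the sketch below (with hypothetical error values, not data from the article) fits a straight line both to the raw errors and to their logarithms; a markedly better fit in the log domain indicates near-exponential growth:

```python
import math

def lsq_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    sse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

# Hypothetical per-checkpoint errors that roughly double at each step.
checkpoints = [1, 2, 3, 4, 5]
errors = [0.02, 0.05, 0.11, 0.24, 0.50]

_, _, sse_linear = lsq_line(checkpoints, errors)
# Fitting log(error) linearly is equivalent to fitting an exponential trend.
_, _, sse_exp_log = lsq_line(checkpoints, [math.log(e) for e in errors])
```

With data like this, the log-domain residuals stay small while the linear fit systematically over- and under-shoots, which is exactly the pattern described for the bars without the rolling shutter effect.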
Context 14
... We decided to use the outdoor environment instead of the indoor one because simulating this effect matters most in outdoor applications, in which a vehicle can reach a significant speed. This can be seen in Figure 15. When we compare the obtained effect with the one observed in the real-life setup (Figure 3), we can clearly see that the simulation accurately mimics this behavior. ...
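A minimal sketch of why the rolling shutter effect grows with speed: the beam takes a finite time to complete one sweep, so each azimuth ray is cast from a slightly shifted sensor origin. The function name, the 10 Hz scan period and the azimuth count below are illustrative assumptions, not the article's implementation:

```python
def sweep_origins(speed_mps, scan_period_s=0.1, n_azimuths=8):
    """Sensor origin for each azimuth step of one sweep: the platform
    keeps moving while the beam rotates, so later rays start further
    along the direction of travel (here: +x)."""
    origins = []
    for i in range(n_azimuths):
        t = scan_period_s * i / n_azimuths  # time at which this ray fires
        origins.append((speed_mps * t, 0.0, 0.0))
    return origins
```

At 15 m/s with a 10 Hz sweep, the origin drifts well over a metre between the first and last ray of a single scan, which is why the effect is negligible indoors but clearly visible outdoors.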

Similar publications

Article
The advancement of technology has made it possible for modern cars to utilize an increasing number of processing systems, and many methods have recently been developed to detect traffic signs using image processing algorithms. This study describes an experiment to build a CNN model which can effectively classify traffic signs in real time using OpenCV...

Citations

... A point cloud image is a set of points in which signals reflected from objects are represented as three-dimensional points; such point clouds have been widely applied to implement digital twins of buildings or objects using laser scanners [16][17][18][19][20][21]. ...
Article
High-performance radar systems are becoming increasingly popular for accurately detecting obstacles in front of unmanned vehicles in fog, snow, rain, night and other scenarios. The use of these systems is gradually expanding to tasks such as indicating empty space and detecting the environment, rather than just detecting and tracking moving targets. In this paper, based on our high-resolution radar system, a three-dimensional point cloud image algorithm is developed and implemented. An axis translation and compensation algorithm is applied to minimize the point spreading caused by the different mounting positions and the alignment error of the Global Navigation Satellite System (GNSS) and the radar. After applying the algorithm, a point cloud image of a corner reflector target and a parked vehicle is created to directly compare the improved results. A recently developed radar system is mounted on the vehicle, and it collects data during actual road driving. Based on this, a three-dimensional point cloud image incorporating the axis translation and compensation algorithm is created. As a result, not only the curbstones of the road but also street trees and walls are well represented. In addition, this point cloud image is overlapped and aligned with an open-source web-browser (QtWeb)-based navigation map image to implement the imaging algorithm and thus determine the location of the vehicle. This application algorithm can be very useful for positioning unmanned vehicles in urban areas where GNSS signals cannot be received due to the large number of buildings. Furthermore, sensor fusion, in which the three-dimensional point cloud radar image appears on the camera image, is also implemented. The position alignment of the sensors is realized through intrinsic and extrinsic parameter optimization.
This high-performance radar application algorithm is expected to work well for unmanned ground or aerial vehicle route planning and avoidance maneuvers in emergencies regardless of weather conditions, as it can obtain detailed information on space and obstacles not only in front of the vehicle but also around it.
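The axis translation and compensation described above can be sketched as a rigid transform that moves each radar return into a common vehicle frame. The function name, the lever-arm and yaw-offset values below are hypothetical; the paper's actual algorithm may compensate additional effects:

```python
import math

def to_vehicle_frame(points, lever_arm, yaw_offset_rad):
    """Rotate each radar point by the mounting yaw misalignment, then
    translate by the radar-to-GNSS lever arm, so all sensors report
    in one common vehicle frame (reducing point spreading)."""
    c, s = math.cos(yaw_offset_rad), math.sin(yaw_offset_rad)
    out = []
    for x, y, z in points:
        # 2D yaw rotation about the vertical axis.
        xr = c * x - s * y
        yr = s * x + c * y
        dx, dy, dz = lever_arm
        out.append((xr + dx, yr + dy, z + dz))
    return out
```

Without this step, the same physical target observed from different sensor mounting positions lands at slightly different coordinates, which appears as a "smeared" cluster in the accumulated point cloud.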
... Nowadays, Simultaneous Localization and Mapping systems are considerably widespread, especially in autonomous vehicles, which makes sense since autonomy relies greatly on a system's ability to localize itself in an unknown environment [1]. SLAM enables a vehicle to build a map of its unknown environment and locate itself in it simultaneously. ...
Article
Several works have been carried out in the realm of RGB-D SLAM development, yet they have neither been thoroughly assessed nor adapted for outdoor vehicular contexts. This paper proposes an extension of HOOFR SLAM to an enhanced IR-D modality applied to an autonomous vehicle in an outdoor environment. We address the most prevalent camera issues in outdoor contexts: environments with an image-dominant overcast sky and the presence of dynamic objects. We used a depth-based filtering method to identify outlier points based on their depth value. The method is robust against outliers and also computationally inexpensive. For faster processing, we suggest optimization of the pose estimation block by replacing the RANSAC method used for essential matrix estimation with PROSAC. We assessed the algorithm using a self-collected IR-D dataset gathered by the SATIE laboratory instrumented vehicle using a PC and an embedded architecture. We compared the measurement results to those of the most advanced algorithms by assessing translational error and average processing time. The results revealed a significant reduction in localization errors and a significant gain in processing speed compared to the state-of-the-art stereo (HOOFR SLAM) and RGB-D algorithms (Orb-slam2, Rtab-map).
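The abstract does not spell out the exact filtering rule, but a depth-based outlier filter of the kind described could be as simple as the band-pass sketch below (the function name and thresholds are assumptions):

```python
def filter_by_depth(points_with_depth, d_min=0.5, d_max=40.0):
    """Keep only points whose depth lies in a plausible range.
    Overcast-sky pixels tend to yield invalid or implausibly large
    depth values, so a simple band-pass on depth removes them cheaply.
    Each point is (x, y, z, depth)."""
    return [p for p in points_with_depth if d_min <= p[3] <= d_max]
```

Such a filter is robust and computationally trivial (one comparison per point), which fits the paper's stated goal of being inexpensive enough for an embedded architecture.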
... Many research institutes and scientists from different countries are working in this direction. In [5], researchers use a LiDAR simulator to create realistic point clouds, which make it possible to tune SLAM parameters and deploy the algorithm in real time. In [6], a SLAM algorithm was used to obtain a two-dimensional map, and based on it an algorithm for designing a three-dimensional visualization is proposed. ...
Article
Exploration of an unknown environment is a fundamental problem in autonomous mobile robotics, which deals with exploring unknown areas while building a map of the environment. Usually a human creates the map of the environment in advance, and this map is then used by the robot for navigation while avoiding obstacles. Frontier-based exploration is the most common approach, in which the frontier is the boundary between open and unexplored areas. Exploration algorithms have many applications in fields such as space robotics, sensor deployment and defense robotics. Many frontier-based methods have been developed, such as the Wavefront Frontier Detector and Fast Frontier Detection, which reduce the time complexity of the original frontier-based exploration technique. The simultaneous localization and mapping algorithm makes it possible to operate in unknown terrain and to update the existing map, among other capabilities. This paper describes and implements an autonomous frontier-exploration strategy and presents simulation results for the simultaneous localization and mapping algorithm in the Gazebo simulation environment, as well as on the TurtleBot hardware platform using the Robot Operating System. The advantage of this algorithm is that the robot can explore large open spaces as well as small cluttered spaces.
... The first proposed method consists of estimating the features of the environment using heuristic methods and shaping the map using geometric principles. The Normal [10] is used to analyze the probability density of a point cloud and is then applied to identify the level of correlation between nearby points, assuming that each range measurement is associated with a Gaussian probability distribution [11]. In both cases, the computational efficiency is not sufficient for the large number of points needed for an adequate map reconstruction, so sampling-based methods speed up feature extraction and complement the construction of primitives in real time. ...
... Since robot position estimation systems in autonomous navigation scenarios are often occluded indoors by the blocking of satellite signals in enclosed structures or by interference from objects in the scene, it is necessary to resort to auxiliary mechanisms such as SLAM [25]-[27]. In this case, which is non-trivial for deep mining, SLAM has become a technology capable of estimating a robot's own position while incrementally building a map of the scene based on reading and fusing data, usually provided by various types of sensors and sensing systems such as inertial measurement units (IMU), cameras and LiDARs [11]-[14], [17]. For example, in [13], an IMU sensor is used to align LiDAR scans, which corrects the correspondence between maps and consequently improves the localization accuracy of a robotic vehicle with SLAM. ...
Article
This paper proposes several techniques for simultaneous localization and mapping of the environment, which allow mobile robots to self-reference themselves in a navigation environment with reduced accessibility by external means, such as the Global Positioning System (GPS). The methodology consists of implementing four unsupervised machine learning algorithms, using data sets generated from a cloud of range points delivered by LiDAR sensor measurements. The proposed approach identifies features from a navigability map, while an additional method based on the Extended Kalman Filter (EKF) finds the robot's position in conjunction with each of the proposed algorithms. The first proposed method consists of estimating the features of the environment using heuristic methods and shaping the map using geometric principles. The second method is based on K-Means to incorporate the uncertainty in the sensor measurement, while the third solution uses the Gaussian Mixture Model (GMM). The fourth method focuses on Density-Based Spatial Clustering (DBSCAN). An odometry error is induced in the robot to include uncertainty within the test environment, which propagates into the positioning readings. The results show that DBSCAN achieves a better execution time for the proposed localization system than the other methods. Additionally, the robot's localization is more accurate with this method, showing a 5% reduction of the error compared to the results obtained from the other proposed algorithms. Finally, with these results, it is expected that the consumption of robot resources can be reduced along with the reduction of localization error and automatic mapping.
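For readers unfamiliar with the fourth method, a minimal self-contained DBSCAN (a generic sketch, not the authors' implementation) illustrates how 2D range points separate into dense clusters and noise:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2D points: returns one label per point
    (0, 1, ... for clusters, -1 for noise)."""
    def neighbours(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps * eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1               # noise (may later become a border point)
            continue
        cluster += 1                     # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # former noise becomes a border point
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:       # j is also a core point: keep expanding
                queue.extend(nb)
    return labels
```

Density-based clustering needs no preset number of clusters and marks sparse returns as noise, which is why it suits raw LiDAR scans better than K-Means in this setting.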
... Extracting structural information as shapes or surfaces from an unordered set of 3D coordinates (point cloud) has been an important topic in computer vision [1]. It is a crucial part of many applications such as autonomous driving [2], scene understanding [3], reverse engineering of geometric models [4], quality control [5], simultaneous localization and mapping (SLAM) [6] and matching point clouds to CAD models [7]. Over the last decade, hardware developments have made the acquisition of those point clouds more affordable. ...
... The surface normal elements of a regular C^1 surface in R^3 are contained in a linear line element complex with coordinates (c, c̄, γ) if and only if the surface is part of an equiform kinematic surface. In that case, the uniform equiform motion has the velocity vector field given in Equation (6). ...
Article
Full-text available
The close relation between spatial kinematics and line geometry has been proven to be fruitful in surface detection and reconstruction. However, methods based on this approach are limited to simple geometric shapes that can be formulated as a linear subspace of line or line element space. The core of this approach is a principal component formulation to find a best-fit approximant to a possibly noisy or partial surface given as an unordered set of points, or point cloud. We expand on this by introducing the Gaussian process latent variable model, a probabilistic non-linear, non-parametric dimensionality reduction approach following the Bayesian paradigm. This allows us to find structure in a lower-dimensional latent space for the surfaces of interest. We show how this can be applied in surface approximation and unsupervised segmentation of the surfaces mentioned above and demonstrate its benefits on surfaces that deviate from these. Experiments are conducted on synthetic and real-world objects.
... Although Gazebo can undoubtedly be used for preliminary tests of robot localization and navigation, it cannot be used for a comprehensive evaluation of SLAM algorithm performance. A discussion of the usage of different simulation platforms for evaluating SLAM algorithm performance can be found in [12]. These platforms (CARLA [34], AirSim [35] and LiDARsim [36]) focus mainly on realistic scene generation and data labeling for the purpose of object recognition. ...
... Not taking sensor-specific phenomena and errors into account may (very likely) result in the inability to perform an objective evaluation in the simulation. Therefore, in our simulation we put great emphasis on the correct simulation of the data generated by 2D and 3D LiDARs (our LiDAR simulation is described in detail in [12], in which we describe the effects present in real LiDAR data, for example the rolling shutter effect, and comprehensively evaluate the accuracy of this simulation), the IMU and wheel encoders. ...
... In our case, we used a simulated environment introduced and verified in [12] to compare different hardware configurations of the Google Cartographer SLAM algorithm to be deployed in an actual robot used for decontamination (thus the possible placement of different sensors was limited). We examined eight different hardware configurations based on three 2D LiDARs, one 3D LiDAR, an IMU and wheel odometry in two experimental environments: a laboratory room and a hallway. ...
Article
One of the most challenging topics in robotics is simultaneous localization and mapping (SLAM) in indoor environments. Because Global Navigation Satellite Systems cannot be used successfully in such environments, different data sources are used for this purpose, among others light detection and ranging sensors (LiDARs), which have an advantage over numerous other technologies. Other embedded sensors can be used along with LiDARs to improve SLAM accuracy, e.g. the ones available in Inertial Measurement Units and wheel odometry sensors. Evaluating different SLAM algorithms and possible hardware configurations in real environments is time-consuming and expensive. In our study, we evaluate the accuracy of mapping and localization (based on Absolute Trajectory Error and Relative Pose Error). Our use case is a robot used for room decontamination. The results for a small room show that for our robot the best hardware configuration consists of three 2D LiDARs, an IMU and wheel odometry sensors. On the other hand, for long hallways, a configuration with one 3D LiDAR sensor and an IMU works better and is more stable. We also describe a general approach, together with tools and procedures, that can be used to find the best sensor setup in simulation.
... In contrast to other works, which focus mainly on visual perception algorithms, our simulator focuses mainly on tasks such as localization, mapping and path determination. In our previous works, we verified the accuracy of our LiDAR data generation procedure [8]. ...
... Their simulation requires an efficient mechanism for determining the distance to the nearest objects. In one of our previous articles [8], we described in detail the simulation procedure for LiDAR data. The basic LiDAR parameter is the number of channels (vertical resolution). ...
... Additionally, the rolling shutter effect simulation should be included. In [8], we described the impact of this effect on positioning/localization accuracy and error propagation over time. In Figure 3, we present an example scan of the real and simulated laboratory room using a Velodyne VLP-16 and its simulation (in [8], we included a table with the parameters of the real and simulated Velodyne VLP-16 LiDARs). ...
Article
Perception and vehicle control remain major challenges in the autonomous driving domain. To find a proper system configuration, thorough testing is needed. Recent advances in graphics and physics simulation allow researchers to build highly realistic simulations that can be used for testing in safety-critical domains and inaccessible environments. Despite the high complexity of urban environments, it is the non-urban areas that are more challenging. Nevertheless, the existing simulators focus mainly on urban driving. Therefore, in this work, we describe our approach to building a flexible real-time testing platform for unmanned ground vehicles for indoor and off-road environments. Our platform consists of our original simulator, the Robot Operating System (ROS), and a bridge between them. To enable compatibility and real-time communication with ROS, we generate data interchangeable with real-life readings and propose our original communication solution, UDP Bridge, which enables up to 9.5 times faster communication than the existing solution, ROS#. As a result, all of the autonomy algorithms can be run in real time directly in ROS, which is how we obtained our experimental results. We provide detailed descriptions of the components used to build our integrated platform.
... The first study on making lidar systems portable was carried out by Glennie (2009), with a laser scanning system mounted on a helicopter. Subsequently, with advances in SLAM (Simultaneous Localization and Mapping) algorithms, portable laser scanning systems began to evolve into mobile mapping systems, and by mounting lidar systems on a moving vehicle, an unmanned aerial vehicle or a backpack, three-dimensional maps of the environment could be produced on the move, within a short time and at high resolution (Mossmann and Stiller 2011, Nüchter et al. 2007, Sobczak et al. 2021). ...
Article
In parallel with technological developments, the application areas of mobile lidar systems are rapidly expanding today. Especially in enclosed areas where positioning with GNSS is not possible, mapping can be carried out quickly and with high accuracy thanks to the advantages provided by SLAM algorithms. In this study, three-dimensional models of a wooded area, an indoor area and an outdoor area were produced from measurements made with a mobile lidar system developed by the authors, and an accuracy analysis of the produced models was performed to investigate the accuracy of mobile lidar systems in cases where GNSS positioning is not possible. As a result of the tests, standard deviation values of ±2.1 cm, ±2.4 cm and ±3.0 cm were obtained with the developed mobile lidar system for the wooded, indoor and outdoor areas, respectively. Based on these results, it is foreseen that the system can be used in forest inventory studies and in architectural surveying and similar work in indoor and outdoor areas.
... LiDAR simulation aims to produce a realistic LiDAR point cloud by mimicking the physical process of its imaging [3,14,23,30,44,47,57,70,75]. Physics-based LiDAR simulation uses ray-casting methods to simulate the LiDAR. ...
Preprint
We present LiDARGen, a novel, effective, and controllable generative model that produces realistic LiDAR point cloud sensory readings. Our method leverages the powerful score-matching energy-based model and formulates the point cloud generation process as a stochastic denoising process in the equirectangular view. This model allows us to sample diverse and high-quality point cloud samples with guaranteed physical feasibility and controllability. We validate the effectiveness of our method on the challenging KITTI-360 and NuScenes datasets. The quantitative and qualitative results show that our approach produces more realistic samples than other generative models. Furthermore, LiDARGen can sample point clouds conditioned on inputs without retraining. We demonstrate that our proposed generative model could be directly used to densify LiDAR point clouds. Our code is available at: https://www.zyrianov.org/lidargen/
... The last one, also called an occupancy map, is the most common way for robots to describe a model of the environment. It divides the workspace into a series of grids, where each cell in the grid corresponds to a binary random variable describing the occupancy probability [8,20,40,41,42]. Hence, the map m is given by the product of the occupancy probabilities of each cell mi, each of which is associated with a certain point in three-dimensional Cartesian space [23,34]: ...
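The formula the sentence above introduces appears to have been lost in extraction; the standard cell-independence factorization of an occupancy grid that it refers to is (a reconstruction, not necessarily the cited article's exact notation):

```latex
p(m) = \prod_{i} p(m_i), \qquad m_i \in \{\text{free}, \text{occupied}\}
```

That is, the map posterior factors into independent per-cell occupancy probabilities, which is what makes grid mapping tractable.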
... There are many different SLAM techniques based on occupancy grids for map building in 2D or 3D space [10,22]. One possible representation of the 3D map as an occupancy grid is octrees, which, instead of the 2D quadtree representation presented directly in references [8,42], use an octree hierarchical data structure in which each volume, called a node, has eight child connections to inner nodes [25]. Currently, a rich number of occupancy-grid-based SLAM techniques can be found in the scientific literature [18,20,26,43]. ...
... The map-building approach applied in this paper is based on the robot's pose estimation presented in references [42,49], called the AMCL-EKF, which connects two separate paradigms: Kalman and particle filters. The pose is estimated by the extended Kalman filter using measurement data from LIDAR and IMU sensors and adaptive Monte-Carlo localization (AMCL). ...
Article
Simultaneous localization and mapping (SLAM) is a dual process responsible for the ability of a robotic vehicle to build a map of its surroundings and estimate its position on that map. This paper presents the novel concept of creating a 3D map based on adaptive Monte-Carlo localization (AMCL) and the extended Kalman filter (EKF). This approach is intended for inspection or rescue operations in closed or isolated areas where there is a risk to humans. The proposed solution uses particle filters together with data from on-board sensors to estimate the local position of the robot. Its global position is determined through the Rao–Blackwellized technique. The developed system was implemented on a wheeled mobile robot equipped with a sensing system consisting of a laser scanner (LIDAR) and an inertial measurement unit (IMU), and was tested in the real conditions of an underground mine. One of the contributions of this work is a low-complexity and low-cost solution for real-time 3D map creation. The conducted experimental trials confirmed that the three-dimensional mapping achieved high accuracy and proved useful for recognition and inspection tasks in an unknown industrial environment.