Fig. 5
Trajectories of the robot using both sensor modalities and NDT-OM fuser for the third run. A small rotation error at the beginning shifted the MPR's trajectory; the rest of the trajectory is nonetheless correct, as shown by the low positional displacement of the MPR at each point.

Contexts in source publication

Context 1
... Gmapping usually estimates the trajectory slightly closer to the ground truth than NDT-OM fuser, but has larger displacements d_e and d_o; the displacement is mainly caused by errors and corrections along the trajectory. On the other hand, NDT-OM fuser trajectories using the MPR are farther from the ground truth than Gmapping's, as visible in Fig. 3, Fig. 5, and Fig. 6. Since the mean displacement in position d_e of NDT-OM fuser is lower than Gmapping's, we can deduce that the differences between its trajectories and the ground-truth trajectories are due to small rotation offsets over the trajectories. An example of the effect of such errors in rotation is visible as a slight bending of the top ...
Context 2
... specifically, Gmapping estimates trajectories closer to the ground truth than NDT-OM fuser. As seen in the first four rows of Table II, Gmapping's error between the final pose of the robot and the equivalent ground-truth pose is lower than NDT-OM fuser's, and, as visible in Fig. 5 and Fig. 6, its trajectories are closer to the ground truth. On the other hand, NDT-OM fuser's displacement in position is slightly lower than Gmapping's, with d_e = 0.028 ± 0.004 against 0.031 ± 0.005. One reason Gmapping estimates trajectories closer to the ground truth could be found in the mapping process: compared to NDT maps, occupancy ...
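For concreteness, the displacement metric d_e compared above is essentially a mean positional offset between timestamp-matched poses of the estimated and ground-truth trajectories. Below is a minimal numpy sketch of such a metric, assuming aligned, equally sampled 2-D trajectories; the function name and the synthetic data are illustrative, not the paper's:

```python
import numpy as np

def mean_displacement(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean Euclidean displacement in position between two
    timestamp-matched 2-D trajectories of shape (N, 2)."""
    assert estimated.shape == ground_truth.shape
    return float(np.linalg.norm(estimated - ground_truth, axis=1).mean())

# Illustrative use with synthetic trajectories (not the paper's data):
gt = np.column_stack([np.linspace(0.0, 10.0, 100), np.zeros(100)])
est = gt + np.random.default_rng(0).normal(scale=0.03, size=gt.shape)
print(f"d_e ~ {mean_displacement(est, gt):.3f}")
```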

Citations

... These sensors analyze the reflected beams from a target object to measure its distance [7]. LIDAR sensors are more accurate than RADAR sensors in SLAM (Simultaneous Localization and Mapping) algorithms and use cases [8]. RADAR and ultrasonic sensors have some notable drawbacks as well. ...
Article
Full-text available
Distance and size estimation of objects of interest is an essential task for many navigation and obstacle avoidance algorithms, mainly used in autonomous and robotic systems. Stereo vision systems, inspired by human visual perception, can infer depth from images as a cheap and accessible solution. On the one hand, accurately calibrating cameras is a challenging task and the main source of error in current stereo-vision-based distance and size estimation algorithms. On the other hand, the recent advancements in Deep Learning, alongside the fact that human eyes do not need calibration yet the human brain can estimate the distance and size of objects fairly accurately, were the main motivation behind this study. The proposed algorithm uses YOLOv8 as the object detector, and an MLP to learn the relation between distance, size, and disparity from data collected in a stereo vision system. In our experiments, conducted at distances ranging from 50 to 200 centimeters with calibrated and uncalibrated cameras, our proposed algorithm showcased accurate performance in both scenarios. It achieved distance measurements with an accuracy of up to 99.99% in select cases and maintained a mean accuracy of 98.15% for distance, 92.87% for width, and 93.92% for height estimations.
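For reference, the calibrated-stereo geometry that such a learned model approximates is the standard pinhole relation between depth and disparity (textbook material, not a formula taken from the paper):

```latex
% Depth Z from disparity d under the pinhole stereo model,
% where f is the focal length in pixels and B the camera baseline:
Z = \frac{f \, B}{d}
```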
... An autonomous driving assistance system provides the necessary safety features to the vehicle, which can reduce fatal accidents. Perception, localization & mapping, planning, and control constitute the four basic building blocks of autonomous vehicles, as shown in Figure 1. Perception is the process by which multiple sensors such as LiDAR, cameras, and radars are used to perceive the environment by acquiring image and point cloud data [1]. Maps are created using the data obtained from the sensors. ...
Article
Full-text available
Autonomous Navigation has become a topic of immense interest in robotics in recent years. Light Detection and Ranging (LiDAR) can perceive the environment in three dimensions (3D) by creating point cloud data that can be used to construct a 3D or high-definition (HD) map. Localization can be performed in real time on the 3D map created using a LiDAR sensor by matching the current point cloud data against the pre-built map, which is useful in GPS-denied areas. GPS data is inaccurate in indoor or obstructed environments, and achieving centimeter-level accuracy requires a costly Real-Time Kinematic (RTK) connection in GPS. However, LiDAR produces bulky data with hundreds of thousands of points in a frame, making it computationally expensive to process. The localization algorithm must be very fast to ensure the smooth driving of autonomous vehicles. To make the localization faster, the point cloud is downsampled and filtered before matching, and subsequently Newton optimization is applied using the normal distributions transform to accelerate the convergence of point cloud data on the map, achieving localization at 6 ms per frame, which is 16 times less than the data acquisition period of LiDAR at 10 Hz (100 ms per frame). The performance of the optimized localization is also evaluated on the KITTI odometry benchmark dataset. With the same localization accuracy, the localization process is made five times faster. LiDAR-map-based autonomous driving on an electric vehicle is tested in real time at the TiHAN testbed on the IIT Hyderabad campus. The complete system runs on the Robot Operating System (ROS). The code will be released at https://github.com/abhishekt711/Localization-Nav.
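One concrete step of such a pipeline is voxel-grid downsampling, which reduces a bulky cloud to one representative point per occupied voxel before matching. A minimal numpy sketch under that assumption; the function name and parameters are illustrative, not taken from the released code:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace all points falling in the same voxel by their centroid;
    `points` has shape (N, 3)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Illustrative: 100k random points collapse to at most 20^3 voxel centroids.
cloud = np.random.default_rng(1).uniform(-5.0, 5.0, size=(100_000, 3))
print(voxel_downsample(cloud, voxel_size=0.5).shape)
```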
... Radar sensors have proven useful for mobile robotics, as they are particularly robust to bad weather conditions or polluted air [1], [2]. They can penetrate certain types of materials that lidar sensors cannot, which makes them attractive to use for indoor robotics as well. ...
Preprint
Full-text available
RadaRays allows for the accurate modeling and simulation of rotating FMCW radar sensors in complex environments, including the simulation of reflection, refraction, and scattering of radar waves. Our software is able to handle large numbers of objects and materials, making it suitable for use in a variety of mobile robotics applications. We demonstrate the effectiveness of RadaRays through a series of experiments and show that it can more accurately reproduce the behavior of FMCW radar sensors in a variety of environments, compared to the ray-casting-based lidar-like simulations that are commonly used in simulators for autonomous driving such as CARLA. Our experiments additionally serve as a valuable reference point for researchers to evaluate their own radar simulations. By using RadaRays, developers can significantly reduce the time and cost associated with prototyping and testing FMCW radar-based algorithms. We also provide a Gazebo plugin that makes our work accessible to the mobile robotics community.
... MODERN high-level systems for autonomous driving are based on highly precise environmental recognition as well as accurate speed and position information, which can be estimated very robustly with lidar sensors, cameras or radar sensors, respectively [1], [2]. The focus of research in this area has changed only in recent years from the use of a single sensor system to the use of multiple sensors working cooperatively, especially in the field of radar sensors [3], [4]. ...
... This transfer only works if at least N = 2 radar sensors are used; otherwise the system of equations is underdetermined. According to (1), the robustness of the partitioning of sensor speeds into vehicle speeds is proportional to the distance between sensors. Since v_y = 0 holds for the vehicle velocity in the y-direction, only the difference of the y_n-components is significant. ...
... Similar y-components lead to a broad minimum of the function g according to Fig. 7c, which leads to less accurate absolute orientation estimates. As soon as the sensor S_2 is positioned near the rotation point [0, 0] of the vehicle, the orientation of both sensors can be estimated much better, since the yaw rate with respect to the sensor S_2 is decoupled according to (1). Furthermore, it can be seen that the estimation yields the best result as soon as both sensors are oriented in different directions with an orientation difference of φ_2 = ±90°. ...
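Equation (1) referenced in these excerpts is not reproduced on this page; presumably it is the standard planar rigid-body velocity relation between a sensor mounted at (x_n, y_n) in the vehicle frame and the vehicle's own motion. In our notation, stated as an assumption:

```latex
% Velocity at sensor n, mounted at (x_n, y_n), given the vehicle
% velocity (v_x, v_y) and yaw rate \omega (standard kinematics):
\begin{pmatrix} v_{x,n} \\ v_{y,n} \end{pmatrix} =
\begin{pmatrix} v_x \\ v_y \end{pmatrix} +
\omega \begin{pmatrix} -y_n \\ x_n \end{pmatrix}
```

Under this relation, with v_y = 0, two sensors constrain (v_x, ω) through the difference of their y_n terms, which is consistent with the excerpt's point that well-separated y_n-components make the partitioning more robust.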
Article
Full-text available
Radar sensor networks are today widely used in the field of autonomous driving and for generating high-precision images of the environment. The accuracy of the environmental representation depends to a large extent on accurate knowledge of the sensors' mounting orientations. Both the relative orientation of the sensors to each other and the relative sensor orientation with respect to the vehicle coordinate system are determining factors. For the first time, the orientation estimation of the radar sensors of a network is possible exclusively on the basis of radar target lists, without additional localization and orientation devices such as an IMU or GNSS. In this work, two algorithms for determining the orientation of incoherently networked radar sensors with respect to the vehicle coordinate system and with respect to each other are derived and characterized. With the presented algorithms, orientation accuracies of up to 0.25° are achieved. Furthermore, the algorithms do not impose any requirements on the positioning or the orientation of the radar sensors, such as overlapping fields of view (FOVs) or the detection of identical targets. The presented algorithms are applicable to arbitrary driving trajectories as well as to point targets and extended targets, which enables their use in regular road traffic.
... A variety of sensor systems can be used for this purpose, each with its own strengths and weaknesses. LIDAR systems and cameras offer high angular resolution [1], but they lack robustness in adverse weather conditions [2]. Radar systems, on the other hand, have lower angular resolution but are more robust to weather conditions. ...
Article
Full-text available
Autonomous driving technology has made remarkable progress in recent years, revolutionizing transportation systems and paving the way for safer and more efficient journeys. One of the critical challenges in developing fully autonomous vehicles is accurate perception of the surrounding environment. Radar sensor networks provide a capability for robust environmental detection. It has become apparent that the principle of synthetic aperture radar (SAR) can be employed not only in the field of earth observation but also, increasingly, in the field of autonomous driving. With the help of radar sensors mounted on vehicles, huge synthetic apertures can be created and thus a high angular resolution is achieved, which ultimately allows detailed images to be obtained. Increasing image quality, however, also increases the demands on position accuracy and thus on the localization of the vehicle in the map. Since relative localization accuracies in the millimeter range over long trajectories cannot be achieved with conventional Global Navigation Satellite Systems (GNSS), so-called simultaneous localization and mapping (SLAM) algorithms are often employed. This paper presents a purely radar-based SLAM algorithm that allows high-resolution SAR processing in the automotive frequency band at 77 GHz. The presented algorithm is evaluated by measurements for trajectories with a length of up to 500 m and a measurement duration of more than two minutes.
... Similarly, Hong et al. [15] extract peaks exceeding one standard deviation above the mean intensity per azimuth. Kung et al. [33] and Mielle et al. [47] keep all points exceeding a noise threshold. However, a fixed noise floor with no additional restrictions requires prior knowledge of noise level and does not mitigate multipath reflections. ...
... Recently, a combination of CFAR and a fixed threshold [49] noticeably improved the odometry estimation error; however, prior information about the noise level is required. Similar to most other learning-free methods [16], [17], [33], [47], our filter assumes that multipath reflections and speckle noise are observed with lower intensity than real landmarks. ...
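The fixed-noise-floor filtering discussed in these excerpts amounts to one global comparison per scan. A minimal numpy sketch of such a filter on a polar power scan; shapes, names, and the threshold value are illustrative:

```python
import numpy as np

def fixed_threshold_filter(power: np.ndarray, noise_floor: float) -> np.ndarray:
    """Boolean detection mask: keep every range bin whose return power
    exceeds a fixed, scene-independent noise floor."""
    return power > noise_floor

# Illustrative synthetic polar scan: 400 azimuths x 1000 range bins.
rng = np.random.default_rng(2)
scan = rng.exponential(scale=1.0, size=(400, 1000))  # background noise
scan[100, 250] += 30.0                               # one strong landmark
mask = fixed_threshold_filter(scan, noise_floor=10.0)
print(mask.sum(), "detections; landmark kept:", bool(mask[100, 250]))
```

The drawback named in the excerpts is visible here: noise_floor must be chosen with prior knowledge of the noise level, and nothing in the rule suppresses strong multipath returns.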
Preprint
Full-text available
This paper presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments (outdoors, from urban to woodland, and indoors in warehouses and mines) without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach CFEAR, we present an in-depth investigation on a wider range of data sets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves our state-of-the-art by 38%, thus, surprisingly, outperforming radar SLAM and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
... If we compare two particular RADAR and LiDAR sensors, the MPR (RADAR) and the Velodyne VLP-16 (LiDAR), the linear range of the MPR is up to 20 m, whereas the Velodyne VLP-16 LiDAR can measure up to 40 m. Moreover, the angular resolution of LiDAR and RADAR is 0.4° and 1.8°, respectively [77]. ...
Article
Full-text available
Applications of mobile robots are continuously gaining importance in numerous areas such as agriculture, surveillance, defense, and planetary exploration, to name a few. Accurate navigation of a mobile robot is highly significant for its uninterrupted operation. Simultaneous localization and mapping (SLAM) is one of the widely used techniques in mobile robots for localization and navigation. SLAM consists of front-end and back-end processes, wherein the front-end includes the SLAM sensors. These sensors play a significant role in acquiring accurate environmental information for further processing and mapping. Therefore, understanding the operational limits of the available SLAM sensors and the techniques for collecting data from single or multiple sensors is noteworthy. In this article, a detailed literature review of widely used SLAM sensors such as acoustic sensors, RADAR, cameras, Light Detection and Ranging (LiDAR), and RGB-D is provided. The performance of SLAM sensors is compared using the analytic hierarchy process (AHP) based on various key indicators such as accuracy, range, cost, working environment, and computational cost.
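The analytic hierarchy process (AHP) named in this abstract derives criterion weights from a pairwise-comparison matrix via its principal eigenvector. A minimal numpy sketch; the 3-criteria matrix below is invented for illustration and is not from the article:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Priority weights of an AHP pairwise-comparison matrix:
    the principal eigenvector, normalized to sum to one."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Hypothetical comparison of three criteria (accuracy, range, cost):
M = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
print(ahp_weights(M).round(3))
```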
Article
This article presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments—outdoors, from urban to woodland, and indoors in warehouses and mines—without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach conservative filtering for efficient and accurate radar odometry (CFEAR), we present an in-depth investigation on a wider range of datasets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves our state-of-the-art by 38%, thus, surprisingly, outperforming radar simultaneous localization and mapping (SLAM) and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
... Another method is to simply set a fixed noise/signal threshold [19]. However, doing so comes with the risk of removing important information, since the background noise level varies greatly in different settings. ...
... Mielle et al. [19] demonstrated radar odometry with off-the-shelf frameworks for range-based mapping in an indoor environment, using a fixed-threshold filtering method. However, the validation was done in a single, small-scale ... [Fig. 2: Typical power/range plot for a single azimuth reading; legend: scaled power measurements, CFAR threshold, z_min threshold, k = 12 detection, CFAR detection.]
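For contrast with the fixed z_min threshold in the recovered figure legend, a cell-averaging CFAR detector adapts its threshold to locally estimated noise. A minimal numpy sketch over a single azimuth; window sizes and the scale factor are illustrative:

```python
import numpy as np

def ca_cfar(power: np.ndarray, train: int = 16, guard: int = 4,
            scale: float = 5.0) -> np.ndarray:
    """Cell-averaging CFAR along one azimuth: a range bin is a detection
    when it exceeds `scale` times the mean of its training cells."""
    mask = np.zeros(power.size, dtype=bool)
    for i in range(train + guard, power.size - train - guard):
        # Training cells on both sides of bin i, excluding the guard cells.
        window = np.r_[power[i - train - guard:i - guard],
                       power[i + guard + 1:i + guard + train + 1]]
        mask[i] = power[i] > scale * window.mean()
    return mask

rng = np.random.default_rng(3)
azimuth = rng.exponential(1.0, 1000)  # noise-only background
azimuth[500] += 40.0                  # one true return
print("bin 500 detected:", bool(ca_cfar(azimuth)[500]))
```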
Conference Paper
Full-text available
This paper presents an accurate, highly efficient, and learning-free method for large-scale radar odometry estimation. By using a simple filtering technique that keeps the strongest returns, we produce a clean radar data representation and reconstruct surface normals for efficient and accurate scan matching. Registration is carried out by minimizing a point-to-line metric, and robustness to outliers is achieved using a Huber loss. Drift is additionally reduced by jointly registering the latest scan against a history of keyframes. We found that our odometry pipeline generalizes well to different sensor models and datasets without changing a single parameter. We evaluate our method in three widely different environments and demonstrate an improvement over the spatially cross-validated state of the art, with an overall translation error of 1.76% in a public urban radar odometry benchmark, running merely on a single laptop CPU thread at 55 Hz.
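A minimal sketch of the point-to-line metric with a Huber loss described in this abstract, assuming 2-D points, per-point line anchors, and unit line normals; names and shapes are illustrative, and this is not the CFEAR implementation:

```python
import numpy as np

def huber(r: np.ndarray, delta: float = 0.1) -> np.ndarray:
    """Huber loss: quadratic near zero, linear for large residuals."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def point_to_line_cost(points: np.ndarray, anchors: np.ndarray,
                       normals: np.ndarray) -> float:
    """Sum of Huber-weighted point-to-line distances; each residual is
    the projection of (point - anchor) onto the line's unit normal."""
    residuals = np.einsum("ij,ij->i", points - anchors, normals)
    return float(huber(residuals).sum())

# Two points registered against a horizontal line y = 0 (illustrative):
pts = np.array([[1.0, 0.10], [2.0, -0.05]])
anc = np.array([[1.0, 0.00], [2.0, 0.00]])
nrm = np.array([[0.0, 1.00], [0.0, 1.00]])
print(point_to_line_cost(pts, anc, nrm))
```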
... Recently, radars, and especially spinning FMCW radars, have become compact and accurate, gained popularity, and demonstrated more resilient localisation and mapping [1]. This makes them suitable for applications in harsh environments, e.g., underground mines [2] and fire-fighting [3] tasks. Unfortunately, radars suffer from high levels of noise and clutter from multipath reflections. ...
... Some authors have used simple fixed-level thresholding instead of an adaptive method like CFAR [1], [3], [13]; however, selecting a suitable threshold value is not trivial. For example, Hong [1] extracted only peaks greater than one standard deviation above the mean intensity per azimuth. ...
Preprint
Full-text available
This paper presents a new detector for filtering noise from true detections in radar data, which improves the state of the art in radar odometry. Scanning Frequency-Modulated Continuous Wave (FMCW) radars can be useful for localization and mapping in low visibility, but return a lot of noise compared to (the more commonly used) lidar, which makes the detection task more challenging. Our Bounded False-Alarm Rate (BFAR) detector differs from the classical Constant False-Alarm Rate (CFAR) detector in that it applies an affine transformation to the estimated noise level, after which the parameters that minimize the estimation error can be learned. BFAR is an optimized combination of CFAR and fixed-level thresholding. Only a single parameter needs to be learned from a training dataset. We apply BFAR to the use case of radar odometry, and adapt a state-of-the-art odometry pipeline (CFEAR), replacing its original conservative filtering with BFAR. In this way, we reduce the state-of-the-art translation/rotation odometry errors from 1.76%/0.5°/100 m to 1.55%/0.46°/100 m; an improvement of 12.5%.
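The affine transformation at the core of BFAR can be read as threshold = a * noise_estimate + b: with b = 0 it reduces to CFAR-style scaling, and with a = 0 to fixed-level thresholding. A minimal numpy sketch; the parameter values and the moving-average noise estimate are illustrative, not the learned ones:

```python
import numpy as np

def bfar_detect(power: np.ndarray, noise_estimate: np.ndarray,
                a: float = 2.0, b: float = 2.0) -> np.ndarray:
    """BFAR-style detection: compare each range bin against an affine
    transformation of its locally estimated noise level.
    b = 0 recovers CFAR scaling; a = 0 recovers a fixed threshold."""
    return power > a * noise_estimate + b

rng = np.random.default_rng(4)
power = rng.exponential(1.0, 1000)  # noise-only background
power[300] += 25.0                  # one true return
# Crude local noise estimate via a moving average (illustrative only).
noise = np.convolve(power, np.ones(32) / 32.0, mode="same")
detections = bfar_detect(power, noise)
print("bin 300 detected:", bool(detections[300]))
```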