Comparison of results with/without aiding from an IMU. A person holds the lidar and walks on a staircase. The black dot is the starting point. In a, the red curve is computed using orientation from the IMU and translation estimated by our method, the green curve relies on the optimization in our method only, and the blue curve uses the IMU data for preprocessing followed by our method. b is the map corresponding to the blue curve. In c, the upper and lower figures correspond to the blue and green curves, respectively, using the region labeled by the yellow rectangle in b. The edges in the upper figure are sharper, indicating higher accuracy of the map (Color figure online)

Source publication
Article
Full-text available
Here we propose a real-time method for low-drift odometry and mapping using range measurements from a 3D laser scanner moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation (especially without an external reference such as GPS) cause mis-registration of the resulting point cloud...

Similar publications

Article
Full-text available
Indoor Mobile Mapping Systems (IMMS) are attracting growing attention, especially when LiDAR sensors are considered, thanks to the possibility to obtain wide range and complete data in those areas where GNSS signal is not available. However, the drift error that accumulates during the acquisition is often inadequate in the absence of quality constr...

Citations

... We assume that the large SR is localized in the indoor environment. For robot localization using Lidar, we can use Lidar simultaneous localization and mapping (SLAM) in [24,25]. We further assume that the large SR can measure the relative position of the other robot. ...
... This paper assumes that the large SR is localized in the indoor environment. For robot localization using Lidar, we can use Lidar SLAM in [24,25]. As we do experiments using real robots, we will integrate robot localization for verifying the effectiveness of the proposed dual-robot system rigorously. ...
... In feature-based SLAM solutions, the extraction of reliable and repeatable features, also referred to as landmarks, is key to their success. Many SLAM solutions contain their own feature extractors such as the LiDAR Odometry and Mapping (LOAM) [4] algorithm, which became the second highest ranked SLAM algorithm in the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry/SLAM Evaluations [5]. More recently, optimized versions of the LOAM algorithm were published, such as the Advanced (A)-LOAM [6] and Fast (F)-LOAM [7] algorithms. ...
... A full explanation of the notation used here is given in the Nomenclature. The LoFE-LOAM algorithm can be divided into four components, as shown in Fig. 1 [4]. Figure 1 schematically demonstrates the steps used within the LoFE-LOAM algorithm. ...
... Then, according to [4], the FE further divides each C_k^(i) into n_segments smaller point cloud arrays (i.e. S_k^(1), S_k^(2), ...). ...
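The scan subdivision referred to in the excerpt above (splitting each per-ring point array C_k^(i) into several equally sized segments so that features are picked evenly across the scan) can be sketched in a few lines. This is a minimal illustration under our own naming (`ring_points`, `n_segments`), not the cited implementation:

```python
import numpy as np

def split_ring_into_segments(ring_points: np.ndarray, n_segments: int = 6):
    """Split one ordered scan ring (N x 3 array) into roughly equal segments.

    Mirrors the subdivision described above: each ring C_k^(i) is cut into
    n_segments smaller arrays S_k^(1), ..., S_k^(n_segments) so that edge and
    planar features can later be selected per segment rather than per ring.
    """
    # array_split tolerates ring sizes that are not divisible by n_segments.
    return np.array_split(ring_points, n_segments)

# Example: a synthetic ring of 1800 points ordered by azimuth.
ring = np.random.rand(1800, 3)
segments = split_ring_into_segments(ring, n_segments=6)
print([s.shape for s in segments])  # six (300, 3) arrays
```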
Article
Full-text available
The LiDAR Odometry and Mapping (LOAM) algorithm ranks in second place in the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry/SLAM Evaluations. It utilizes a feature extraction algorithm based on the evaluation of the curvature of points under test to produce estimated smooth and non-smooth regions within typically laser-based Point Cloud Data (PCD). This feature extractor (FE), however, does not take into account PCD spatial or detection uncertainty, which can result in the divergence of the LOAM algorithm. Therefore, this article proposes the use of the Curvature Scale Space (CSS) algorithm as a replacement for LOAM’s current feature extractor. It justifies the substitution based on the CSS algorithm’s similar computational complexity but improved feature detection repeatability. LOAM’s current feature extractor and the proposed CSS feature extractor are tested and compared with simulated and real data, including the KITTI odometry-laser data set. Additionally, a recent deep learning based LiDAR Odometry (LO) algorithm, the Convolutional Auto-Encoder (CAE)-LO algorithm, is also compared on this data set in terms of its computational speed and performance. Performance comparisons are made based on the Absolute Trajectory Error (ATE) and Cardinalized Optimal Linear Assignment (COLA) metrics. Based on these metrics, the comparisons show significant improvements of the LOAM algorithm with the CSS feature extractor compared with the benchmark versions.
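Both LOAM's original feature extractor and the CSS-based replacement discussed above rank points of an ordered scan by a local curvature score. The sketch below computes a LOAM-style smoothness value (the norm of the summed differences to neighbouring points, normalised by neighbourhood size and range); the window size and the Nx3-array input format are our assumptions, not details taken from either paper:

```python
import numpy as np

def smoothness(ring_points: np.ndarray, half_window: int = 5) -> np.ndarray:
    """LOAM-style curvature score for each point of an ordered scan ring.

    For point X_i, sum (X_i - X_j) over 2*half_window neighbours on the same
    ring and normalise. Large scores suggest edge (non-smooth) points, small
    scores suggest planar (smooth) points; border points are left as NaN.
    """
    n = len(ring_points)
    scores = np.full(n, np.nan)
    for i in range(half_window, n - half_window):
        neighbours = np.vstack((ring_points[i - half_window:i],
                                ring_points[i + 1:i + 1 + half_window]))
        diff = (ring_points[i] - neighbours).sum(axis=0)
        scores[i] = np.linalg.norm(diff) / (
            2 * half_window * np.linalg.norm(ring_points[i]) + 1e-9)
    return scores
```

Sorting by this score within each segment and taking the extremes is, in essence, how edge and planar candidates are selected.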
... We used raw data from measurement campaigns in the cities of Aachen, Düsseldorf, and Cologne to evaluate the proposed approach by benchmarking against a well-known lidar-centric SLAM approach [2,17]. This lidar-centric SLAM has been shown to perform best for vehicle localization tasks in large-scale environments and can be configured to fuse GNSS measurements [18], which presents a fusion mechanism equivalent to our loosely coupled gnssFGO. ...
... 3) Lidar Odometry: We adapt the feature extraction and matching methods from a feature-based lidar odometry [2,17] to obtain the relative motion increments between two laser keyframes. The coordinates of raw lidar points acquired in different timestamps are re-calibrated using the IMU measurement to the original timestamp of the lidar scan. ...
... The coordinates of raw lidar points acquired in different timestamps are re-calibrated using the IMU measurement to the original timestamp of the lidar scan. We classify the calibrated points into edge and planar features, F_t = {F_t^e, F_t^p}, based on the smoothness metric shown in [17,66]. In scan registration, all k features in F_{t+1} of the current scan are associated with pose priors T_{t+1,1:k}^w and used to find the best transformation ΔT_{t,t+1}^w from the last laser scan by solving an optimization problem that minimizes the distance between corresponding features in F_t using a Gauss-Newton algorithm. ...
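As a rough illustration of the registration step described above, the sketch below refines a 6-DOF increment by minimising point-to-plane distances for already-associated planar features. It uses SciPy's Levenberg-Marquardt solver (a damped Gauss-Newton iteration) instead of the hand-written solvers in the cited systems, and the feature association, the target normals, and all names are assumed to be given:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_plane_residuals(x, src_pts, tgt_pts, tgt_normals):
    """Residuals for associated planar features.

    x = [rx, ry, rz, tx, ty, tz] parameterises the pose increment as a
    rotation vector plus translation. Each residual is the signed distance
    of a transformed source point to the plane through its target point.
    """
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    transformed = src_pts @ R.T + x[3:]
    return np.einsum('ij,ij->i', transformed - tgt_pts, tgt_normals)

def refine_increment(src_pts, tgt_pts, tgt_normals, x0=np.zeros(6)):
    # 'lm' needs at least as many residuals (associated features) as unknowns (6).
    sol = least_squares(point_to_plane_residuals, x0, method='lm',
                        args=(src_pts, tgt_pts, tgt_normals))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```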
... Furthermore, in order to achieve reliable and accurate state estimates, Lidar is considered in conjunction with IMU and GPS. In [20], a loosely coupled method is proposed using Lidar and IMU data. The IMU is used to provide a motion prior for Lidar registration. ...
Article
Localization is a basic technology for intelligent vehicles (IVs) and is usually carried out by fusing multiple sensors. In order to achieve robust and accurate localization results, a novel adaptive multi-sensor fusion method is proposed. For each sensor, every measurement is identified by an indicator, which is used to recognize whether the measurement carries useful information to improve the localization performance. A robust localization model of the IV is then developed by using a variational Bayesian approach. Simulations and experiments using a real IV are used to demonstrate the potential and effectiveness of the proposed method.
... effectively execute mobile operations. Up to now, significant strides have been made in simultaneous localization and mapping techniques [1], and a variety of sensors, like LIDAR, depth cameras, monocular or binocular cameras, UWB, gyroscopes, etc., have been widely used as tools to assist robot localization, where vision-based localization methods [2][3][4] and radar-based localization methods [5,6] are two popular research directions. The approach of multi-sensor fusion localization [7,8] is a further exploration and study built upon the foundations of the aforementioned two methods. ...
... In recent years, there have been numerous methods for LIDAR-based pose estimation and mapping, and a lidar odometry and mapping (LOAM) scheme was proposed in [5] that combines a high-frequency ranging (odometry) algorithm with a low-frequency matching (mapping) algorithm. This framework heavily relies on geometric features in the environment for scan matching. ...
... Firstly, we constructed an objective residual function composed of geometric features and Gaussian distribution characteristics. Our approach differs from the contributions in the literature [5,6,8,14,15], where the proposed residual functions only contain descriptions of individual property features. In contrast, we pioneered the concept of utilizing tightly coupled geometric and distribution features for point cloud registration. ...
Article
Full-text available
We propose a novel point cloud alignment algorithm, namely PCR-DAT, for radar inertial ranging and localization. In environments with complex feature variations, the distribution trend of features is always changing, and the traditional alignment algorithms often fall into local optimums when dealing with regional point clouds with a combination of rich and sparse feature points, thus affecting the accuracy and stability of point cloud alignment. This paper addresses this issue by constructing a cost function composed of distance factors obtained from lidar measurements, normal distribution factors, and IMU pre-integration measurement factors. The core idea involves analyzing and classifying features in the target environment, defining different residual factors based on feature categories. Sparse features correspond to distance factors, while rich features correspond to distribution factors. Subsequently, a nonlinear optimization process is employed to estimate the robot’s pose. We evaluate the accuracy and robustness of the algorithm in various scenarios, including experiments on the KITTI dataset and field data collected during UGV movement. The results demonstrate that the DAT point cloud registration algorithm effectively addresses the pose prediction problem in the presence of feature degradation.
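The mixed cost described above (distance factors where features are sparse, distribution factors where they are rich) can be illustrated with a small NDT-style residual. The voxel size, the point-count test for "richness", and the regularisation below are illustrative choices of ours, not values from the paper, and the IMU pre-integration factor is omitted:

```python
import numpy as np
from collections import defaultdict

def build_voxel_model(points: np.ndarray, voxel: float = 1.0):
    """NDT-style map model: per-voxel point count, mean, and covariance."""
    buckets = defaultdict(list)
    for p in points:
        buckets[tuple(np.floor(p / voxel).astype(int))].append(p)
    model = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:  # need a few points for a usable covariance
            model[key] = (len(pts), pts.mean(axis=0),
                          np.cov(pts.T) + 1e-6 * np.eye(3))
    return model

def mixed_residual(p: np.ndarray, model, voxel: float = 1.0,
                   rich_threshold: int = 20) -> float:
    """Distribution factor (Mahalanobis distance to the local Gaussian) in
    feature-rich voxels, plain distance factor (Euclidean distance to the
    voxel mean) in sparse ones. Thresholds are illustrative only."""
    key = tuple(np.floor(p / voxel).astype(int))
    if key not in model:
        return 0.0  # no local support: contribute nothing
    count, mean, cov = model[key]
    d = p - mean
    if count >= rich_threshold:
        return float(np.sqrt(d @ np.linalg.solve(cov, d)))
    return float(np.linalg.norm(d))
```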
... For each data acquisition scene, based on the first data collected, the LiDAR Odometry and Mapping (LOAM) algorithm [46] is used to construct an a priori point cloud map. Because the secondarily collected LiDAR point cloud encompasses many dynamic targets, errors will arise when directly detecting dynamic targets using the scan-to-map approach [47] to register a single-frame point cloud and a point cloud map. ...
Article
Full-text available
To address the limitations of LiDAR dynamic target detection methods, which often require heuristic thresholding, indirect computational assistance, supplementary sensor data, or postdetection, we propose an innovative method based on multidimensional features. Using the differences between the positions and geometric structures of point cloud clusters scanned by the same target in adjacent frame point clouds, the motion states of the point cloud clusters are comprehensively evaluated. To enable the automatic precision pairing of point cloud clusters from adjacent frames of the same target, a double registration algorithm is proposed for point cloud cluster centroids. The iterative closest point (ICP) algorithm is employed for approximate interframe pose estimation during coarse registration. The random sample consensus (RANSAC) and four-parameter transformation algorithms are employed to obtain precise interframe pose relations during fine registration. These processes standardize the coordinate systems of adjacent point clouds and facilitate the association of point cloud clusters from the same target. Based on the paired point cloud cluster, a classification feature system is used to construct the XGBoost decision tree. To enhance the XGBoost training efficiency, a Spearman’s rank correlation coefficient-bidirectional search for a dimensionality reduction algorithm is proposed to expedite the optimal classification feature subset construction. After preliminary outcomes are generated by XGBoost, a double Boyer–Moore voting-sliding window algorithm is proposed to refine the final LiDAR dynamic target detection accuracy. To validate the efficacy and efficiency of our method in LiDAR dynamic target detection, an experimental platform is established. Real-world data are collected and pertinent experiments are designed. The experimental results illustrate the soundness of our method. The LiDAR dynamic target correct detection rate is 92.41%, the static target error detection rate is 1.43%, and the detection efficiency is 0.0299 s. Our method exhibits notable advantages over open-source comparative methods, achieving highly efficient and precise LiDAR dynamic target detection.
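To make the pairing and classification stages above concrete, the sketch below associates cluster centroids between two frames that are assumed to be already registered into a common coordinate system, and builds a per-pair feature vector for an XGBoost classifier. The gating radius, the three features, and the cluster dictionary layout are placeholders, not the paper's actual classification feature system:

```python
import numpy as np
from scipy.spatial import cKDTree
from xgboost import XGBClassifier

def pair_clusters(centroids_prev, centroids_curr, max_dist=1.5):
    """Associate each current-frame centroid with its nearest previous-frame
    centroid; both frames are assumed already registered into one frame."""
    tree = cKDTree(centroids_prev)
    dist, idx = tree.query(centroids_curr, distance_upper_bound=max_dist)
    return [(j, i) for i, (d, j) in enumerate(zip(dist, idx)) if np.isfinite(d)]

def pair_features(prev_clusters, curr_clusters, pairs):
    """Per-pair features: centroid displacement plus simple geometric change.
    Each cluster is a dict with 'centroid', 'points', and 'extent' entries."""
    feats = []
    for j, i in pairs:
        p, c = prev_clusters[j], curr_clusters[i]
        feats.append([
            np.linalg.norm(c['centroid'] - p['centroid']),  # displacement
            abs(len(c['points']) - len(p['points'])),        # point-count change
            np.linalg.norm(c['extent'] - p['extent']),       # bounding-box change
        ])
    return np.asarray(feats)

# Train on labelled pairs (1 = dynamic, 0 = static); hyperparameters illustrative.
clf = XGBClassifier(n_estimators=200, max_depth=4)
# clf.fit(X_train, y_train)
# y_pred = clf.predict(pair_features(prev_clusters, curr_clusters, pairs))
```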
... There are mainly laser navigation, visual navigation, and other methods for robot navigation, among which laser SLAM navigation has developed rapidly in recent years. The Cartographer algorithm proposed by Hess scans the map using a branch-and-bound algorithm to construct loop constraints and reduce the error of the laser odometry [2]; [3] utilizes the extraction of geometric features from point clouds and establishes constraints on feature points to solve pose problems. To improve the accuracy and robustness of robot positioning, sensors such as IMUs and wheel odometers are additionally used [4]. ...
Chapter
Full-text available
This paper presents a method for mobile robots to recognize and dock with shelves based on 2D LiDAR, which can autonomously identify shelves and perform high-precision docking. A geometric feature-based shelf leg recognition and multi-shelf model screening method is designed, which can quickly and accurately identify shelves autonomously. A fitting method for the centers of the shelf legs and the shelf center based on nonlinear least squares is proposed, and a real-time autonomous docking system is designed, which can autonomously distinguish shelf size types and perform precise docking. Experiments show that the proposed method can achieve a docking accuracy of ±1 cm.
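The abstract mentions fitting shelf-leg centers with nonlinear least squares. One minimal way to do this for a 2D LiDAR segment is a fixed-radius circle fit; the leg radius and the variable names below are our assumptions rather than details from the chapter:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_leg_center(points_2d: np.ndarray, leg_radius: float = 0.025):
    """Fit the center of one shelf leg from a 2D LiDAR point segment.

    Residual: distance of each scan point to a circle of fixed radius
    centred at (cx, cy). The leg radius is assumed known here; it could
    equally be estimated as a third unknown.
    """
    def residuals(c):
        return np.hypot(points_2d[:, 0] - c[0], points_2d[:, 1] - c[1]) - leg_radius

    c0 = points_2d.mean(axis=0)  # segment centroid as the initial guess
    return least_squares(residuals, c0).x  # estimated (cx, cy)
```

The fitted centers of the four legs can then be combined, for example by intersecting the diagonals, to estimate the shelf center and orientation used for docking.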
... Registration-based methods employ matching scan strategies to track the vehicle's movements across consecutive sequences. Examples of registration-based methods include the LOAM method [11], which distinguishes edges and flat planar features for motion tracking between consecutive sequences. However, the LOAM method can be time-consuming to execute due to noisy and irrelevant features in LiDAR measurements. ...
... To evaluate how well the model generalizes to new data, the KITTI dataset is used for testing, using long-term trajectories that were not seen during training. The results obtained by our model, which uses a single hidden layer with one neuron (of two coordinates) for position determination, were compared with those of other advanced methods such as LOAM [11], LOAM Velodyne [28], LeGO-LOAM [12], A-LOAM [13], SGLO [15], LO-Net and LO-Net-M [14]. ...
... These methods mainly rely on registration techniques to align the LiDAR point clouds and estimate the positions. Several state-of-the-art methods, namely LOAM [11], LOAM Velodyne [28], LeGO-LOAM [12], A-LOAM [13], SGLO [15], LO-Net, and LO-Net-M [14], are compared in this study. These methods have demonstrated promising results across different environments and scenarios. ...
... As research in the field of 3D LiDAR continues to advance [5,6], the exploration of SLAM technology based on this has gained significant momentum across various domains [7][8][9][10][11][12][13][14][15][16][17]. Consequently, numerous remarkable algorithms have been developed [18][19][20][21][22][23][24][25][26], with the most widely adopted ones being LOAM (LiDAR Odometry and Mapping). ... The presence of dynamic objects can make it inevitable that the scan data from 3D LiDAR includes representations of them [31][32][33][34][35], thereby significantly impeding the effectiveness of SLAM algorithm implementation, with three main detrimental effects as shown below: ...
Article
Full-text available
SLAM (Simultaneous Localization and Mapping) based on 3D LiDAR (Light Detection and Ranging) is an expanding field of research with numerous applications in the areas of autonomous driving, mobile robotics, and UAVs (Unmanned Aerial Vehicles). However, in most real-world scenarios, dynamic objects can negatively impact the accuracy and robustness of SLAM. In recent years, the challenge of achieving optimal SLAM performance in dynamic environments has led to the emergence of various research efforts, but there has been relatively little relevant review. This work delves into the development process and current state of SLAM based on 3D LiDAR in dynamic environments. After analyzing the necessity and importance of filtering dynamic objects in SLAM, this paper is organized along two dimensions. At the solution-oriented level, mainstream methods of filtering dynamic targets in 3D point clouds are introduced in detail, such as the ray-tracing-based approach, the visibility-based approach, the segmentation-based approach, and others. Then, at the problem-oriented level, this paper classifies dynamic objects and summarizes the corresponding processing strategies for different categories in the SLAM framework, such as online real-time filtering, post-processing after mapping, and long-term SLAM. Finally, the development trends and research directions of dynamic object filtering in SLAM based on 3D LiDAR are discussed and predicted.
... LiDAR Odometry and Mapping in Real-time (LOAM) is the most representative real-time 3D laser SLAM algorithm based on feature matching [26,27]. It has low computational cost and performs motion compensation, but it lacks loopback detection and back-end graph optimization; in complex orchard environments, the position error accumulates over time, leading to low positioning accuracy or even positioning failure. ...
Article
Full-text available
To solve the problem of cumulative errors when robots build maps in complex orchard environments due to their large scene size, similar features, and unstable motion, this study proposes a loopback registration algorithm based on the fusion of Faster Generalized Iterative Closest Point (Faster_GICP) and Normal Distributions Transform (NDT). First, the algorithm creates a K-Dimensional tree (KD-Tree) structure to eliminate the dynamic obstacle point clouds. Then, the method uses a two-step point filter to reduce the number of feature points of the current frame used for matching and the number of data used for optimization. It also calculates the matching degree of normal distribution probability by meshing the point cloud, and optimizes the precise registration using the Hessian matrix method. In the complex orchard environment with multiple loopback events, the root mean square error and standard deviation of the trajectory of the LeGO-LOAM-FN algorithm are 0.45 m and 0.26 m, which are 67% and 73% higher than those of the loopback registration algorithm in the Lightweight and Ground-Optimized LiDAR Odometry and Mapping on Variable Terrain (LeGO-LOAM), respectively. The study proves that this method effectively reduces the influence of the cumulative error, and provides technical support for intelligent operation in the orchard environment.
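The first step described above, removing dynamic obstacle points with a KD-Tree, can be sketched as a radius query against the detected dynamic-obstacle cloud; the rejection radius and the way the dynamic points are obtained are our assumptions, not the paper's:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_dynamic_points(scan: np.ndarray, dynamic_pts: np.ndarray,
                          radius: float = 0.3) -> np.ndarray:
    """Drop scan points lying within `radius` of any detected dynamic-obstacle
    point, using a KD-tree for the nearest-neighbour queries."""
    if len(dynamic_pts) == 0:
        return scan
    tree = cKDTree(dynamic_pts)
    dist, _ = tree.query(scan, k=1)
    return scan[dist > radius]
```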