Figure - available from: International Journal of Intelligent Robotics and Applications
Block diagram of proposed loosely-coupled lidar-inertial odometry

Source publication
Article
Full-text available
Lidar-inertial-based simultaneous localization and mapping (SLAM) has been widely investigated in recent years. In this paper, a loosely-coupled lidar-inertial odometry and mapping method is developed for real-time robot state estimation. The proposed lidar-inertial odometry is a loosely-coupled, nonlinear optimization-based method, fusi...
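The abstract above is truncated; for context, a loosely-coupled, optimization-based fusion step is often posed as a small weighted least-squares problem over the current pose, with the lidar odometry estimate and the IMU-propagated estimate each contributing a residual. The following is a minimal sketch of that idea, not the paper's exact formulation; the 2D pose state, weights, and function names are assumptions for illustration.

```python
# Minimal sketch of a loosely-coupled fusion step (illustrative only).
# A lidar-odometry pose and an IMU-propagated pose, each with its own
# confidence, are fused by nonlinear least squares over a planar pose
# (x, y, yaw). Names and weights are assumptions, not the paper's.
import numpy as np
from scipy.optimize import least_squares

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def residuals(state, lidar_pose, imu_pose, w_lidar, w_imu):
    """Stack weighted pose residuals against both pose sources."""
    r_lidar = state - lidar_pose
    r_imu = state - imu_pose
    r_lidar[2] = wrap_angle(r_lidar[2])
    r_imu[2] = wrap_angle(r_imu[2])
    return np.concatenate([w_lidar * r_lidar, w_imu * r_imu])

# Example: lidar odometry and IMU propagation disagree slightly.
lidar_pose = np.array([1.00, 0.20, 0.10])   # x [m], y [m], yaw [rad]
imu_pose   = np.array([1.05, 0.18, 0.12])
w_lidar = np.array([10.0, 10.0, 20.0])      # lidar trusted more here
w_imu   = np.array([ 2.0,  2.0,  5.0])

sol = least_squares(residuals, x0=imu_pose.copy(),
                    args=(lidar_pose, imu_pose, w_lidar, w_imu))
print("fused pose:", sol.x)
```

Here the lidar terms are weighted more heavily, so the fused pose settles near the lidar estimate.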

Similar publications

Article
Full-text available
Accurate positioning of vehicles is a critical element of autonomous and connected vehicle systems. Most other studies have focused heavily on enhancing simultaneous localization and mapping (SLAM) methods, i.e., constructing or updating a map of an unknown environment while tracking an object within the map. This paper provides a method that can, in a...
Article
Full-text available
Facing the realistic demands of robots' application environments, the application of simultaneous localisation and mapping (SLAM) has gradually moved from static environments to complex dynamic environments, while traditional SLAM methods usually suffer pose estimation deviations caused by data-association errors due to the interference...
Preprint
Full-text available
Simultaneous localisation and mapping (SLAM) plays a vital role in autonomous robotics. Robotic platforms are often resource-constrained, and this limitation motivates resource-efficient SLAM implementations. While sparse visual SLAM algorithms offer good accuracy for modest hardware requirements, even these more scalable sparse approaches face limi...
Article
Full-text available
Focusing on complex environments in the wild, we designed a positioning system for agricultural environments. The system consists of multiple parts, such as a SLAM robot, a remote console, a mobile monitoring terminal, and a cloud monitoring platform, which together fulfill the requirements of a dynamic positioning system. The system takes into account the characterist...
Article
Full-text available
Existing algorithms are mainly based on laser or visual inertial odometry, which suffers from problems such as large errors in the particle proposal distribution, particle depletion, and long running times. A SLAM scheme based on multisensor fusion is proposed, which integrates multisource sensor data including lidar, camera, and IM...

Citations

... For example, if biases are assumed to be less than 0.5 m/s² and their estimates are initialized to 0, then static gravity calibration can be off by up to 3°. While many works naively assume a constant gravity vector [2], [4], [14], [15], there has been recent interest in constructing a more accurate representation by augmenting a system's state estimator to include gravity as a free variable. One popular approach for estimating the gravity vector is to use a probabilistic filter that combines accelerometer measurements and a priori knowledge of the local gravity vector. ...
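The 3° figure in the excerpt follows from simple geometry: an unmodeled bias of 0.5 m/s² perpendicular to gravity tilts the apparent gravity direction by about arcsin(0.5/9.81) ≈ 2.9°. The sketch below checks that number and adds a toy gravity-direction estimate from static accelerometer samples; the simple averaging, noise levels, and variable names are assumptions for illustration, not the probabilistic filter the excerpt refers to.

```python
# Illustrative check of the tilt error quoted above, plus a toy
# gravity-direction estimate from static accelerometer samples.
import numpy as np

G = 9.81  # local gravity magnitude [m/s^2]

# Worst case: a 0.5 m/s^2 unmodeled bias perpendicular to gravity
# tilts the apparent gravity direction by arcsin(0.5 / 9.81) ~= 2.9 deg.
tilt_deg = np.degrees(np.arcsin(0.5 / G))
print(f"worst-case static tilt: {tilt_deg:.2f} deg")

# Toy estimate: a stationary accelerometer measures specific force -g
# in the body frame, so averaging samples and negating recovers the
# gravity direction (up to noise and bias).
rng = np.random.default_rng(0)
true_g_dir = np.array([0.02, -0.01, -1.0])
true_g_dir /= np.linalg.norm(true_g_dir)
samples = -G * true_g_dir + 0.05 * rng.standard_normal((200, 3))
est_g_dir = -samples.mean(axis=0)
est_g_dir /= np.linalg.norm(est_g_dir)
err_deg = np.degrees(np.arccos(np.clip(est_g_dir @ true_g_dir, -1.0, 1.0)))
print(f"gravity-direction error: {err_deg:.2f} deg")
```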
... By assuming that the estimates d₁ and d₂ are always close to each other (≤ 5°), the Jacobians can be approximated quite well by (15), which is sufficient to make optimization work. ...
Preprint
Aligning a robot's trajectory or map to the inertial frame is a critical capability that is often difficult to do accurately, even though inertial measurement units (IMUs) can observe absolute roll and pitch with respect to gravity. Accelerometer biases and scale factor errors from the IMU's initial calibration are often the major source of inaccuracies when aligning the robot's odometry frame with the inertial frame, especially for low-grade IMUs. Practically, one would simultaneously estimate the true gravity vector, accelerometer biases, and scale factor to improve measurement quality, but these quantities are not observable unless the IMU is sufficiently excited. While several methods estimate accelerometer bias and gravity, they do not explicitly address the observability issue, nor do they estimate scale factor. We present a fixed-lag factor-graph-based estimator to address both of these issues. In addition to estimating accelerometer scale factor, our method mitigates limited observability by optimizing over a time window an order of magnitude larger than existing methods with significantly lower computational burden. The proposed method, which estimates accelerometer intrinsics and gravity separately from the other states, is enabled by a novel, velocity-agnostic measurement model for intrinsics and gravity, as well as a new method for gravity vector optimization on S². Accurate IMU state prediction, gravity alignment, and roll/pitch drift correction are experimentally demonstrated on public and self-collected datasets in diverse environments.
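The abstract mentions gravity-vector optimization on S²; a common way to respect the two-degree-of-freedom constraint is to perturb the unit vector in its tangent plane and project back onto the sphere after each update. The sketch below shows one such retraction as an illustration of the general idea, not necessarily the parameterization used in this preprint.

```python
# Minimal sketch of a two-DOF update of a unit gravity direction on S^2.
# The basis construction and retraction are a generic choice, not
# necessarily the parameterization used in the cited preprint.
import numpy as np

def tangent_basis(g_dir):
    """Return two orthonormal vectors spanning the tangent plane at g_dir."""
    helper = np.array([1.0, 0.0, 0.0])
    if abs(g_dir @ helper) > 0.9:          # avoid a near-parallel helper
        helper = np.array([0.0, 1.0, 0.0])
    b1 = np.cross(g_dir, helper)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(g_dir, b1)
    return b1, b2

def retract(g_dir, delta):
    """Apply a 2-vector tangent-plane update and project back onto S^2."""
    b1, b2 = tangent_basis(g_dir)
    g_new = g_dir + delta[0] * b1 + delta[1] * b2
    return g_new / np.linalg.norm(g_new)

# Example: nudge an initial "straight down" estimate by a small tangent step.
g0 = np.array([0.0, 0.0, -1.0])
g1 = retract(g0, np.array([0.01, -0.02]))
print(g1, np.linalg.norm(g1))  # still unit length
```

Parameterizing the update with two tangent coordinates avoids the over-parameterization, and the resulting rank deficiency, of optimizing all three components of a unit vector directly.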
Article
Significant achievements have been made in crowd detection and tracking thanks to advances in artificial intelligence for autonomous driving. However, image-based methods place strict requirements on video collection conditions, and a new generation of flexible fabrics has emerged as a potential sensing medium for perceiving context. In this paper, an intelligent fabric space enabled by multi-sensing sensors is established to track moving objects. We propose a behavior analysis pipeline comprising modules for data preparation, trajectory coupling, motion scenario segmentation, and motion pattern measurement, which captures crowd information at both the micro and macro level over the intelligent fabric space. After preprocessing the multi-sensing data, a coupling mechanism is formulated to fuse the video-based and fabric-based trajectories. An automatic motion scenario segmentation model then divides the surrounding scenario into main crowd, sub-crowd, and background according to motion behavior. Further, we define measurement metrics to analyze the motion patterns of the different crowds. Extensive experiments show that the proposed methods effectively fuse multiple trajectories and achieve crowd segmentation and motion description. This will greatly help autonomous vehicles and control systems perceive surrounding pedestrians and the environment to make precise driving decisions.
Article
Full-text available
Simultaneous Localization and Mapping (SLAM) is an essential feature in many applications of mobile vehicles. To address the problems of poor positioning accuracy, limited mapping scenarios, and unclear structural characteristics in indoor and outdoor SLAM, a new framework with tight coupling of dual lidar inertial odometry is proposed in this paper. Firstly, through external calibration and an adaptive timestamp synchronization algorithm, the horizontal and vertical lidar data are fused, which compensates for the narrow vertical field of view (FOV) of the lidar and makes the vertical-direction characteristics more complete in the mapping process. Secondly, the dual lidar data are tightly coupled with an Inertial Measurement Unit (IMU) to eliminate the motion distortion of the dual lidar odometry. Then, the distortion-corrected lidar odometry and the IMU pre-integration are used as constraints to establish a non-linear least-squares objective function. Joint optimization is performed to obtain the optimal IMU state, which is then used to predict the IMU state at the next time step. Finally, experimental results are presented to verify the effectiveness of the proposed method.
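As a concrete illustration of the dual-lidar fusion step described above, the sketch below matches a horizontal scan to the nearest vertical scan in time and merges the two clouds through a calibrated extrinsic. The time tolerance, frame convention, and extrinsic values are assumptions for illustration; the paper's adaptive timestamp synchronization and calibration procedure are not reproduced here.

```python
# Minimal sketch of fusing a horizontal and a vertical lidar scan:
# match scans by nearest timestamp, transform the vertical scan into
# the horizontal lidar frame with a calibrated extrinsic, and merge.
import numpy as np

def nearest_scan(t_query, scan_times, tol=0.05):
    """Return the index of the scan closest in time, or None if too far."""
    i = int(np.argmin(np.abs(scan_times - t_query)))
    return i if abs(scan_times[i] - t_query) <= tol else None

def merge_scans(pts_horiz, pts_vert, R_hv, t_hv):
    """Express the vertical-lidar points in the horizontal-lidar frame and stack."""
    pts_vert_in_h = pts_vert @ R_hv.T + t_hv
    return np.vstack([pts_horiz, pts_vert_in_h])

# Example with dummy data: one horizontal scan, a queue of vertical scans.
horiz_time = 10.002
vert_times = np.array([9.90, 9.95, 10.00, 10.05])
pts_horiz = np.random.rand(100, 3)
pts_vert_queue = [np.random.rand(80, 3) for _ in vert_times]

# Assumed extrinsic: vertical lidar rotated 90 deg about x, offset 0.1 m up.
R_hv = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
t_hv = np.array([0.0, 0.0, 0.1])

idx = nearest_scan(horiz_time, vert_times)
if idx is not None:
    cloud = merge_scans(pts_horiz, pts_vert_queue[idx], R_hv, t_hv)
    print("fused cloud size:", cloud.shape)
```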