Figure - available from: Intelligent Service Robotics
Main components and basic flowchart of a SLAM system based on visual odometry


Source publication
Article
Full-text available
Robust and accurate simultaneous localization and mapping (SLAM) in working scenarios is an essential competence for performing mobile robotic tasks autonomously. Plenty of research indicates that extracting point features from RGB-D data while simultaneously taking into account both the images and the depth data increases the robustness and precision...

Similar publications

Article
Full-text available
Underwater images typically suffer from sparser feature-point information and more redundant information due to harsh imaging conditions. To address these degradations, we propose the VINS-MONO algorithm to enhance the quality of underwater images. Specifically, we first used the FAST feature point extraction algorithm to improve the extracti...
Conference Paper
Full-text available
3D laser scanning has been widely employed to support the documentation of Cultural Heritage. Different techniques are used, such as Terrestrial Laser Scanning (TLS), Simultaneous Localization and Mapping (SLAM), and Light Detection and Ranging (LiDAR). This article presents a Systematic Literature Review (SLR) that seeks to map, evalua...
Article
Full-text available
Simultaneous Localization and Mapping (SLAM) is the process by which a mobile robot carrying specific sensors builds a map of the environment while simultaneously using this map to estimate its pose. SLAM has proven its value and is currently a hot topic. However, challenges remain: when the mobile robot stops in the process of motion and...
Article
Full-text available
With the aim of improving the positioning accuracy of the monocular visual-inertial simultaneous localization and mapping (VI-SLAM) system, an improved initialization method with faster convergence is proposed. The approach is divided into three parts: first, in the initial stage, the pure vision measurement model of ORB-SLAM is employed to mak...

Citations

... In conjunction with the aforementioned hardware, the navigation system integrates simultaneous localization and mapping (SLAM) technology to attain accurate positioning within localized areas [4]. A standard SLAM positioning system comprises two primary components: the front-end [5], the sensing hardware and software that characterizes environmental features and computes the pose of the intelligent agent; and the back-end [6], the software that estimates positioning errors and optimizes the navigation algorithms. Together, these components enable precise positioning of intelligent agents. ...
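The front-end/back-end split described above can be illustrated with a toy example: the front-end supplies relative-pose measurements (odometry and loop closures), and the back-end refines the pose estimates to be maximally consistent with all of them. The sketch below is a minimal 1-D pose-graph back-end using plain gradient descent; the function name, values, and formulation are illustrative assumptions, not taken from any cited system.

```python
def optimize_pose_graph(constraints, n_poses, iters=2000, lr=0.01):
    """Toy 1-D back-end: least-squares refinement of scalar poses.

    `constraints` is a list of (i, j, meas) tuples meaning the front-end
    measured pose[j] - pose[i] = meas (odometry or a loop closure).
    Pose 0 is anchored at the origin to fix the gauge freedom.
    """
    x = [0.0] * n_poses
    for _ in range(iters):
        grad = [0.0] * n_poses
        for i, j, meas in constraints:
            err = (x[j] - x[i]) - meas   # residual of one constraint
            grad[j] += err
            grad[i] -= err
        for k in range(1, n_poses):      # keep x[0] fixed at the origin
            x[k] -= lr * grad[k]
    return x


# Three odometry steps of 1.0 m plus a loop closure saying the total
# travel was only 2.7 m: the optimizer spreads the discrepancy over
# all poses instead of letting the drift accumulate at the end.
poses = optimize_pose_graph(
    [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.7)], 4)
```

The final pose lands between the raw odometry total (3.0) and the loop-closure measurement (2.7), which is exactly the error-balancing role the excerpt assigns to the back-end.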
Article
Full-text available
In this study, a cooperative navigation algorithm centered on Factor Graph Optimization - Simultaneous Localization and Mapping (FGO-SLAM) is presented for an air-ground multi-agent system. The algorithm prioritizes the control of error statuses during the position and attitude estimation procedure throughout the entire back-end optimization process. In the conventional Extended Kalman Filtering (EKF) algorithm, periodic cumulative errors may arise, introducing uncertainty to the estimation process. The application of the factor graph optimization algorithm not only mitigates deviation but also stabilizes errors, thereby eliminating the accumulation of periodic errors. In comparison to the practical EKF-SLAM, FGO-SLAM serves as a semi-offline optimization system that leverages key frames to minimize computational load. During multi-agent simulations, when two or more agents have overlapping field views, landmark data is merged, enhancing the optimization effectiveness. Through simulation experiments, the proposed algorithm demonstrates a 40% reduction in position error and a 41% reduction in attitude error, affirming the efficacy of FGO-SLAM for cooperative navigation.
... B. Fang's research group proposed a dynamic-scene SLAM algorithm based on bounding boxes and depth continuity. The algorithm uses depth bounding boxes to fill in pixels for random search and eliminates the influence of moving targets through dynamic feature filtering; experiments show that its positioning accuracy and real-time performance in complex dynamic scenes meet the design expectations [7]. F. Hu et al. proposed an improved ORB-SLAM front-end tracking algorithm that uses a uniform-velocity model to track valid frames and adjacent frames and to match similar frames; experimental data show that this method increases the number of valid tracked frames and cuts the computational cost roughly in half [8]. J. Dong and his team proposed an improved RGB-D SLAM scheme that uses ORB for feature-point extraction and descriptor calculation and matches the current frame (CF) against the map; their results show an average 9% reduction in root-mean-square (RMS) error and improved indexing of point-cloud images [9]. ...
Article
Full-text available
Traditional wheeled-robot vision algorithms suffer from tracking failures in low-texture scenes. This study therefore proposes a vision improvement algorithm for mobile robots based on multi-feature fusion. The algorithm introduces line and surface features and the Manhattan Frame on top of traditional algorithms, and further proposes an improved algorithm based on multi-sensor fusion to increase tracking accuracy. Experiments show that the improved multi-feature-fusion vision algorithm achieves an average position root-mean-square deviation of 0.02 across nine data packets of the TUM dataset, versus 0.016 for the packets that the traditional wheeled-robot vision algorithm tracked successfully; the improved algorithm raises average accuracy by 11.11%, which is 31.03% higher than the average accuracy of the Manhattan wheeled-robot vision algorithm. Compared to the multi-feature-fusion vision improvement algorithm for mobile robots, the accuracy of the closed-loop-detection-based multi-sensor improvement algorithm increased by 0.655% and 10.47%, respectively. The results indicate that the improved algorithm can increase the accuracy of mobile robot tracking, thereby expanding its application range.
... Simultaneous localization and mapping (SLAM) is a key technology for achieving autonomous positioning and navigation of smart devices without prior information [1] and is widely used in autonomous driving, augmented reality, robots, and drones. Visual SLAM (VSLAM) is mainly divided into the feature method [2]-[3] and the direct method [4]-[5]. The feature method extracts features from each image frame [6], recovers map points (also called landmarks) according to the camera projection model, and then constructs 3D geometric relationships to estimate the camera pose using probabilistic statistics. ...
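The feature-method pipeline sketched in this excerpt (extract features, recover landmarks, estimate the pose that best explains them) ultimately rests on the pinhole projection model. Below is a minimal sketch of that model; the intrinsics (f, cx, cy), the translation-only pose, and the function names are illustrative assumptions. A real system would minimize the reprojection error over the full 6-DoF camera pose.

```python
def project(point, cam_t, f=500.0, cx=320.0, cy=240.0):
    """Project a 3-D world point into pixel coordinates for a camera
    translated by cam_t (rotation omitted to keep the sketch short)."""
    x, y, z = (p - t for p, t in zip(point, cam_t))
    return (f * x / z + cx, f * y / z + cy)


def reprojection_error(landmarks, observations, cam_t):
    """Sum of squared pixel errors: the quantity a pose estimator
    minimizes when fitting the camera pose to the matched features."""
    total = 0.0
    for pt, (u, v) in zip(landmarks, observations):
        pu, pv = project(pt, cam_t)
        total += (pu - u) ** 2 + (pv - v) ** 2
    return total
```

A landmark straight ahead at 5 m projects to the principal point, and the error is zero when observations agree exactly with the projections, which is the geometric consistency the feature method exploits.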
... f(i) denotes a mapping that gives, for the i-th plane, its matching plane in the world coordinate system. The calculation then follows the plane-distance computation in (2). In indoor environments, man-made structures exhibit strict parallel and perpendicular relationships. ...
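The plane matching in this excerpt rests on two geometric facts: the distance between matched parallel planes, and the strict parallel/perpendicular structure of indoor scenes. A minimal sketch follows; the (n, d) plane convention n·x + d = 0 with unit normal n is an assumption, and the excerpt's equation (2) is not reproduced here.

```python
def parallel_plane_distance(d1, d2):
    """Distance between two parallel planes n*x + d1 = 0 and
    n*x + d2 = 0 that share the same unit normal n."""
    return abs(d1 - d2)


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def is_parallel(n1, n2, tol=1e-3):
    """Unit normals are parallel (or anti-parallel) when |n1.n2| ~ 1."""
    return abs(abs(dot(n1, n2)) - 1.0) < tol


def is_perpendicular(n1, n2, tol=1e-3):
    """Unit normals are perpendicular when n1.n2 ~ 0."""
    return abs(dot(n1, n2)) < tol
```

These checks are how a SLAM system can exploit the Manhattan-like regularity of indoor structures: candidate plane matches violating the parallel/perpendicular constraints can be rejected outright.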
Article
Given the many repetitive-texture or weak-texture scenes in indoor environments, SLAM (Simultaneous Localization And Mapping) systems based on point features are often prone to tracking failures. We propose a multi-constrained optimization algorithm (MCOA), a point-line-plane SLAM combining planar and point-line features. By adding point-line-plane features to feature extraction, matching, pose optimization, and keyframe selection, MCOA improves its performance in indoor scenes. In addition, to improve operating efficiency and detection accuracy, the ORB method and the local feature description are improved. Experimental results show that integrating MCOA into ORB-SLAM2 significantly improves the system's robustness in weak-texture scenes and its positioning accuracy.
... There are various public datasets that can be used to test the performance of vision-based algorithms, such as TUM and KITTI [17,41]. In this study, the public TUM dataset is used to test the performance of the proposed method, as in much of the related literature [16,43]. The TUM dataset, provided by the Technical University of Munich Computer Vision Group as a benchmark for visual SLAM systems, is widely used in experiments on ORB-based SLAM methods [5,31,41]. ...
Article
Full-text available
The vision-based simultaneous localization and mapping (SLAM) method is a hot spot in the robotics research field, and the Oriented FAST and Rotated BRIEF (ORB) SLAM algorithm is one of the most effective methods. However, the general ORB-SLAM algorithm has problems in dynamic environments that remain to be solved, including control of the number of feature points and treatment of dynamic objects. In this paper, an improved ORB-SLAM method is proposed for monocular vision robots in dynamic environments. In the proposed method, a concept of reliability is introduced to mark the feature points, which keeps the number of feature points dynamically within a preset range. An improved frame-difference method based on a partial detection strategy is then used to detect dynamic objects in the environment. In addition, a novel treatment mechanism for tracking failures is introduced into the ORB-SLAM algorithm, which improves the accuracy of localization and mapping. Finally, experiments on public and private datasets show that the proposed method is effective.
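Two ideas from this abstract can be sketched in a few lines: a frame-difference mask that flags pixels likely belonging to moving objects, and a reliability-ranked selection that both discards features on dynamic pixels and caps the feature count at a preset budget. The threshold, the reliability scores, and the function names are illustrative assumptions, not the paper's actual formulation.

```python
def frame_difference_mask(prev, curr, thresh=25):
    """1 where a pixel's intensity changed by more than `thresh`
    between consecutive frames (a crude cue for dynamic objects)."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]


def select_features(features, reliability, max_count, mask):
    """Drop features landing on dynamic pixels, then keep at most
    `max_count` of the remaining ones, ranked by reliability."""
    static = [(r, c) for (r, c) in features if mask[r][c] == 0]
    static.sort(key=lambda rc: reliability[rc], reverse=True)
    return static[:max_count]
```

In this toy form, the mask plays the role of the paper's dynamic-object detection and the ranked truncation plays the role of its reliability-based control of the feature-point count.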
... For instance, Mur-Artal and Tardós (2017) propose an algorithm with low computational cost for SLAM and adopt a loop-closure strategy to improve the mapping. Hu et al. (2020) consider an RGB-depth (RGB-D) camera for visual SLAM and present a method that improves the algorithm of Mur-Artal and Tardós (2017), minimizing the tracking losses due to pure rotation, sudden movements, and noise. In experimental results, the proposed method shows improvements in tracking accuracy at low computational cost. ...
Article
Full-text available
The use of drones is becoming more present in modern daily life. One of the most challenging tasks associated with these vehicles is the development of perception and autonomous navigation systems. Competitions such as Artificial Intelligence Robotic Racing (AIRR) and Autonomous Drone Racing were launched to drive the advances in such strategies, requiring the development of integrated systems for autonomous navigation in a racing environment. In this context, this paper presents an improved integrated solution for autonomous drone racing, which focuses on simple, robust, and computationally efficient techniques to enable the application in embedded hardware. The strategy is divided into four modules: (i) A trajectory planner computes a path that passes through the sequence of desired gates; (ii) a perception system that obtains the global pose of the vehicle by using an onboard camera; (iii) a localization system which merges several sensed information to estimate the drone’s states; and, (iv) an artificial vector field-based controller in charge of following the plan by using the estimated states. To evaluate the performance of the entire integrated system, we extended the FlightGoggles simulator to use gates similar to those used in the AIRR competition. A computational cost analysis demonstrates the high performance of the proposed perception method running on limited hardware commonly embedded in aerial robots. In addition, we evaluate the localization system by comparing it with ground truth and when there is a disturbance in the location of the gates. Results in a representative race circuit based on the AIRR competition showed the proposed strategy’s competing performance, presenting itself as a feasible solution for drone racing systems.
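The artificial vector-field controller in module (iv) can be illustrated with a toy 2-D field that drives the vehicle along a straight reference line: a constant forward term plus a cross-track correction proportional to the lateral error, normalized to a unit direction. The gain k, the reference line y = path_y, and the function name are illustrative assumptions, not the paper's actual field.

```python
def vector_field(pos, path_y=0.0, k=1.0):
    """Unit velocity direction at `pos`: move forward in x while
    converging onto the reference line y = path_y."""
    ex = 1.0                      # constant forward component
    ey = -k * (pos[1] - path_y)   # cross-track correction
    norm = (ex * ex + ey * ey) ** 0.5
    return (ex / norm, ey / norm)
```

On the line the field points straight ahead; above the line it tilts back down toward it, so following the field converges to the path, which is the essential property a racing controller needs when chasing a sequence of gates.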
... Simultaneous Localization and Mapping (SLAM) was proposed in 1988; it denotes the ability of a robot to construct a map of the environment while simultaneously estimating its own pose on that map [1]. This ability is the basis for intelligent navigation of mobile robots, unmanned aerial vehicles (UAVs), autonomous guided vehicles (AGVs), and augmented reality (AR) [2]. SLAM has therefore become an important technology in the field of robotics. ...
Article
Full-text available
For a moving robot based on visual Simultaneous Localization and Mapping (SLAM), blurred images degrade localization accuracy, so handling blurred images is a central problem in visual SLAM. To reduce the influence of blurred images on localization accuracy, this paper proposes an improved visual SLAM based on the Haar wavelet transform that can eliminate blurred images. In addition, a correlation-weighted pose optimization is developed, which integrates the correlation between matching features as weighting coefficients into the reprojection errors. With this weighting, the pose optimization algorithm reduces the influence of matching features with low correlation, which are more likely to be mismatched, and thus improves the accuracy of the estimated pose. The improved system is evaluated on the TUM RGB-D dataset and in a real environment, and compared with other optimization systems based on blurred-image elimination and uncertainty-weighted optimization, respectively. The experimental results demonstrate that the system optimized by our method achieves the highest accuracy and robustness in pose estimation.
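Two components of this system admit short sketches: a one-level 1-D Haar detail-energy score (blurred signals carry little high-frequency energy, so a low score can flag a frame for rejection) and a correlation-weighted sum of squared reprojection residuals. The 1-D signal, the weighting form, and all names are illustrative assumptions, not the paper's exact formulation.

```python
def haar_detail_energy(signal):
    """Energy of the one-level Haar detail (high-pass) coefficients.
    A sharp signal has large pairwise differences; a blurred one does not."""
    details = [(signal[i] - signal[i + 1]) / 2.0
               for i in range(0, len(signal) - 1, 2)]
    return sum(d * d for d in details)


def weighted_reprojection_error(residuals, correlations):
    """Squared pixel residuals scaled by match correlation, so likely
    mismatches (low correlation) influence the pose estimate less."""
    return sum(w * (dx * dx + dy * dy)
               for (dx, dy), w in zip(residuals, correlations))
```

A constant (maximally blurred) signal scores zero detail energy, while an alternating (sharp-edged) one scores high; in the weighted error, halving a match's correlation halves its pull on the optimized pose.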