Figure - available from: Multimedia Tools and Applications
A block diagram illustrating the working of visual odometry (VO) using PnP. Here, the frames at time t correspond to a keyframe where point registration is performed: feature extraction, matching, and triangulation. For the frame at t + 1, either the left or the right image is consistently used for tracking, such that the tracked 2D points, together with their corresponding 3D points, are fed into PnP for pose estimation
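To make the pipeline in the caption concrete, here is a minimal Python/OpenCV sketch of the two stages, assuming rectified grayscale stereo frames and known calibration; the intrinsic matrix K, the baseline, and all function names are illustrative choices, not taken from the paper.

```python
import numpy as np
import cv2

# Minimal sketch, assuming rectified grayscale stereo frames and known
# calibration. K (3x3 intrinsics), baseline, and all names are illustrative.

def keyframe_registration(left_t, right_t, K, baseline):
    """Keyframe at time t: feature extraction, matching, triangulation."""
    orb = cv2.ORB_create(2000)
    kp_l, des_l = orb.detectAndCompute(left_t, None)
    kp_r, des_r = orb.detectAndCompute(right_t, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # Projection matrices of the rectified pair (left camera at the origin).
    P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

    pts4d = cv2.triangulatePoints(P_l, P_r, pts_l.T, pts_r.T)
    pts3d = (pts4d[:3] / pts4d[3]).T      # N x 3 landmarks in keyframe coords
    return pts3d, pts_l

def track_and_pnp(left_t, left_t1, pts3d, pts_l, K):
    """Frame t+1: track 2D points in the left image only, then run PnP."""
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(
        left_t, left_t1, pts_l.reshape(-1, 1, 2), None)
    ok = status.ravel() == 1
    obj, img = pts3d[ok], pts_next.reshape(-1, 2)[ok]

    # Tracked 2D points + their 3D keyframe landmarks -> camera pose.
    _, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec    # transform from keyframe coordinates to the t+1 camera
```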

Source publication
Article
Full-text available
Visual odometry, in the fields of computer vision and robotics, is a well-known approach by which the position and orientation of an agent can be obtained using only images from one or more cameras. In most traditional point-feature-based visual odometry, one important assumption, and also an ideal condition, is that the scene remains static....

Similar publications

Article
Full-text available
Because the image features detected by a visual SLAM (simultaneous localization and mapping) algorithm lack scale information, the accumulation of many features without depth information causes scale ambiguity, which leads to degradation and tracking failure. In this paper, we introduce the lidar point cloud to provide additional depth...
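As a rough illustration of how a lidar point cloud can supply the missing depth, the sketch below projects lidar points into the image and assigns each visual feature the depth of its nearest projection; the calibration inputs (K, T_cam_lidar) and the nearest-pixel association rule are assumptions for illustration, not the cited paper's method.

```python
import numpy as np

# Illustrative sketch only: the calibration inputs (K, T_cam_lidar) and the
# nearest-pixel association rule are assumptions, not the paper's method.

def lidar_depth_for_features(features_2d, lidar_xyz, K, T_cam_lidar, max_px=3.0):
    """Project lidar points into the image; give each 2D feature a depth."""
    pts_h = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # lidar -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]         # keep points in front

    uv = (K @ pts_cam.T).T                       # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]

    depths = np.full(len(features_2d), np.nan)   # NaN = no depth recovered
    for i, f in enumerate(features_2d):
        d2 = np.sum((uv - f) ** 2, axis=1)       # squared pixel distances
        j = np.argmin(d2)
        if d2[j] < max_px ** 2:                  # accept only close projections
            depths[i] = pts_cam[j, 2]            # metric depth from lidar
    return depths
```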
Article
Full-text available
The autonomous movement of a mobile robotic system is a complex problem, and the presence of dynamic objects in the workspace increases the complexity of the solution. To avoid collisions, it is necessary to implement a suitable detection algorithm and to adjust the trajectory of the robotic system. This work deals with the design of...
Article
Full-text available
Loop detection is an important part of a simultaneous localization and mapping system, eliminating the pose drift that robots accumulate during long-term movement. To address the three main challenges of appearance-based methods, namely viewpoint changes, repeated textures, and heavy computation, this paper proposes an unsuperv...
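For context, a bare-bones appearance-based loop detector might look like the sketch below, which scores keyframes by cosine similarity of mean-pooled ORB descriptors. This is a generic baseline under assumed thresholds, not the unsupervised method the cited paper proposes.

```python
import numpy as np
import cv2

# Generic appearance-based baseline for context; the descriptor choice
# (mean-pooled ORB) and thresholds are assumptions, and this is not the
# unsupervised method the cited paper proposes.

class LoopDetector:
    def __init__(self, sim_thresh=0.95, min_gap=50):
        self.orb = cv2.ORB_create(1000)
        self.db = []                    # one global descriptor per keyframe
        self.sim_thresh = sim_thresh
        self.min_gap = min_gap          # ignore recent frames (trivial loops)

    def query_and_add(self, img):
        """Return the index of a loop candidate (or -1), then store the frame."""
        _, des = self.orb.detectAndCompute(img, None)
        if des is None:                 # textureless frame: nothing to index
            return -1
        v = des.astype(np.float32).mean(axis=0)  # crude global signature
        v /= np.linalg.norm(v)

        best, best_sim = -1, self.sim_thresh
        for i, u in enumerate(self.db[:len(self.db) - self.min_gap]):
            sim = float(u @ v)          # cosine similarity (unit vectors)
            if sim > best_sim:
                best, best_sim = i, sim
        self.db.append(v)
        return best
```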

Citations

... The framework of V-SLAM is mainly divided into two parts: the visual front-end and back-end optimization. The visual front-end is also commonly referred to as visual odometry (VO) [13,14]. The main function of VO is to calculate rough camera poses and to generate new map points. ...
Article
Full-text available
Visual simultaneous localization and mapping (SLAM) in dynamic environments is a hot and challenging issue in the robotics field. The Oriented FAST and Rotated BRIEF (ORB) SLAM algorithm is one of the most effective methods. However, the traditional ORB-SLAM algorithm cannot perform well in dynamic environments because feature points belonging to dynamic map points at different timestamps are incorrectly matched. To deal with this problem, an improved visual SLAM method built on ORB-SLAM3 is proposed in this paper. In the proposed method, an improved screening strategy for new map points and an elimination strategy for duplicated existing map points are presented and combined to identify obviously dynamic map points. Then, a concept of map point reliability is introduced into the ORB-SLAM3 framework. Based on the proposed reliability calculation for map points, a multi-period check strategy is used to identify the less obvious dynamic map points, further addressing the dynamic problem in visual SLAM for unobvious dynamic objects. Finally, various experiments are conducted on the challenging dynamic sequences of the TUM RGB-D dataset to evaluate the performance of our visual SLAM method. The experimental results demonstrate that our SLAM method runs at an average of 17.51 ms per frame. Compared with ORB-SLAM3, the average RMSE of the absolute trajectory error (ATE) of the proposed method on nine dynamic sequences of the TUM RGB-D dataset is reduced by 63.31%. Compared with real-time dynamic SLAM methods, the proposed method obtains state-of-the-art performance. The results show that the proposed method is a real-time visual SLAM method that is effective in dynamic environments.
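One plausible reading of the reliability and multi-period check ideas is sketched below; the update rule, thresholds, and period length are invented for illustration and are not the paper's actual formulas.

```python
from dataclasses import dataclass, field

# Hedged sketch: the update rule, thresholds, and period length below are
# invented for illustration and are not the paper's actual formulas.

@dataclass
class MapPoint:
    reliability: float = 1.0
    checks: list = field(default_factory=list)   # pass/fail per check period

def update_reliability(mp, reproj_err_px, err_thresh=2.0, gain=0.1, penalty=0.3):
    """Raise reliability on a consistent reprojection, lower it otherwise."""
    consistent = reproj_err_px < err_thresh
    mp.reliability = min(max(mp.reliability + (gain if consistent else -penalty), 0.0), 1.0)
    mp.checks.append(consistent)

def is_dynamic(mp, periods=3, min_reliability=0.4):
    """Multi-period check: flag a point only after several failed periods."""
    recent = mp.checks[-periods:]
    return (len(recent) == periods and not any(recent)
            and mp.reliability < min_reliability)
```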