Figure 2
Vehicle pose estimation result based on the 3D point cloud. The blue points represent static obstacle vehicles, the orange points represent dynamic obstacle vehicles, and the red bounding box represents the pose estimation result. Our approach accurately estimates even tiny angular deviations of obstacle vehicles: as the bird's eye view shows, it recovers the direction of the two dynamic obstacle vehicles even when their headings change only slightly.

Source publication
Article
Full-text available
Vehicle pose estimation is essential in autonomous vehicle (AV) perception technology. However, due to the varying density distribution of the point cloud, it is challenging to achieve sensitive direction extraction from 3D LiDAR with existing pose estimation methods. In this paper, an optimal vehicle pose estimation network based on...

Contexts in source publication

Context 1
... on our analysis, a new optimal vehicle pose estimation network based on time series and spatial tightness (TS-OVPE) is proposed in this paper. Figure 2 shows the pose estimation results of our approach. ...
Context 2
... the convex hull points of different detected obstacles are different, so comparing the P-IoU across different obstacles is not meaningful. The P-IoU is used to compare the performance of different pose estimation methods on the same obstacle in the same frame, as shown in Figure 20. In Figure 20a, the value of P-IoU was only 50.02%. ...
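The excerpts do not define P-IoU precisely; a plausible reading is a polygon IoU between the estimated bounding box and the convex hull of the obstacle's points in the bird's eye view. The sketch below illustrates that reading with shapely; the function name p_iou and both inputs are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch of a polygon-based IoU ("P-IoU"), assuming it is the IoU
# between the estimated bounding-box polygon and the convex hull of the
# obstacle's points projected onto the ground plane (bird's eye view).
import numpy as np
from shapely.geometry import MultiPoint, Polygon

def p_iou(box_corners: np.ndarray, obstacle_points_xy: np.ndarray) -> float:
    """box_corners: (4, 2) BEV corners of the estimated box.
    obstacle_points_xy: (N, 2) BEV projection of the cluster's points."""
    box = Polygon(box_corners)
    hull = MultiPoint([tuple(p) for p in obstacle_points_xy]).convex_hull
    inter = box.intersection(hull).area
    union = box.union(hull).area
    return inter / union if union > 0.0 else 0.0

# Because the convex hull differs between obstacles, this index is only
# comparable for the same obstacle under different pose estimators.
```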
Context 3
... P-IoU is used to compare the performance of different pose estimation methods on the same obstacle in the same frame, as shown in Figure 20. In Figure 20a, the value of P-IoU was only 50.02%. As the accuracy of the direction estimation increased, the value of P-IoU increased to 60.73%, as shown in Figure 20b. ...
Context 4
... P-IoU is used to compare the performance of different pose estimation methods on the same obstacle in the same frame, as shown in Figure 20. In Figure 20a, the value of P-IoU was only 50.02%. As the accuracy of the direction estimation increased, the value of P-IoU increased to 60.73%, as shown in Figure 20b. However, its size estimation result still had a large error at that point. ...
Context 5
... its size estimation result still had a large error at that point. When the size estimation was closer to the obstacle vehicle's actual size, the value of P-IoU increased continuously, as shown in Figure 20c,d. Figure 20d shows the most accurate pose estimation results obtained with the four tested indexes. ...
Context 6
... the size estimation was closer to the obstacle vehicle's actual size, the value of P-IoU increased continuously, as shown in Figure 20c,d. Figure 20d shows the most accurate pose estimation results obtained with the four tested indexes. Moreover, its P-IoU value was also the highest, at 92.25%. ...
Context 7
... accuracy of P-IoU was significantly improved, by 5.25% and 9.67%, in comparison to the two compared methods. Figure 21 shows the experimental results of one frame for each of the two road conditions. These results were consistent with the P-IoU results. ...
Context 8
... the accuracy of the method of [17] was lower than that of [31], especially on curved road sections. Although the method of [31] was much better than that of [17] at direction estimation, it also sometimes produced estimation errors, as shown in Figure 21b. Our method showed robust pose estimation on both straight and curved road sections throughout. ...
Context 9
... method showed robust pose estimation on both straight and curved road sections throughout, as shown in Figure 22a. In Figure 22b, the direction estimation of Method [31] is clearly wrong, and the bounding box estimated by Method [17] is not compact enough. ...
Context 10
... 22a. In Figure 22b, the direction estimation of Method [31] is clearly wrong, and the bounding box estimated by Method [17] is not compact enough. Although Method [31] has better overall performance than Method [17], its stability is not as good as ours. ...
Context 11
... test our approach in an actual road environment, we ran experiments on our "Smart Pioneer" experimental platform. The specific layout of the "Smart Pioneer" experimental platform is shown in Figure 23. Instead of the same 64-line LiDAR used by the SemanticKITTI dataset, we used a 128-line LiDAR on the "Smart Pioneer" platform to verify the generalization performance of our approach. ...
Context 12
... we use the object tracking algorithm [41] to evaluate the effect of the pose estimation algorithm on tracking results. The experimental results are shown in Figure 24. Compared with the two baseline methods, our approach shows advantages in pose estimation. ...
Context 13
... shape and position estimation results of the three methods are almost identical, as shown in Figure 24. Our method only slightly improves the performance because these two parameters depend on the original point cloud. ...
Context 14
... method only slightly improves the performance because these two parameters depend on the original point cloud. The algorithm can only recover the shape from the original, incomplete point cloud, as shown in Figure 25; the underlying shape error therefore remains the same, because the complete point cloud is difficult to restore. ...
Context 15
... to the diversity of point cloud distributions, it is hard to obtain perfect pose estimation results for all obstacle vehicles. As shown in Figure 21, Method [31] and Method [17] are not stable on both straight and curved road sections. We used the TS-OVPE network to select, for each obstacle, the best pose estimation result from the five proposed methods. ...
Context 16
... pose estimation algorithms operate on a single frame, but our direction angle association index made our method more robust than the others. As shown in Figure 24c, the heading curve of our method was closer to the ground truth and exhibited smaller fluctuations than the other methods. As shown in Figure 22, the combined effect of multiple spatial evaluation indexes enhanced our pose estimation. ...
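The direction angle association index itself is not spelled out in these excerpts. As a hedged illustration of the underlying idea, temporal association of headings across frames, the sketch below scores single-frame heading candidates by their deviation from the previous frame's heading; all names and the scoring form are assumptions.

```python
# Minimal sketch of a direction-angle association score across frames,
# assuming the index rewards candidate headings that stay close to the
# tracked obstacle's previous heading (illustrative, not the paper's exact
# definition).
import math

def angle_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two headings in radians,
    treating headings that differ by pi as equivalent (a box is symmetric)."""
    d = abs(a - b) % math.pi
    return min(d, math.pi - d)

def association_score(candidate_heading: float, previous_heading: float) -> float:
    """1.0 when the candidate matches the previous heading, 0.0 at 90 degrees."""
    return 1.0 - angle_diff(candidate_heading, previous_heading) / (math.pi / 2)

# Example: among several single-frame candidates, keep the one whose heading
# changes least from the last frame.
candidates = [0.05, 1.62, 0.12]   # radians
prev = 0.08
best = max(candidates, key=lambda h: association_score(h, prev))
```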
Context 17
... shown in Figure 24c, the heading curve of our method was closer to the ground truth and exhibited smaller fluctuations than the other methods. As shown in Figure 22, the combined effect of multiple spatial evaluation indexes enhanced our pose estimation. As we mentioned in the introduction, poor generalization is the biggest problem of neural networks. ...

Citations

... Dynamic objects change the surrounding point cloud even between consecutive frames [29,30]. In the vehicle coordinate system, the displacement trend of static objects is usually the same across consecutive frames. ...
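As a rough illustration of that observation (not the cited paper's actual algorithm), the sketch below flags clusters whose inter-frame displacement deviates from the common trend shared by static objects; the function name and threshold are assumptions.

```python
# Sketch: in the vehicle coordinate system, static objects share a common
# inter-frame displacement (induced by ego motion), while dynamic objects
# deviate from that trend.
import numpy as np

def flag_dynamic(centroids_t0: np.ndarray, centroids_t1: np.ndarray,
                 threshold: float = 0.5) -> np.ndarray:
    """centroids_t0/t1: (N, 2) matched cluster centroids in two frames.
    Returns a boolean mask, True where a cluster moves against the trend."""
    displacement = centroids_t1 - centroids_t0        # (N, 2)
    trend = np.median(displacement, axis=0)           # robust common motion
    deviation = np.linalg.norm(displacement - trend, axis=1)
    return deviation > threshold
```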
Article
Full-text available
Point cloud registration is a vital prerequisite for many autonomous vehicle tasks. However, balancing the accuracy and computational complexity is very challenging for existing point cloud registration algorithms. This paper proposes a fast coarse-to-fine point cloud registration approach for autonomous vehicles. Our method uses nearest neighbor sample consensus optical flow pairwise matching resulting from a 2D bird’s eye view to initialize the coarse registration. It provides an initial 2D guess matrix for the fine registration and effectively reduces the computational complexity. In two-stage registration, our approach eliminates outliers by utilizing our self-correction module, which improves the robustness without using global positioning system (GPS) information. Point cloud registration experiments show that only our approach can process in real-time (71 ms, on average) while achieving state-of-the-art accuracy on the KITTI Odometry dataset, achieving a mean relative rotation error of 0.125° and a mean relative translation error of 0.038 m. In addition, real-road vehicle-to-vehicle point cloud registration experiments verify that the proposed algorithm can effectively align two vehicles' point clouds when the GPS is not synchronized. A demonstration video is available at https://youtu.be/BJTSDChQchw.
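To make the coarse-to-fine idea concrete, here is a minimal sketch assuming the coarse BEV matching has already produced a 2D guess (x, y, yaw), which then initializes Open3D's point-to-point ICP for fine registration. This is not the paper's implementation; the nearest neighbor sample consensus optical flow front end is omitted, and the function name is illustrative.

```python
# Coarse-to-fine sketch: a coarse 2D (x, y, yaw) guess from BEV matching
# initializes a point-to-point ICP refinement in Open3D.
import numpy as np
import open3d as o3d

def refine_registration(src_pts, dst_pts, x, y, yaw, max_dist=1.0):
    """src_pts, dst_pts: (N, 3) float arrays of points to align."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_pts))
    # Lift the 2D guess (x, y, yaw) into a 4x4 homogeneous transform.
    c, s = np.cos(yaw), np.sin(yaw)
    init = np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```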
... [41] Vehicle localization based on the free-resolution probability distributions map (FRPDM) using lidar data: efficient object representation with reduced map size and good position accuracy in urban areas.
[42] Optimal vehicle pose estimation based on an ensemble learning network utilizing spatial tightness and time series obtained from the lidar data: improved pose estimation accuracy, even on curved roads.
[43] Autonomous vehicle localization based on the IMU, wheel encoder, and lidar odometry: accurate and high-frequency localization results in a diverse environment.
[44] Automatic recognition of road markings from mobile lidar point clouds: good performance in recognizing road markings; further research is needed for more complex markings and intersections.
[45] Development and implementation of a strategy for automatic extraction of road markings from mobile lidar data based on two-dimensional (2D) georeferenced feature images, modified inverse distance weighted (IDW) interpolation, weighted neighboring difference histogram (WNDH)-based dynamic thresholding, and multiscale tensor voting (MSTV): experimental tests in a subtropical urban environment show more accurate and complete recognition of road markings with fewer errors.
[46] Automatic detection of traffic signs, road markings, and pole-shaped objects ...
... The study in [41] proposed a vehicle localization approach based on the free-resolution probability distributions map (FRPDM) generated by Gaussian mixture modeling (GMM) using 3D lidar data, allowing efficient object representations, smaller map sizes, and good position and heading estimation accuracy in the tested urban area. Moreover, the authors in [42] approached the vehicle pose estimation problem by utilizing lidar data and an ensemble learning network trained on time series and spatial tightness evaluation indexes, improving estimation accuracy even on curved road segments. Furthermore, the autonomous vehicle localization method based on the IMU, wheel encoder, and lidar odometry was presented in [43] and provided accurate and high-frequency results in a diverse environment. ...
Article
Full-text available
The development of light detection and ranging (lidar) technology began in the 1960s, following the invention of the laser, which represents the central component of this system, integrating laser scanning with an inertial measurement unit (IMU) and Global Positioning System (GPS). Lidar technology is spreading to many different areas of application, from those in autonomous vehicles for road detection and object recognition, to those in the maritime sector, including object detection for autonomous navigation, monitoring ocean ecosystems, mapping coastal areas, and other diverse applications. This paper presents lidar system technology and reviews its application in the modern road transportation and maritime sector. Some of the better-known lidar systems for practical applications, on which current commercial models are based, are presented, and their advantages and disadvantages are described and analyzed. Moreover, current challenges and future trends of application are discussed. This paper also provides a systematic review of recent scientific research on the application of lidar system technology and the corresponding computational algorithms for data analysis, mainly focusing on deep learning algorithms, in the modern road transportation and maritime sector, based on an extensive analysis of the available scientific literature.
... Börcs et al. [14] utilized the convex hull of the 2D projection of a clustering unit to determine a rectangle by minimizing the sum of the distances from all hull points to the rectangle boundary. Wang et al. [15] proposed an optimal vehicle pose estimation network based on time series and spatial tightness. This network can select an optimal result from the pose estimation results generated by five different methods. ...
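A simplified sketch of the convex-hull rectangle fitting idea attributed to [14]: sweep candidate orientations, bound the rotated hull with an axis-aligned rectangle, and keep the orientation minimizing the summed distances from hull points to the rectangle boundary. The brute-force angle sweep and function name are assumptions; the original method may optimize differently.

```python
# Fit a rectangle to a 2D convex hull by minimizing the sum of hull-point
# distances to the rectangle boundary, searching orientations by brute force.
import numpy as np

def fit_rectangle_heading(hull_xy: np.ndarray, steps: int = 90) -> float:
    """hull_xy: (M, 2) convex hull vertices in the bird's eye view.
    Returns the heading (radians) of the best-fitting rectangle."""
    best_cost, best_theta = np.inf, 0.0
    for theta in np.linspace(0.0, np.pi / 2, steps, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        # Rotate points by -theta so a rectangle at heading theta is
        # axis-aligned.
        pts = hull_xy @ np.array([[c, -s], [s, c]])
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        # Distance from each hull point to the nearest of the four edges.
        d = np.minimum(pts - lo, hi - pts)
        cost = np.minimum(d[:, 0], d[:, 1]).sum()
        if cost < best_cost:
            best_cost, best_theta = cost, theta
    return best_theta
```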
... In order to compare our method with other methods, we adopted four previous pose estimation methods as the comparison group. The method proposed in [15] utilizes five classical pose estimation methods to obtain candidate bounding boxes, and then four evaluation indices are calculated for each bounding box. Finally, a network whose input is a vector consisting of the values of these evaluation indices is used to select the optimal pose estimation result. ...
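Structurally, the selection scheme described for [15] can be sketched as follows: compute an index vector per candidate box and let a trained scorer pick the best candidate. Everything here (function names, the scorer interface) is an illustrative assumption rather than the paper's design.

```python
# Sketch of index-based candidate selection: five base pose estimators yield
# candidate boxes; each box is described by evaluation indices; a trained
# model scores the index vectors and the best candidate wins.
import numpy as np

def select_best_box(candidate_boxes, index_fns, score_fn):
    """candidate_boxes: list of boxes from the base pose estimators.
    index_fns: callables, each computing one evaluation index for a box.
    score_fn: trained model mapping an index vector to a quality score."""
    features = np.array([[fn(box) for fn in index_fns]
                         for box in candidate_boxes])
    scores = [score_fn(f) for f in features]
    return candidate_boxes[int(np.argmax(scores))]
```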
Article
Full-text available
This paper presents a novel dynamic vehicle tracking framework, achieving accurate pose estimation and tracking in urban environments. For vehicle tracking with laser scanners, pose estimation extracts geometric information of the target from a point cloud clustering unit, which plays an essential role in tracking tasks. However, the point cloud acquired from laser scanners only provides distance measurements to the object surface facing the sensor, leading to nonnegligible pose estimation errors. To address this issue, we take the motion information of targets as feedback to assist vehicle detection and pose estimation. In addition, the heading normalization vehicle model and a robust target size estimation method are introduced to deduce the pose of a vehicle with 2D matched filtering. Furthermore, considering the mobility of vehicles, we utilize the interacting multiple model (IMM) to capture multiple motion patterns. Compared to existing methods in the literature, our method can be applied to spatially sparse or incomplete point cloud observations. Experimental results demonstrate that our vehicle tracking framework achieves promising performance, and its real-time capability is also validated in real traffic scenarios.
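For readers unfamiliar with the IMM, the sketch below shows the standard interacting-multiple-model cycle on a toy 1D constant-velocity state with two process-noise hypotheses (quiet vs. maneuvering). All matrices and parameters are illustrative assumptions, not the cited paper's design.

```python
# Minimal IMM cycle: mix model-conditioned estimates, run one Kalman filter
# per motion model, then reweight the models by measurement likelihood.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
H = np.array([[1.0, 0.0]])                     # position-only measurement
R = np.array([[0.25]])                         # measurement noise
Qs = [np.diag([1e-4, 1e-4]), np.diag([1e-2, 1e-1])]   # quiet vs. maneuvering
Ptr = np.array([[0.95, 0.05], [0.05, 0.95]])   # mode transition probabilities

x = [np.zeros(2) for _ in Qs]                  # per-model state estimates
P = [np.eye(2) for _ in Qs]                    # per-model covariances
mu = np.array([0.5, 0.5])                      # mode probabilities

def imm_step(z):
    """z: measurement, shape (1,). Returns the combined state estimate."""
    global mu
    # 1) Mix the model-conditioned estimates.
    c = Ptr.T @ mu                                   # predicted mode probs
    w = (Ptr * mu[:, None]) / c[None, :]             # mixing weights w[i, j]
    x0 = [sum(w[i, j] * x[i] for i in range(2)) for j in range(2)]
    P0 = [sum(w[i, j] * (P[i] + np.outer(x[i] - x0[j], x[i] - x0[j]))
              for i in range(2)) for j in range(2)]
    # 2) Run each Kalman filter and evaluate its measurement likelihood.
    like = np.empty(2)
    for j in range(2):
        xp = F @ x0[j]
        Pp = F @ P0[j] @ F.T + Qs[j]
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        innov = z - H @ xp
        x[j] = xp + K @ innov
        P[j] = (np.eye(2) - K @ H) @ Pp
        like[j] = np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov) / \
                  np.sqrt(2 * np.pi * np.linalg.det(S))
    # 3) Update mode probabilities and combine.
    mu = c * like
    mu /= mu.sum()
    return sum(mu[j] * x[j] for j in range(2))

# Example: state = imm_step(np.array([1.0]))
```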