Figure 8. Localization through the point cloud map fused with the HD map under a tunnel and toll gate.


Source publication
Article
Full-text available
In recent years, research and development of autonomous driving technology have gained much interest. Many autonomous driving frameworks have been developed in the past. However, building a safely operating, fully functional autonomous driving framework is still a challenge. Several accidents have occurred with autonomous vehicles, including Te...

Context in source publication

Context 1
... PCM matching error is less than 30 cm. The point cloud map fused with the HD map is used to localize the vehicle under the tunnel and toll gate, as shown in Figure 8. ...
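The excerpt does not describe the matching algorithm itself. As a rough illustration of scan-to-map registration, the sketch below aligns a LiDAR sweep against a prior point cloud map with ICP using the open-source Open3D library; the file names, the 30 cm correspondence threshold, and the identity initial guess are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of scan-to-map registration with ICP (Open3D).
# File names and the initial pose are illustrative assumptions; the
# paper's actual matching pipeline is not described in this excerpt.
import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("lidar_scan.pcd")       # current LiDAR sweep
map_cloud = o3d.io.read_point_cloud("pcm_hd_map.pcd")  # prior point cloud map

# Rough initial pose, e.g. from GNSS/odometry dead reckoning.
init_pose = np.eye(4)

result = o3d.pipelines.registration.registration_icp(
    scan, map_cloud,
    max_correspondence_distance=0.3,  # ~30 cm, matching the reported error bound
    init=init_pose,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("Estimated vehicle pose in map frame:\n", result.transformation)
print("Inlier RMSE (m):", result.inlier_rmse)
```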

Citations

... In principle, vision-centric perception can obtain the richest semantic information, which is essential for decision-making in autonomous driving, compared to LiDAR-based and millimeter-wave radar-based perception. Moreover, we found a large body of previous research on vision-based perception for various autonomous driving tasks in recent years, such as 3D detection [1][2][3][4][5][6][7][8][9][10] in driving scenes, map construction [11][12][13][14][15], motion prediction [16,17], and even end-to-end autonomous driving [18][19][20]. ...
Article
Full-text available
In recent years, vision-centric perception has played a crucial role in autonomous driving tasks, encompassing functions such as 3D detection, map construction, and motion forecasting. However, the deployment of vision-centric approaches in practical scenarios is hindered by substantial latency, often deviating significantly from the outcomes achieved through offline training. This disparity arises from the fact that conventional benchmarks for autonomous driving perception predominantly conduct offline evaluations, thereby largely overlooking the latency concerns prevalent in real-world deployment. Although a few benchmarks have been proposed to address this limitation by introducing effective evaluation methods for online perception, they do not adequately consider the intricacies introduced by the complexity of input information streams. To address this gap, we propose the Autonomous driving Streaming I/O (ASIO) benchmark, aiming to assess the streaming input characteristics and online performance of vision-centric perception in autonomous driving. To facilitate this evaluation across diverse streaming inputs, we initially establish a dataset based on the CARLA Leaderboard. In alignment with real-world deployment considerations, we further develop evaluation metrics based on information complexity specifically tailored for streaming inputs and streaming performance. Experimental results indicate significant variations in model performance and ranking under different major camera deployments, underscoring the necessity of thoroughly accounting for the influences of model latency and streaming input characteristics during real-world deployment. To enhance streaming performance consistently across distinct streaming input features, we introduce a backbone switcher based on the identified streaming input characteristics. Experimental validation demonstrates its efficacy in perpetually improving streaming performance across varying streaming input features.
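As background for why latency changes the verdict, the core mechanic of streaming (online) evaluation can be sketched in a few lines: each ground-truth frame is scored against the most recent prediction that had finished computing by that time, so a slow model is judged on stale outputs. This is a generic streaming-perception scheme, not ASIO's exact protocol; the timestamps below are invented.

```python
# Illustrative sketch of streaming (online) evaluation pairing: at each
# ground-truth timestamp, pick the latest prediction whose computation
# had FINISHED by then, so model latency directly degrades the metric.
# Generic scheme for illustration only, not ASIO's exact protocol.
from bisect import bisect_right

def pair_streaming(gt_times, pred_done_times):
    """For each ground-truth time, return the index of the newest
    finished prediction, or None if no prediction is ready yet."""
    pairs = []
    for t in gt_times:
        i = bisect_right(pred_done_times, t) - 1
        pairs.append(i if i >= 0 else None)
    return pairs

# Frames arrive every 100 ms; the model takes ~150 ms per frame, so each
# ground-truth frame is matched against a stale (older) prediction.
gt_times = [0.1, 0.2, 0.3, 0.4]
pred_done_times = [0.15, 0.30, 0.45]  # finish times of predictions 0, 1, 2
print(pair_streaming(gt_times, pred_done_times))  # [None, 0, 1, 1]
```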
... Clothoids have been studied extensively in ground mobile robotics [24][25][26][27][28][29]. For instance, in [30][31][32] the authors showed that using clothoids as transition curves guarantees not only continuous curvature (G² continuity) but also a bound on its derivative, the sharpness. ...
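For readers unfamiliar with the curve: a clothoid is exactly the arc whose curvature varies linearly with arc length, kappa(s) = kappa0 + c*s, where the constant c is the bounded sharpness mentioned above. A minimal sketch that samples such a curve by numerically integrating the heading (an illustration, not code from the cited works):

```python
# Sample a clothoid: curvature grows linearly with arc length, so the
# heading is the integral of curvature and position is the integral of
# the heading. Numerical integration keeps the sketch self-contained.
import numpy as np

def sample_clothoid(kappa0, c, length, n=200):
    """Sample (x, y) points along a clothoid starting at the origin."""
    s = np.linspace(0.0, length, n)
    ds = s[1] - s[0]
    theta = kappa0 * s + 0.5 * c * s**2  # heading = integral of kappa(s)
    x = np.cumsum(np.cos(theta)) * ds    # position = integral of heading
    y = np.cumsum(np.sin(theta)) * ds
    return x, y

# Straight-to-arc transition: curvature ramps smoothly from 0 to 0.1 1/m,
# so the curve joins a line and a circle with G2 (curvature) continuity.
x, y = sample_clothoid(kappa0=0.0, c=0.01, length=10.0)
```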
... concerns AD in urban environments [15]. Compared with highway driving or lane following, urban environments pose additional challenges due to the unpredictability and variety of agents present in the scene, as well as complex and uncertain situations such as pedestrians crossing lanes, traffic lights, and intersections. ...
Article
Full-text available
Autonomous driving in urban environments requires intelligent systems that are able to deal with complex and unpredictable scenarios. Traditional modular approaches focus on dividing the driving task into standard modules, and then use rule-based methods to connect those different modules. As such, these approaches require a significant effort to design architectures that combine all system components, and are often prone to error propagation throughout the pipeline. Recently, end-to-end autonomous driving systems have formulated the autonomous driving problem as an end-to-end learning process, with the goal of developing a policy that transforms sensory data into vehicle control commands. Despite promising results, the majority of end-to-end works in autonomous driving focus on simple driving tasks, such as lane-following, which do not fully capture the intricacies of driving in urban environments. The main contribution of this paper is to provide a detailed comparison between end-to-end autonomous driving systems that tackle urban environments. This analysis comprises two stages: a) a description of the main characteristics of the successful end-to-end approaches in urban environments; b) a quantitative comparison based on two CARLA simulator benchmarks (CoRL2017 and NoCrash). Beyond providing a detailed overview of the existing approaches, we conclude this work with the most promising aspects of end-to-end autonomous driving approaches suitable for urban environments.
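To make the end-to-end formulation concrete, the sketch below shows the shape of such a policy: one network mapping a camera image directly to control commands. The architecture, input size, and output convention are illustrative assumptions, not any of the surveyed systems.

```python
# Minimal sketch of an end-to-end driving policy: camera image in,
# (steer, throttle, brake) out. Layer sizes are invented placeholders.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # image -> feature vector
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)             # -> (steer, throttle, brake)

    def forward(self, image):
        # Outputs squashed to [-1, 1]; a real stack would map these
        # to the valid actuator ranges.
        return torch.tanh(self.head(self.encoder(image)))

policy = DrivingPolicy()
controls = policy(torch.randn(1, 3, 88, 200))    # one RGB frame
```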
... As autonomous vehicle development has intensified, Arshad et al. [1] recently presented Clothoid, a unified framework for fully autonomous vehicles. The literature contains many solutions for autonomous driving frameworks. ...
Article
Full-text available
Autonomous vehicle navigation has been at the center of several major developments, both in civilian and defense applications [...].
... Due to raindrops adhering to a glass window or camera lens, images captured in rainy weather suffer from poor visibility, which poses significant risks to many outdoor computer vision tasks, such as pedestrian detection [1,2], crowd counting [3], and person re-identification [4]. Therefore, removing raindrops from rainy images is highly desirable, especially in complicated outdoor scenes. ...
Article
Full-text available
Removing raindrops from a single image is a challenging problem due to the complex changes in shape, scale, and transparency among raindrops. Previous explorations have mainly been limited in two ways. First, publicly available raindrop image datasets have limited capacity in terms of modeling raindrop characteristics (e.g., raindrop collision and fusion) in real-world scenes. Second, recent deraining methods tend to apply shape-invariant filters to cope with diverse rainy images and fail to remove raindrops that are especially varied in shape and scale. In this paper, we address these raindrop removal problems from two perspectives. First, we establish a large-scale dataset named RaindropCityscapes, which includes 11,583 pairs of raindrop and raindrop-free images, covering a wide variety of raindrops and background scenarios. Second, a two-branch Multi-scale Shape Adaptive Network (MSANet) is proposed to detect and remove diverse raindrops, effectively filtering the occluded raindrop regions and keeping the clean background well-preserved. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method achieves significant improvements over the recent state-of-the-art raindrop removal methods. Moreover, the extension of our method towards the rainy image segmentation and detection tasks validates the practicality of the proposed method in outdoor applications.
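The abstract does not detail MSANet's layers. As a generic illustration of the multi-scale idea it builds on (handling raindrops of varying size with parallel filters of different receptive fields), a minimal block might look as follows; this is not the authors' architecture.

```python
# Generic multi-scale filtering block: parallel convolutions with
# different kernel sizes (receptive fields) fused by a 1x1 convolution.
# Illustration of the multi-scale idea only, NOT the authors' MSANet.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in (3, 5, 7)                 # small/medium/large raindrops
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

block = MultiScaleBlock(16)
out = block(torch.randn(1, 16, 64, 64))        # spatial size preserved
```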
Article
The high-definition (HD) map is an indispensable building block for the future of autonomous driving, allowing for fine-grained environmental awareness, precise localization, and route planning. However, since HD maps include rich, multidimensional information, the volume of HD map data is substantial and cannot be transmitted frequently by several vehicles over vehicular networks in real time. Therefore, in this paper, we propose a data source selection scheme for effective HD map transmissions in vehicular named data networking (NDN) scenarios. To achieve our goal, we created a vehicular NDN environment for data collection, processing, and transmission using the CARLA simulator and Robot Operating System 2 (ROS2). Next, due to the dynamic and complex nature of our vehicular NDN, we formulate the data source selection problem as a Markov decision process (MDP) and solve it using a reinforcement learning approach. For simplicity, we term our proposed scheme Data Source Optimization with Reinforcement Learning (DSORL); it selects suitable vehicles for HD map data transmission to MEC servers. The experimental results indicate that our suggested method outperformed existing baseline schemes, such as RLSS, Pro-RTT, and HDM-RTT, across all performance criteria in the evaluation. For instance, the system throughput increases by 65%-72.68% compared to other baseline systems. Similarly, the proposed approach can minimize packet loss rate, data size, and transmission time by up to 60.6%, 77.5%, and 54.1%, respectively.
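As a toy illustration of casting source selection as an MDP solved with reinforcement learning, the tabular Q-learning loop below selects among candidate sender vehicles; the states, reward, and transitions are invented placeholders, and the actual DSORL formulation is certainly richer.

```python
# Toy sketch: data source selection framed as an MDP and solved with
# tabular Q-learning. States, actions, rewards, and transitions are
# invented placeholders; the paper's DSORL formulation is far richer.
import random

N_VEHICLES = 4   # actions: which candidate vehicle transmits the HD map data
N_STATES = 3     # e.g. coarse network-load levels (invented discretization)
Q = [[0.0] * N_VEHICLES for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Placeholder environment: noisy reward and a random next state."""
    reward = random.gauss(action % 2 + state, 0.5)
    next_state = random.randrange(N_STATES)
    return next_state, reward

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection over the Q-table row.
    if random.random() < eps:
        action = random.randrange(N_VEHICLES)
    else:
        action = max(range(N_VEHICLES), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    state = next_state
```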
Chapter
Autonomous driving capability is provided by an autonomous driving system built from blocks, each consisting of combinations of functions and properties that define the characteristics of the environment and of the objects found in it.
Chapter
Autonomous vehicles (especially autonomous shuttle buses) operate like a vehicle on rails, traveling a well-established route and stopping at the stations along it. The main difference is the road, which for autonomous vehicles is a virtual route: a line defined by geographic coordinates (latitude, longitude, altitude).
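A minimal sketch of such a virtual route, assuming it is stored as an ordered list of (latitude, longitude, altitude) waypoints; the coordinates and the lookup helper are invented for illustration.

```python
# Sketch of the "virtual route" idea: the road is just an ordered list
# of geographic waypoints (lat, lon, alt). Coordinates are invented.
import math

route = [  # (latitude_deg, longitude_deg, altitude_m)
    (37.5665, 126.9780, 32.0),
    (37.5667, 126.9784, 32.5),
    (37.5670, 126.9789, 33.0),
]

def nearest_waypoint(lat, lon, route):
    """Index of the closest waypoint (equirectangular approximation,
    adequate over the short distances between consecutive waypoints)."""
    def dist2(wp):
        dlat = math.radians(wp[0] - lat)
        dlon = math.radians(wp[1] - lon) * math.cos(math.radians(lat))
        return dlat * dlat + dlon * dlon
    return min(range(len(route)), key=lambda i: dist2(route[i]))

print(nearest_waypoint(37.5667, 126.9783, route))  # -> 1
```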