Fig. 2: Bird's-eye view of the point cloud with 6-channel features


Similar publications

Article
Airborne laser scanning (ALS) has gained importance over recent decades for multiple uses related to the cartography of landscapes. Processing ALS data over large areas for forest resource estimation and ecological assessments requires efficient algorithms to filter out some points from the raw data and remove human-made structures that would other...

Citations

... Different sensors have been investigated to implement the perception phase and its core component, i.e., obstacle detection, for autonomous flying robots, such as RGB-D cameras, providing depth (D) and colour (RGB) data [5] [6], stereo cameras [7] [8], and 2D LiDARs [9] [10] [11]. Data fusion between LiDAR and IMU [12] [13] [14], LiDAR and cameras [15] [16], or even radar and LiDAR [17] [18] has been proposed in recent years. ...
Preprint
Unmanned Aircraft Systems (UAS), more commonly known as drones, are a new and promising air transportation alternative that is being utilized in various civil, scientific, and military applications. Thanks to their versatility, flexibility and reusability, UAS-based missions operate in several contexts, such as search and rescue, disaster assessment, urban traffic monitoring, power line inspection, agricultural crop monitoring and spraying, and in all environments that could be extremely dangerous or impossible for human action. The use of UASs is also constantly increasing within urban areas, where a high level of operational safety is required. To ensure the safety of autonomous UAS flight operations, Detect and Avoid (DAA) systems are crucial. DAA technology consists of methods for detecting other aircraft or obstacles and identifying safe paths within the mission scenario. This paper focuses on the characterization of a 3D rotary LiDAR (RS-LiDAR-16 by RoboSense), widely used in SLAM (Simultaneous Localization and Mapping) applications. The potential of the sensor for use in an obstacle detection system for UAS operations, especially during missions conducted in full autonomy or Beyond Visual Line of Sight (B-VLOS), is explored, given the LiDAR's ability to provide real-time 3D point clouds mapping the surrounding environment up to a 30-m range.

Objective - The purpose of this paper is to experimentally evaluate the main operational characteristics of a 3D rotary LiDAR sensor, capable of emitting laser beams along the x-, y- and z-axes, by means of different laboratory tests. This sensor is part of an obstacle detection system being developed by the authors. It will be integrated with other sensors (radar, sonar) capable of identifying static and dynamic obstacles and supporting flight path re-planning in case of obstacles detected on the flight trajectory, assuring proper safety levels, as required for General Aviation applications. In this work, the characterization of the RS-LiDAR-16 sensor by RoboSense, in terms of accuracy, repeatability and stability, has been performed. This solid-state hybrid sensor integrates 16 laser/detector pairs mounted in a compact housing and offers a 360° horizontal and 30° vertical field of view.

Indoor Tests - The sensor was positioned on a mobile cart, which was located at several known distances from the reference object (a wall). The figures show the LiDAR data collected during the static test at 1 meter from the reference. The acquired data were compared to a theoretical Gaussian distribution around the evaluated mean value. Figure 1: Indoor measurements setup.

Experimental Tests - Measurements were carried out to highlight the presence of random errors, which cannot be fully prevented or mitigated and depend on operator reading errors, instrumental errors and environmental conditions. The tests were performed in static indoor and outdoor scenarios. In indoor tests, the measurements were carried out in 1-m steps from 1 up to 5 m from a reference, whereas outdoor tests were conducted at distances between the sensor and different targets from 10 up to 50 m, in 10-m steps. Figure 2: Outdoor measurements setup. The table below shows all data collected during indoor tests.

Outdoor Tests - The tests were performed to verify the LiDAR's accuracy in external environment conditions. The figures show the data collected in the session at 10 m from a reference (a building) and the data distribution histogram. The table below shows all data collected during outdoor tests (columns: Real Dist. [m], Mean [m], Std. Dev. [m], Accuracy [m], Min. Value [m], Max. Value [m], Error [m]).

Conclusions - This paper focused on the experimental evaluation of the main operational characteristics of a 3D rotary LiDAR sensor, especially in terms of precision and accuracy. To verify the precision and accuracy of the considered LiDAR sensor, several tests were carried out in indoor and outdoor scenarios. The tests were performed in static conditions, positioning the sensor at known distances from a reference object and acquiring a high number of samples. The obtained measurements indicated good results in terms of the sensor's accuracy and precision, at the sub-centimetre level within the considered ranges (1-5 meters for indoor tests, and 10-50 meters for outdoor tests). Future work will include additional data collection sessions at greater distances (up to 150 meters), especially for outdoor tests, where different weather conditions will also be considered (e.g. fog, rain, direct sunlight). Furthermore, additional experiments will be carried out with the LiDAR's vertical resolution fixed at two degrees, in order to assess, depending on the distance between LiDAR and target, the target height that can be detected.
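As a rough illustration of the statistical characterization described above (mean, standard deviation and error with respect to a known reference distance, compared against a Gaussian centred on the measured mean), the following Python sketch summarizes a batch of range readings. It is a minimal sketch using hypothetical, synthetic samples; the function names and numbers are illustrative and not taken from the paper.

```python
import numpy as np

def characterize_ranges(ranges, true_distance):
    """Summarize repeated LiDAR range readings taken at a fixed, known distance.

    ranges: 1-D array of measured distances [m] to the reference target.
    true_distance: ground-truth distance [m] between sensor and target.
    Returns the mean, the sample standard deviation (precision) and the mean
    error with respect to the reference (accuracy).
    """
    ranges = np.asarray(ranges, dtype=float)
    mean = ranges.mean()
    std = ranges.std(ddof=1)          # sample standard deviation
    error = mean - true_distance      # systematic offset w.r.t. the reference
    return mean, std, error

def gaussian_pdf(x, mean, std):
    """Theoretical Gaussian centred on the evaluated mean, for comparison
    with the empirical histogram of the measurements."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

# Example with synthetic samples standing in for a 1-m indoor acquisition.
rng = np.random.default_rng(0)
samples = 1.0 + rng.normal(0.003, 0.004, size=5000)   # hypothetical readings [m]
mean, std, error = characterize_ranges(samples, true_distance=1.0)
print(f"mean={mean:.4f} m  std={std:.4f} m  error={error * 100:.2f} cm")
```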
... Lin et al. present an autonomous robot system, FO3D2D (Lin et al. 2020b), that uses a combination of a LiDAR and a camera: the point clouds captured from the LiDAR are used for ground removal, the remaining points are then clustered, and the clusters are projected onto the 2D camera frame to find the regions of interest (ROI) for obstacles. Yang et al. (2020) present a comparison between two types of obstacle detection methods, geometric-based and deep learning-based, where the geometric-based method retrieves the obstacles using geometric and morphological operations on the 3D points. More recently, Dulău and Oniga (2021) present a facet-based obstacle detection system, where point clouds are captured from a 64-layer LiDAR, ground points are detected using a normal-based geometric approach, and obstacles are detected using an RBNN-based clustering method. ...
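The LiDAR-camera step sketched in the snippet above (cluster the non-ground points, then project each cluster into the 2D camera frame to obtain an obstacle ROI) can be illustrated with a short, hedged Python example. It assumes a pinhole camera with known intrinsics K and a known LiDAR-to-camera rigid transform (R, t); the function name and interface are hypothetical and not those of FO3D2D.

```python
import numpy as np

def project_cluster_to_roi(cluster_xyz, K, R, t, image_size):
    """Project one clustered LiDAR obstacle into the camera frame and return
    its 2D bounding box (region of interest), clipped to the image.

    cluster_xyz: (N, 3) points of a single obstacle cluster, LiDAR frame.
    K: (3, 3) camera intrinsic matrix.
    R, t: rotation (3, 3) and translation (3,) from LiDAR to camera frame.
    image_size: (width, height) in pixels.
    """
    cam = cluster_xyz @ R.T + t            # transform into the camera frame
    cam = cam[cam[:, 2] > 0.1]             # keep points in front of the camera
    if cam.shape[0] == 0:
        return None
    pix = cam @ K.T                        # pinhole projection
    pix = pix[:, :2] / pix[:, 2:3]
    w, h = image_size
    u_min, v_min = np.clip(pix.min(axis=0), 0, [w - 1, h - 1])
    u_max, v_max = np.clip(pix.max(axis=0), 0, [w - 1, h - 1])
    return int(u_min), int(v_min), int(u_max), int(v_max)
```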
Article
Accurate perception with a rapid response is fundamental for any autonomous vehicle to navigate safely. Light detection and ranging (LiDAR) sensors provide an accurate estimation of the surroundings in the form of 3D point clouds. Autonomous vehicles use LiDAR to perceive obstacles in the surroundings and feed the information to the control units that guarantee collision avoidance and motion planning. In this work, we propose an obstacle estimation (i.e., detection and tracking) approach for autonomous vehicles or robots that carry a three-dimensional (3D) LiDAR and an inertial measurement unit to navigate in dynamic environments. The success of u-depth and restricted v-depth maps computed from depth images for obstacle estimation in the existing literature motivates us to explore the same techniques with LiDAR point clouds. Therefore, the proposed system computes u-depth and restricted v-depth representations from point clouds captured with the 3D LiDAR and estimates long-range obstacles using these multiple depth representations. Obstacle estimation using the proposed u-depth and restricted v-depth representations removes the requirement for some of the high-computation modules (e.g., ground plane segmentation and 3D clustering) in the existing obstacle detection approaches based on 3D LiDAR point clouds. We track all static and dynamic obstacles while they remain in front of the autonomous vehicle and may obstruct its movement. We evaluate the performance of the proposed system on multiple open data sets of ground and aerial vehicles and on self-captured simulated data sets. We also evaluate the performance of the proposed system with data captured in real time using ground robots. The proposed method is faster than the state-of-the-art (SoA) methods, while its performance is comparable with the SoA methods in terms of dynamic obstacle detection and estimation of their states.
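For context on what a u-depth representation looks like, the following Python fragment is a minimal sketch of one common construction, which starts from a depth image and histograms the depths of each image column; the depth range, bin size and function name are assumptions for illustration, not the authors' parameters or pipeline.

```python
import numpy as np

def u_depth_map(depth_image, max_depth=50.0, bin_size=0.5):
    """Build a u-depth map: for every image column, a histogram counting how
    many pixels fall into each depth bin. Vertical structures (obstacles)
    show up as strong horizontal segments at their depth.

    depth_image: (H, W) array of depths in metres, 0 or NaN where invalid.
    Returns an array of shape (n_bins, W).
    """
    h, w = depth_image.shape
    n_bins = int(np.ceil(max_depth / bin_size))
    u_map = np.zeros((n_bins, w), dtype=np.int32)
    valid = np.isfinite(depth_image) & (depth_image > 0) & (depth_image < max_depth)
    cols = np.nonzero(valid)[1]                       # column index of each valid pixel
    bins = (depth_image[valid] / bin_size).astype(int)
    np.add.at(u_map, (bins, cols), 1)                 # accumulate the per-column histogram
    return u_map
```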
... Two classes of solutions have been identified for this processing phase: the first is based on geometry and computer vision [13], [8], while the second leverages the increased available computational power, employing deep learning techniques to process the 2D grid with convolutional neural networks [14], [15]. ...
Preprint
One of the main components of an autonomous vehicle is the obstacle detection pipeline. Most prototypes, both from research and industry, rely on lidars for this task. Point cloud information from lidar is usually combined with data from cameras and radars, but the backbone of the architecture is mainly based on 3D bounding boxes computed from lidar data. To retrieve an accurate representation, sensors with many planes, e.g., more than 32 planes, are usually employed. The returned point cloud is indeed dense and well defined, but high-resolution sensors are still expensive and often require powerful GPUs to be processed. Lidars with fewer planes are cheaper, but the returned data are not dense enough to be processed with state-of-the-art deep learning approaches to retrieve 3D bounding boxes. In this paper, we propose two solutions based on occupancy grids and geometric refinement to retrieve a list of 3D bounding boxes employing lidars with a low number of planes (i.e., 16 and 8 planes). Our solutions have been validated on a custom acquired dataset with accurate ground truth to prove their feasibility and accuracy.
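To make the occupancy-grid idea concrete, the following Python sketch rasterizes a point cloud into a bird's-eye-view grid with a few per-cell channels (occupancy, maximum height, point density). The grid extent, resolution and the three channels are illustrative assumptions and do not reproduce the paper's exact 6-channel feature set shown in Fig. 2.

```python
import numpy as np

def bev_grid(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.2):
    """Rasterize a LiDAR point cloud into a bird's-eye-view grid with three
    example channels per cell: occupancy, maximum height and point density.

    points: (N, 3) array of x, y, z coordinates in the vehicle frame [m].
    Returns an array of shape (3, H, W).
    """
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((3, h, w), dtype=np.float32)

    # Keep only points inside the grid extent.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)   # column index
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)   # row index

    grid[0, iy, ix] = 1.0                                # occupancy
    np.maximum.at(grid[1], (iy, ix), pts[:, 2])          # max height per cell
    np.add.at(grid[2], (iy, ix), 1.0)                    # point count per cell
    return grid
```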
Conference Paper
Image segmentation has historically been a technique for analyzing terrain for military autonomous vehicles. One of the weaknesses of image segmentation from camera data is that it lacks depth information and can be affected by environment lighting. Light detection and ranging (LiDAR) is an emerging technology in image segmentation that is able to estimate distances to the objects it detects. One advantage of LiDAR is the ability to gather accurate distances regardless of day, night, shadows, or glare. This study examines LiDAR and camera image segmentation fusion to improve an advanced driver-assistance systems (ADAS) algorithm for off-road autonomous military vehicles. The volume of points generated by LiDAR provides the vehicle with distance and spatial data surrounding the vehicle. Processing these point clouds with semantic segmentation is a computationally intensive process requiring fusion of camera and LiDAR data so that the neural network can process depth and image data simultaneously. We create fused RGB-Depth images by projecting the LiDAR point cloud onto the camera images. A neural network is trained to segment the fused data from RELLIS-3D, a multi-modal data set for off-road robotics. This data set contains both LiDAR point clouds and corresponding RGB images for training the neural network. The labels from the data set are grouped as objects, traversable terrain, non-traversable terrain, and sky to balance underrepresented classes. Results on a modified version of DeepLabv3+ with a ResNet-18 backbone achieve an overall accuracy of 93.989 percent.
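A minimal, hedged sketch of the fusion idea (feeding an extra projected-depth channel alongside RGB into a segmentation network) is shown below. It uses torchvision's DeepLabv3 with a ResNet-50 backbone as a stand-in for the paper's modified DeepLabv3+ with a ResNet-18 backbone, widening the first convolution to accept a 4-channel RGB-Depth input with the four class groups mentioned above.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in for the paper's DeepLabv3+/ResNet-18: torchvision's DeepLabv3/ResNet-50
# with 4 output classes (objects, traversable, non-traversable, sky).
model = deeplabv3_resnet50(weights=None, num_classes=4)

# Widen the first convolution from 3 (RGB) to 4 (RGB + projected LiDAR depth)
# input channels, copying the existing RGB filter weights and initialising the
# extra depth channel with their mean.
old_conv = model.backbone.conv1
new_conv = nn.Conv2d(4, old_conv.out_channels,
                     kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride,
                     padding=old_conv.padding,
                     bias=False)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)
model.backbone.conv1 = new_conv

# Forward pass on a dummy RGB-Depth batch: (batch, 4, H, W).
out = model(torch.randn(1, 4, 256, 256))["out"]   # (1, 4, 256, 256) class logits
```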
Article
Understanding of the driving scenario is a necessary condition for autonomous driving. Within the control routine of an autonomous vehicle, it is the preliminary step for the motion planning system. Estimation algorithms hence need to handle a considerable amount of information coming from multiple sensors to provide estimates of the motion of the ego-vehicle and the surrounding obstacles. Furthermore, tracking is crucial in obstacle state estimation, because it ensures obstacle recognition over time. This paper presents an integrated algorithm for the estimation of the ego-vehicle's and obstacles' positioning and motion along a given road, modeled in curvilinear coordinates. Sensor fusion deals with information coming from two radars and a lidar to identify and track obstacles. The algorithm has been validated through experimental tests carried out on a prototype autonomous vehicle.
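As an illustration of working in curvilinear road coordinates, the sketch below projects a Cartesian position onto a polyline road centerline to obtain an arc-length coordinate s and a signed lateral offset d. It is a generic construction under the stated assumptions (a sampled 2D centerline, a single 2D point), not the authors' estimation or fusion algorithm.

```python
import numpy as np

def to_curvilinear(point, centerline):
    """Project a 2D point onto a polyline road centerline and return its
    curvilinear coordinates (s, d): arc length along the road and signed
    lateral offset from it (positive to the left of the travel direction).

    point: (2,) x, y position of the ego-vehicle or an obstacle.
    centerline: (M, 2) ordered x, y samples of the road centerline,
                assumed to have distinct consecutive points.
    """
    best = (np.inf, 0.0, 0.0)                      # (squared distance, s, d)
    s_start = 0.0
    for a, b in zip(centerline[:-1], centerline[1:]):
        seg = b - a
        seg_len = np.linalg.norm(seg)
        # Parameter of the orthogonal projection, clamped to the segment.
        u = np.clip(np.dot(point - a, seg) / seg_len**2, 0.0, 1.0)
        foot = a + u * seg
        diff = point - foot
        d2 = diff @ diff
        if d2 < best[0]:
            # 2D cross product gives the signed lateral offset.
            d = (seg[0] * diff[1] - seg[1] * diff[0]) / seg_len
            best = (d2, s_start + u * seg_len, float(d))
        s_start += seg_len
    return best[1], best[2]                        # (s, d)
```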