Mass data collection systems, based on either images or LiDAR (Light Detection and Ranging), gather field information that can often be interpreted visually. However, visual inspection is so time- and resource-intensive that it frequently renders the technology impractical. For this reason, there is an increasing trend towards automating feature interpretation and identification, realized through algorithms that perform these tasks on images and point clouds. The design of these algorithms consists of: (i) the formulation of accurate abstractions of the target features (i.e. attributes and descriptors that allow their identification and separation from other features), and (ii) the implementation, that is, the transformation of these abstractions into a set of computer instructions.
In recent years, the use of terrestrial laser scanners has increased substantially, particularly that of mobile terrestrial laser scanners. These systems gather hundreds of thousands of points per second on the objects and surfaces around them while the mobile platform is moving, and they overcome some of the disadvantages of static terrestrial laser scanners and aerial LiDAR. Mobile laser scanners usually provide point clouds with centimetric accuracy but highly variable point distribution and density (depending on the distances and incidence angles to the surrounding surfaces and objects). The point clouds are also affected by occlusions, caused by the perspective from a specific mobile viewpoint that is usually close to the ground.
In this PhD thesis, two data structures for reducing the effects of the large amount of data and the heterogeneity of the point distribution are developed: (i) voxelization, and (ii) line clouds. These two structures allow the creation of reduced and homogenized versions of the point cloud, and have the advantage of being reversible. That is, the transformation of a point cloud into any of these structures produces a simplified version of the data on which the feature detection algorithms can be applied. Afterwards, the simplified structures (voxels and/or lines) are labelled, and the transformation is reversed in order to recover the original points. These points inherit the labels from the simplified structures.
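The reversible voxelization described above can be illustrated with a minimal sketch. This is not the thesis implementation; the function names and the simple floor-division binning are assumptions chosen for clarity. Points are grouped into cubic voxels, labels are assigned per voxel, and the transformation is then reversed so that each original point inherits the label of its voxel:

```python
from collections import defaultdict

def voxelize(points, voxel_size):
    """Group 3D points into cubic voxels of edge length voxel_size.

    Returns a dict mapping each voxel index (ix, iy, iz) to the list
    of indices of the original points that fall inside it. Algorithms
    can then operate on the (much smaller) set of voxel keys.
    """
    voxels = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        key = (int(x // voxel_size),
               int(y // voxel_size),
               int(z // voxel_size))
        voxels[key].append(i)
    return voxels

def devoxelize(voxels, voxel_labels, n_points):
    """Reverse the transformation: each original point inherits the
    label assigned to its voxel (None if the voxel was not labelled)."""
    point_labels = [None] * n_points
    for key, point_indices in voxels.items():
        label = voxel_labels.get(key)
        for i in point_indices:
            point_labels[i] = label
    return point_labels

# Example: two nearby points share a voxel; a third lies elsewhere.
points = [(0.1, 0.1, 0.1), (0.15, 0.12, 0.2), (1.3, 0.1, 0.1)]
voxels = voxelize(points, voxel_size=0.5)
labels = {key: "pole" for key in voxels}   # label every voxel
point_labels = devoxelize(voxels, labels, len(points))
```

Because only voxel indices and point-index lists are stored, the mapping back to the original points is exact, which is the property that makes the structure reversible.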
Three algorithms for automatic feature detection are developed on the basis of these two simplification and homogenization structures: (i) an algorithm for automatic detection of pole-like street furniture (e.g. traffic signs, traffic lights or lampposts); (ii) an algorithm for automatic surface detection (e.g. façades, walls or panels); and (iii) an algorithm for automatic delineation of road and street edges. For each algorithm, the abstractions, implementation and validation tests are described and specified.
In the validation tests, the algorithm for automatic detection of pole-like objects identifies more than 92% of the target objects in the test point clouds, with actual poles accounting for 84% of the detected features. For the second algorithm, the results show that more than 90% of the points belonging to each of the 27 test surfaces are assigned to them, and 99.9% of the points assigned to each surface are correct. The third algorithm delineates more than 97% of the test road surface, and 98% of the surface it delineates belongs to the actual road. All three algorithms overcome many of the drawbacks of previous approaches, and they surpass them in terms of detection and success rates.