Article

Investigating laser scanner accuracy


Abstract

Questions concerning the quality and accuracy of the recorded 3D points of laser scanners receive little attention. In a research project, i3mainz has installed a number of different test targets that allow an investigation into the quality of points recorded by laser scanners and the geometric models derived from the point clouds. The standardized tests also allow a comparison between instruments of many different manufacturers for the first time. Seven instruments have been tested, and more tests are already scheduled for the near future.

... These guidelines were followed in the present study. The article by Boehler et al. [25] also mentions the use of spheres, plates, and targets for accuracy determination in a laboratory setting. ...
... The first consideration pertains to the choice of a laboratory environment with controlled climatic conditions, similar to the approach proposed in the articles by Beraldin et al. [5], Wang et al. [6], Muralikrishnan et al. [24], and Boehler et al. [25]. This prevents environmental uncertainties from significantly influencing the measurement uncertainty calculations in the laser scanner accuracy assessment. ...
... The third consideration relates to the use of sphere centers as a reference for determining measurements between center-to-center distances, as described by Wang et al. [6], Muralikrishnan et al. [24], and Boehler et al. [25]. ...
Article
Full-text available
The dimensional accuracy of a laser scanner has been extensively evaluated using various measurement methods and diverse reference standards. This study specifically focuses on two key considerations. Firstly, it assesses the dimensional accuracy of the laser scanner by employing another laser scanner, a handheld scanner, as the reference measurement method. Secondly, the study involves the use of three spheres fixed on each wall in both coplanar and non-coplanar positions within a laboratory room at SENAI ISI-SIM. The primary objective is to determine the dimensional accuracy between the centers of the coplanar and non-coplanar spheres up to 10 m. The comparison includes measurement uncertainties, as per ISO GUM standards, obtained using the laser scanner in a laboratory setting with controlled temperature and humidity. Analyzing non-coplanar dimensional accuracy enhances our understanding of the metrological performance of the laser scanner, particularly when assessing the dimensions of objects positioned randomly within a scanning scene.
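The sphere-based procedure described in this abstract can be sketched briefly: fit spheres to the scanned points and compare center-to-center distances. The following is a minimal illustration under stated assumptions (an algebraic least-squares sphere fit; function names and the sphere radius in the usage below are illustrative, not the authors' implementation):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Solves x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d for the
    center (a, b, c); the radius is sqrt(d + a^2 + b^2 + c^2).
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def center_to_center(points_a, points_b):
    """Distance between the fitted centers of two scanned spheres."""
    ca, _ = fit_sphere(points_a)
    cb, _ = fit_sphere(points_b)
    return float(np.linalg.norm(ca - cb))
```

Comparing such fitted center-to-center distances against a reference instrument is what yields the dimensional accuracy figures discussed in the abstract.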
... Unfortunately, TLS does not inherently capture specific features like the gaps or openings between blocks, which represent our study's focus. These features become discernible only after modelling the extracted scanned point clouds [7]. ...
... The quality and precision of point cloud data are influenced by several factors, such as instrumental mechanisms, environmental conditions, object surface properties, scan geometry, and object geometry [21][22][23]. Boehler et al. [7] present a comprehensive study on the accuracy of 3D laser scanners, comparing various models through standardized tests to assess the quality of the data that they produce. The effects of the scan geometry, focusing on the laser incidence angle and the range of the beam, have been intensively investigated [24][25][26][27][28]. ...
... Primarily, the exclusive use of the BLK360 scanner may limit the applicability of our findings across various static TLS devices. It is worth noting that there can be significant differences in the quality of data scanned by various 3D laser scanners, as reported by Boehler et al. [7]. As detailed by Petrie and Toth [39] in their comprehensive comparison of TLS systems, different scanners vary in technical specifications, which can potentially affect data accuracy and precision. ...
Article
Full-text available
This study investigates the use of terrestrial laser scanning (TLS) in urban excavation sites, focusing on enhancing ground deformation detection by precisely identifying opening geometries, such as gaps between pavement blocks. The accuracy of TLS data, affected by equipment specifications, environmental conditions, and scanning geometry, is closely examined, especially with regard to the detection of openings between blocks. The experimental setup, employing the BLK360 scanner, aimed to mimic real-world paving situations with varied opening widths, allowing an in-depth analysis of how factors related to scan geometry, such as incidence angles and opening orientations, influence detection capabilities. Our examination of various factors and detection levels reveals the importance of the opening width and orientation in identifying block openings. We discovered the crucial role of the opening width, where larger openings facilitate detection in 2D cross-sections. The overall density of the point cloud was more significant than localized variations. Among geometric factors, the orientation of the local object geometry was more impactful than the incidence angle. Increasing the number of laser beam points within an opening did not necessarily improve detection, but beams crossing the secondary edge were vital. Our findings highlight that larger openings and greater overall point cloud densities markedly improve detection levels, whereas the orientation of local geometry is more critical than the incidence angle. The study also discusses the limitations of using a single BLK360 scanner and the subtle effects of scanning geometry on data accuracy, providing a thorough understanding of the factors that influence TLS data accuracy and reliability in monitoring urban excavations.
... In addition to that, the temperature of the scanner may be higher than the ambient temperature as the scanning time increases; this may cause some errors (Reshetyuk, 2009). Rain and dust could also influence the measurements (Boehler et al., 2003; Reshetyuk, 2009). Although the terrestrial laser scanner is considered an active sensor (working with the same efficiency in bright daylight and at night), some researchers (Boehler et al., 2003; Pfeifer et al., 2007; Voisin et al., 2007) have reported that ambient lighting can affect the range measurement accuracy. ...
... The object's surface reflectivity affects the returned laser signal's power density and consequently the signal-to-noise ratio. The effects of surface properties, which can be described, for example, by roughness, albedo, directional-hemispherical reflectance, and optical properties, on the accuracy of the measurements have been investigated in several studies (Boehler et al., 2003; Hoefle and Pfeifer, 2007; Pfeifer et al., 2008). ...
Article
Full-text available
For 3D modelling of any object at close range, two common techniques are employed: image-based and range-based. In the image-based technique, the orientation parameters of the tool used (camera, sensor) must be obtained first, and then measurements followed by certain computations must be carried out to recover points of interest on the required object. On the other hand, the range-based technique (terrestrial laser scanning) can obtain 3D point clouds recovering the object directly, without further measurements or processing. To improve the final accuracy, some sources of error should be avoided (instrumental, environmental, and object-related) and others should be controlled (geometric planning). Geometric planning concerns the position of the scanner relative to the object, which can be controlled by adjusting the angle and the range of the transmitted beam. The current research paper therefore focuses on the effect of various ranges on point cloud accuracy. In this context, the experiment was conducted using different ranges from 2.58 m to 25.0 m. A multi-part target was designed and manufactured, equipped with a metal frame used to properly install and move (or rotate) all of its parts. This target is suitable for use at different ranges, serving the purpose of this research. The results indicated that, for both the short tested distances from 2.58 m to 7.0 m and the long tested distances from 15.0 m to 25.0 m, it is preferable to scan the object from the closer distance rather than the longer one.
... The quality of point clouds obtained using LiDAR devices is influenced by many factors, e.g. laser scanning technique and features of the scanned object (Boehler et al. 2003). In addition to these factors, other sources of error can occur when scanning in forest conditions, such as unfavourable weather (wind and precipitation), occlusion by plants and large scanning distances due to the size of the objects. ...
... In natural environments, one must expect wind, humidity and dust particles which all might cause different artefacts (Boehler et al. 2003;Krok et al. 2020). Even slight wind, which is not even detectable from below the canopy, might move branches in the upper crowns (Oliver 1971). ...
Article
Full-text available
Quantitative structural models (QSMs) are frequently used to simplify single tree point clouds obtained by terrestrial laser scanning (TLS). QSMs use geometric primitives to derive topological and volumetric information about trees. Previous studies have shown a high agreement between TLS and QSM total volume estimates alongside field measured data for whole trees. Although already broadly applied, the uncertainties of the combination of TLS and QSM modelling are still largely unexplored. In our study, we investigated the effect of scanning distance on length and volume estimates of branches when deriving QSMs from TLS data. We scanned ten European beech (Fagus sylvatica L.) branches with an average length of 2.6 m. The branches were scanned from distances ranging from 5 to 45 m at step intervals of 5 m from three scan positions each. Twelve close-range scans were performed as a benchmark. For each distance and branch, QSMs were derived. We found that with increasing distance, the point cloud density and the cumulative length of the reconstructed branches decreased, whereas individual volumes increased. Dependent on the QSM hyperparameters, at a scanning distance of 45 m, cumulative branch length was on average underestimated by − 75%, while branch volume was overestimated by up to + 539%. We assume that the high deviations are related to point cloud quality. As the scanning distance increases, the size of the individual laser footprints and the distances between them increase, making it more difficult to fully capture small branches and to adjust suitable QSMs.
... Furthermore, a study focusing on the laser rangefinder and angular resolution components of the system leads to a more controlled measurement setup and allows further (systematic) errors due to the multi-sensor system to be eliminated. The resolution capability is well studied for 3D laser scanners with a variety of approaches [23][24][25], which are mostly based on a dedicated measurement object. Since the investigated laser scanner measures only 2D profiles, the approach for the determination of the resolution capability has to be adapted to the 2D case and is presented in the methodology. ...
... Within this study, we investigate the resolution capability in angular direction. Typically, the resolution capability is analyzed using a specific object with two parallel planes described as foreground and background [23][24][25]. By scanning this object with a laser scanner, the foreground and background are often covered by the same laser beam footprint [34], which might lead to the so-called "mixed pixel" effect. ...
Article
Due to recent improvements in sensor technology, UAV-based laser scanning is nowadays used in more and more applications like topographic surveying or forestry. The quality of the scanning result, a georeferenced 3D point cloud, mainly depends on errors coming from the trajectory estimation, the system calibration and the laser scanner itself. Due to the combined propagation of errors into the point cloud, the individual contribution is difficult to assess. Therefore, we propose an entire investigation of the scan characteristics of a 2D laser scanner without the use of the other sensors included in the system. The derived parameters include the range precision, the rangefinder offset as part of the range accuracy, the angular resolution capability and the multi-target capability of the RIEGL miniVUX-2UAV. The range precision is derived from amplitude values by a stochastic model, with observations fitting a theoretical model very well. The resolution capability in the angular direction is about twice the laser beam footprint size and therefore increases linearly for larger distances. Further, a new approach with the corresponding methodology for the investigation of multi-target capability is presented. The minimum distance between two targets to appear as separated echoes within a single laser beam is about 1.6 m and inliers within the measurement precision occur from 1.9 m separation distance. The scan attributes amplitude and deviation, which are computed during the online waveform processing, show a clear systematic relation to the range precision, also in cases of multiple echoes.
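The reported scaling of the resolution capability (about twice the laser beam footprint, growing linearly with range) can be illustrated with a simple linear beam-spread model. This is a hedged sketch with illustrative default values, not the specification of the RIEGL miniVUX-2UAV:

```python
def footprint_diameter_mm(range_m, exit_diameter_mm=3.5, divergence_mrad=0.5):
    """Laser footprint diameter (mm) under a linear beam-spread model:
    exit diameter plus range times full beam divergence (mrad * m = mm).
    Default values are illustrative, not a particular scanner's specs."""
    return exit_diameter_mm + range_m * divergence_mrad

def angular_resolution_mm(range_m, **beam):
    """Resolution capability in the angular direction, taken here as
    roughly twice the footprint size, as reported in the abstract above."""
    return 2.0 * footprint_diameter_mm(range_m, **beam)
```

Because the footprint grows linearly with range, so does this estimate of the angular resolution capability, matching the linear increase described above.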
... Since the strength and impact of the interfering quantities vary depending on whether the uncertainty evaluation is conducted in a laboratory [24][25][26][27] or in the field [26][27][28]34], different levels of uncertainty may be encountered. ...
... However, it is difficult to obtain quantitative information for variability, regarding the particular application of interest, due to the high number of uncertainty causes that might exist, depending on the specific measurement environment and methods [6,28]. The variability causes are particularly numerous in the case of surveys performed in post-earthquake emergency conditions, due to the numerous and stringent limitations on the measurement procedure: the need to operate in the shortest amount of time, the impossibility of reproducing optimal configurations due to confined spaces, continuous movement of people and vehicles, the possibility of impacts or voluntary movements of the instrument, interposition of people and means between the instrument and the building, and critical environmental conditions. ...
Thesis
Full-text available
In archaeology, it is useful to document the shape of features of interest. There are many three-dimensional measurement technologies available that can help accomplish this task. An error model for a handheld 3D scanner called the DPI-7 was created. This error model reduced the errors in the in-plane directions by up to 59%. The levels of precision of two technologies, terrestrial laser scanning and computer-vision-assisted photogrammetry, were determined through the simulation of observations in a virtual environment. It was found that terrestrial laser scanning point observations had a standard deviation (in the direction of least precision) of 6 mm, while photogrammetry could achieve a value of 10 mm. The point cloud data from the scans of an excavation in the Canadian Arctic were used to create a detailed and coloured visual model of the site, which was subsequently used in a virtual reality visualization of the site in question.
... Specification [20] specifies four different levels of LOA and LOD depending on the purpose of the PCD. The LOA is defined as the tolerance of the positioning accuracy of each point in the PCD, which is mostly affected by these four factors: (1) target properties; (2) atmospheric conditions; (3) TLS mechanism; and (4) scanning geometry (e.g., range and the incidence angle) [9,16,29,30]. The first three factors are not subjective to an inspector, while the scanning geometry is related to the location of the scanner, which can be controlled and optimized. ...
... Note that ρ and α can be determined by the scanner location. Since the scanner location has an effect on both LOA and LOD [29][30][31][32][33], the LOA and LOD criteria should be used as the constraints when optimizing scanner locations. The registrability criteria are used to ensure a pair of scans to be aligned with a unified PCD. ...
Article
The acquisition of point cloud data from a space frame using terrestrial laser scanning is usually affected by many occluding components and site conditions and therefore needs to achieve optimal priori planning, which is handled as the planning for scanning (P4S) problem. This paper describes a three-dimensional model-based P4S approach for space frame structures, where a space modeling solution is employed to simulate the scanning target and environment. The P4S problem modeling is used to define the visibility analysis and constraints. Lastly, a two-phase optimization is proposed to solve the P4S problem and compared with a weighted greedy algorithm. Experiments were conducted on a full-scale space frame to validate the proposed approach.
... Several authors have studied the pronounced effect and also proposed algorithms to detect or filter the mixed pixels from the obtained point clouds: Wang et al. (2016, 2019), Tang et al. (2007), Hebert and Krotkov (1992). The analysis of various indicators of laser scanner accuracy on specially designed targets, including the influence of mixed pixels on effective resolution and edge effects, is presented in Boehler et al. (2003). ...
... The mixed pixel analytical model is extended to compute the width of the transition region between the two planar surfaces involved, where the measurements cannot be resolved independently for one of the targets, as shown in the right-most sketch in Figure 3.35. This analytical RC model complements the previous empirical investigations: Schmitz et al. (2020, 2021), Boehler et al. (2003), Lichti and Jamtsho (2006), Lichti (2004), Huxhagen et al. (2011). ...
Thesis
Full-text available
Reflectorless electronic distance measurement (RL-EDM) based on optical signals, transmitted from the instrument and reflected by natural surfaces, enables fast, accurate 3d mapping and digitization of the environment using terrestrial laser scanning (TLS). The measurements are naturally affected by the instrumental imperfections, atmospheric effects, and surface and material effects that primarily involve the geometry and physical properties, roughness, penetration, and anisotropic reflections of the scanned surfaces. This consequently limits the achievable accuracy or makes it challenging to make reliable accuracy predictions for applications where this is needed. In practice, the surface-related effects are the most influential ones for scanning distance of tens to hundreds of meters. In this thesis, a numeric model for the light detection and ranging (LiDAR) measurement process has been developed and implemented for numerical simulations to understand and study the surface-related effects and their influence on the accuracy of the TLS measurements. The numeric simulations follow a geometrical optics approach, assuming a fundamental Gaussian laser beam profile. The measurement process involves a continuous wave (CW) phase-based RL-EDM with I/Q-demodulation or a pulsed wave (PW) based RL-EDM with pulse detection technique. The main underlying assumptions involve the beam parameters, the surface topography within the measurement footprint, the material properties within the measurement footprint, the geometric configuration, which involves the distance and angle of incidence, noise at the detector, and the measurement principle. The simulations are extended from RL-EDM to a 3d laser scanning process by deflecting the measurement beam into incrementally changed spatial directions. 
Numeric simulations are used herein to investigate the effects of the angle of incidence, surface curvature, mixed pixels, and surface roughness, approximating the surface geometry by a triangular irregular network (TIN) of high spatial resolution. The surfaces are assumed to be perfectly diffuse; an absolute reflectance and a Lambertian scattering model are allocated to each triangle in the network. The LiDAR equation is implemented to compute the power received at the detection unit. The simulation outcomes are compared to the corresponding experimental results obtained with scanning experiments using phase-based laser scanners (mostly Z+F Imager 5016 and Faro X330) both indoors and outdoors. In addition, experimental investigations of a few selected surface specimens, namely, spruce wood, beech wood, and concrete using a Z+F 5016 scanner, are carried out to understand the impact of the underlying surface-related effects on the deviations and noise of the laser scanning measured points. The experimental studies help to understand the plausible reasons for the discrepancies between the simulated and real scan results. As an additional contribution, a simple procedure for retrieving and deriving sufficient approximations of the beam parameters of a phase-based laser scanner experimentally is developed. They are necessary for realistic simulations.
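The simplified radiometric relation underlying such simulations, received power proportional to reflectance times the cosine of the incidence angle divided by the squared range for a perfectly diffuse target, can be written down directly. A minimal sketch, with all constant factors (aperture, transmission losses, emitted power) folded into one parameter; this is an illustration of the Lambertian LiDAR-equation term, not the thesis's full numeric model:

```python
import math

def received_power(reflectance, incidence_deg, range_m, p0=1.0):
    """Relative received power for a Lambertian target under the
    simplified LiDAR equation: P_r proportional to rho * cos(theta) / r^2.
    Values are relative, not calibrated to any instrument."""
    theta = math.radians(incidence_deg)
    return p0 * reflectance * math.cos(theta) / range_m ** 2
```

Doubling the range quarters the received power, and grazing incidence angles drive it toward zero, which is why angle of incidence and range dominate the surface-related effects discussed above.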
... Beraldin et al. [2] listed some sources of measurement uncertainty in the use of three-dimensional scanning instruments. Systematic errors in this equipment have been observed by some authors [2,25,26] and may increase in some cases, such as when scanning an object with many discontinuities, prominent corners, or scanning against a high-contrast surface. ...
... Another relevant aspect concerns the systematic errors observed by some authors [2,25,26,31] in this equipment. According to them, systematic errors can increase in cases such as in the digitalization of an object with many discontinuities, prominent corners, or scanning against a high contrast surface. ...
Article
Considering the recent use of three-dimensional digitalization equipment in ballistic vest tests to characterize trauma caused by projectiles, this study carried out a performance analysis of a 3D structured light scanner to measure trauma depths. Artifacts were manufactured and digitized by a 3D scanner and by an articulated arm coordinate measuring machine, which provided the reference values. A process was developed for estimating the depth from the point clouds, one of the main contributions of this work. Filtering and segmentation of the point clouds allowed the extraction of the trauma depths for later comparison. The systematic errors reported in the literature for structured light scanning equipment were confirmed, and it was statistically verified that the critical trauma measurement values are correctly measured, with a bias of 0.11 mm and standard measurement uncertainty of 0.12 mm. Finally, a real ballistic vest test shows the applicability in a practical scenario.
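The reported bias and standard measurement uncertainty can be reproduced, in spirit, by a Type A evaluation over the paired deviations between the scanner and the reference instrument. A minimal sketch, not the study's actual processing chain (function and variable names are assumptions):

```python
import statistics

def bias_and_uncertainty(measured, reference):
    """Bias (mean deviation from the reference instrument) and standard
    measurement uncertainty (sample standard deviation of the deviations),
    in the spirit of an ISO GUM Type A evaluation."""
    deviations = [m - r for m, r in zip(measured, reference)]
    return statistics.mean(deviations), statistics.stdev(deviations)
```

Applied to depth values from the scanner and from the articulated-arm CMM, such a routine yields figures of the same kind as the 0.11 mm bias and 0.12 mm standard uncertainty reported above.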
... Ranging tests have also been reported in more controlled environments against reference instruments of higher accuracy. For example, Boehler et al. [26] used an interferometer to evaluate a TLS over a short range (<10 m). Ingensand et al. [27] reported on range measurements of a TLS performed on a 52 m calibration track, where the reference values were established using an interferometer. ...
Article
Full-text available
Laser trackers (LTs) are dimensional measurement instruments commonly employed in the manufacture and assembly of large structures. Terrestrial laser scanners (TLSs) are a related class of dimensional measurement instruments more commonly employed in surveying, reverse engineering, and forensics. Commercially available LTs typically have measurement ranges of up to 80 m. The measurement ranges of TLSs vary from about 50 m to several hundred meters, with some extending as far as several kilometers. It is difficult, if not impossible, to construct long reference lengths to evaluate the ranging performances of these instruments over that distance. In this context, we explore the use of stitching errors (i.e., stacking errors in adjoining or overlapping short lengths) and stitching lengths (i.e., constructing long reference lengths from multiple positions of a reference instrument by registration) to evaluate these instruments. Through experimental data and a discussion on uncertainty, we show that stitching is indeed a viable option to evaluate the ranging performances of LTs and TLSs.
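Under the common assumption of independent segment errors, the uncertainty of a length assembled by stitching shorter reference lengths grows as the root sum of squares of the segment uncertainties. A minimal sketch of this propagation (the numeric values in the usage are illustrative, not from the paper):

```python
import math

def stitched_length_uncertainty(segment_uncertainties):
    """Standard uncertainty of a long length assembled by stitching
    (stacking) shorter reference lengths, assuming the segment errors
    are independent: u_total = sqrt(sum of u_i squared)."""
    return math.sqrt(sum(u * u for u in segment_uncertainties))
```

Four stitched segments, each known to 0.05 mm, give a combined standard uncertainty of 0.1 mm; the slow square-root growth is what makes stitching viable for constructing long reference lengths.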
... One main factor is the loss of measurements when the A2M12 measures the distance to the convex corners of the reference layers. When the pulsed laser emitted from the LiDAR reflects off the layer, the laser does not return to the sensor's receiver due to the angle of incidence upon the convex corner (Boehler et al. 2003). This scenario results in the loss of width and height measurements for the corners of the layer, causing the width measurement to return as a smaller value. ...
... The effects of the categories mentioned above have been intensely studied for survey-grade scanners (Boehler et al., 2003, Voegtle et al., 2008, Hartmann et al., 2023. Modern commercially available terrestrial LiDAR instruments are individually calibrated and often compensate for these effects. ...
Article
Full-text available
Terrestrial LiDAR is an established method for 3D data acquisition in close-range applications. A new category of low-cost LiDAR sensors for autonomous driving has become available with similar specifications. However, these new sensors lack the guarantee of survey-grade performance. Initial experiments have broadly confirmed the specification of one such low-cost sensor but have also raised issues with the radiometric behaviour. This study investigates through practical experiments how the intensity information of the Livox-Mid40 laser scanner is influenced by time, reflectance, distance and angle of incidence. The quantitative analysis of the experiments shows an expected relationship between surface reflectance and recorded intensity. However, the result indicate that intensity can significantly influence the distance measurement at very close ranges.
... Performing the same tasks with multiple scanners, under the same environmental conditions and setup, allows for meaningful and insightful comparisons. One of the first explorations of laser scanner accuracy came from [6], where TLSs from different manufacturers were compared by scanning planar surfaces with varying reflectivity and range. A more recent example where multiple terrestrial laser scanners were examined can be found in [7], where various tests were performed to explore the distance accuracy of five similar TLSs. ...
Article
Full-text available
Terrestrial laser scanners are powerful measurement devices commonly used for 3D modelling tasks generating large volumes of data with fast acquisition as a first priority. However, these scanners can alternatively be used to produce near real-time, engineering quality spatial data concerning the changing state of manufactured components. This paper provides a comprehensive analysis of two terrestrial laser scanners capturing aerospace materials and components, and their associated quality measures. In order to explore the limitations of the tested TLS instruments, a mechanical jig was designed incorporating both a rotation and translation stage. This study involved three elements of a point cloud processing workflow: data capture, registration and feature extraction. Sphere-based 7DoF registration is applied using two different commercially available software packages with varying levels of user control. To analyse the quality of the registration, control points extracted from captured point clouds were compared to nominal values measured using a laser tracker. The quality of the registration was consistent, with differences kept between 0.4 mm and 0.6 mm. To evaluate the quality of the captured point clouds, two different tests were conducted. This included planar fit tests on an aluminium drilling template, and sphere fitting tests on white 1.5” spherical targets in magnetic nests. One half of the aluminium drilling template was coated with matte spray to reduce erroneous laser reflections. Finally, the registered point clouds were input to a developed algorithm which automatically extracted drilling holes from the drilling template. Previous scanning work performed on aerospace materials showed evidence of optical rattling caused by high intensity reflections from the interior holes in a drilling template. Further exploration showed that the amount of optical rattle varies systematically with incidence angle. 
This work demonstrates a systematic offset in the location of extracted hole centres in the drilling template. This offset is dependent on laser incidence angle, and can therefore be accounted for when locating manufacturing components from a known scanning position.
... After conditioning the surfaces of both parts with white paint to reduce reflectivity [31], several point markers were employed in the scanning process to improve accuracy. The optimal configuration, identified after a trial-and-error procedure, was adopted for both manufactured parts (see Figs. 9 and 10). ...
Article
Full-text available
This study aims to evaluate the advantages and criticalities of applying additive manufacturing to produce climbing holds replicating real rocky surfaces. A sample of a rocky surface has been reproduced with a budget-friendly 3D scanner exploiting structured light and made in additive manufacturing. The methodology is designed to build a high-fidelity replica of the rocky surface using only minor geometry modifications to convert a 2D triangulated surface into a 3D watertight model optimised for additive manufacturing. In addition, the research uses a novel design and uncertainty estimation approach. The proposed methodology proved capable of replicating a rocky sample with sub-millimetre accuracy, which is more realistic than conventional screw-on plastic holds currently used in climbing gyms. The advantages can be addressed in terms of customisation, manufacturing cost and time reduction that could lead to real outdoor climbing experiences in indoor environments by coupling additive manufacturing techniques and reverse engineering (RE). However, operating the scanner in a rocky environment and the considerable size of the climbing routes suggest that further research is needed to extend the proposed methodology to real case studies. Further analysis should focus on selecting the best material and additive manufacturing technology to produce structural components for climbing environments.
... Additionally, materials with low reflectivity exhibit a shorter ranging distance compared to those with high reflectivity. Boehler et al. [16] carried out experimental tests on multiple laser scanners to evaluate their quality and measurement errors. Utilizing a range of test targets, such as planar objects with diverse reflectivities and white spheres, the researchers compared the measurement errors and point cloud noise of the scanners at varying distances and with different surface materials. ...
Article
Full-text available
Three-dimensional laser scanning has emerged as a prevalent measurement method in numerous high-precision applications, and the precision of the obtained data is closely related to the intensity information. Comprehending the association between intensity and point cloud accuracy facilitates scanner performance assessment, optimization of data acquisition strategies, and evaluation of point cloud precision, thereby ensuring data reliability for high-precision applications. In this study, we investigated the correlation between point cloud accuracy and two distinct types of intensity information. In addition, we presented methods for assessing point cloud accuracy using these two forms of intensity information, along with their applicable scopes. By examining the percentage intensity, we analyzed the reflectance properties of the scanned object’s surface employing the Lambertian model. Our findings indicate that the Lambertian circle fitting radius is inversely correlated with the scanner’s ranging error at a constant scanning distance. Experimental outcomes substantiate that modifying the surface characteristics of the object enables the attainment of higher-precision point cloud data. By constructing a model associating the raw reflectance intensity with ranging errors, we developed a single-point error ellipsoid model to assess the accuracy of individual points within the point cloud. The experiments revealed that the ranging error model based on the raw intensity is solely applicable to point cloud data unaffected by specular reflectance properties. Moreover, the devised single-point error ellipsoid model accurately evaluates the measurement error of individual points. Both analytical methods can be utilized to evaluate the performance of the scanner as well as the accuracy of the acquired point cloud data, providing reliable data support for various high-precision applications.
... The quantification of the instrumental error requires an investigation of the mechanical, electronic and optical sources of the error for calibration and correction [30]. In addition, it is also affected by factors such as incidence angle, scan distance, atmospheric conditions, and surface characteristics of the observed objects [6,29,[31][32][33][34][35][36][37][38][39]. Fortunately, most available TLS scanners, such as the one used in this study, have very high measurement accuracy. ...
Article
Full-text available
Retaining walls are often built to prevent excessive lateral movements of the ground surrounding an excavation site. During an excavation, failure of retaining walls could cause catastrophic accidents, and hence their lateral deformations are monitored regularly. Laser scanning can rapidly acquire the spatial data of a relatively large area at fine spatial resolutions, which is ideal for monitoring retaining walls' deformations. This paper applies terrestrial laser scanning (TLS) to measure lateral deformations of a soil mixing retaining wall at an ongoing excavation site. Reference measurements by total station and inclinometer were also conducted to verify those from the laser scanning. The deformations derived using laser scanning data were consistent with the reference measurements at the top part of the retaining wall (mainly the ring beam of the wall). This research also shows that the multiscale model-to-model comparison method was the most accurate deformation estimation method on the research data.
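Model-to-model deformation estimates of the kind compared above measure the displacement between two epochs along the local surface normal. A minimal sketch (a plain SVD plane fit per patch; the full multiscale algorithm also handles scale selection and confidence intervals):

```python
import numpy as np

def normal_distance(ref_patch, cmp_patch):
    """Local deformation between two co-located patches: the offset of the
    comparison patch's mean point from the reference patch's mean point,
    projected onto the reference patch's fitted plane normal."""
    ref = np.asarray(ref_patch, float)
    cmp_ = np.asarray(cmp_patch, float)
    centroid = ref.mean(axis=0)
    # Plane fit: the normal is the singular vector of the smallest
    # singular value of the centered reference patch.
    _, _, vt = np.linalg.svd(ref - centroid)
    normal = vt[-1]
    return float((cmp_.mean(axis=0) - centroid) @ normal)
```

The sign of the result depends on the (arbitrary) normal orientation; applications typically fix it toward the scanner.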
... • automatic measurement of signalised control points, for which the dedicated Z+F LaserControl software was used [21], • point registration based on previously measured points using the target-based method [22,23], • filtering of points according to the intensity of the reflected laser beam, which made it possible to eliminate all points on which reflection or scattering of the laser beam occurred, fragments of objects on which the beam is refracted at edges (the so-called mixed-edge effect), etc. [5,[24][25][26], • on each of the scans, test areas were chosen and exported to .las format for further quality assessment ...
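The intensity-based filtering step above can be sketched as a simple band-pass on per-point intensity (the thresholds are illustrative assumptions, not values from the cited workflow):

```python
import numpy as np

def filter_by_intensity(points, intensity, lo=0.05, hi=0.95):
    """Drop returns whose intensity falls outside [lo, hi]: very low values
    are typical of scattered or absorbed beams, very high ones of specular
    reflections; both tend to carry inflated range noise."""
    intensity = np.asarray(intensity, float)
    mask = (intensity >= lo) & (intensity <= hi)
    return np.asarray(points, float)[mask], mask
```

Mixed-edge points usually need a separate geometric test (e.g. range discontinuity), since their intensity can look perfectly normal.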
Article
Full-text available
The development of high-resolution geometric documentation plays a fundamental role in conservation works and in managing cultural heritage objects. Nowadays, orthoimages are increasingly used for these purposes, as they combine geometric accuracy (derived, among other things, from terrestrial laser scanning data and/or photogrammetric methods) with visual quality (based on information from images). The objective of this article is to present a methodology for generating orthoimages based on the integration of data acquired by the Z+F 5006h terrestrial laser scanner and the Canon EOS 5D Mark II digital camera. A methodology for high-resolution, high-quality orthoimages is presented: the data were processed with modified versions of the Structure-from-Motion and MultiView Stereo approaches based on integrating TLS data with close-range images, together with extended data analysis. The primary objective of the work presented in this paper was to optimise the quality of the high-resolution orthoimages.
... For accurate inspection and defect detection, a precise 3D scanning method is required. Recently, some studies have closed the printing loop by employing state-of-the-art 3D reproduction methods, such as laser triangulation [5][6][7] or fringe projection [8][9][10]. Shape-from-focus (SFF) [11,12], also known as focus variation microscopy (FV), is a method that is also capable of measuring at µm-level accuracy and precision [13,14]. ...
Article
Full-text available
In 3D printing, as in other manufacturing processes, there is a push for zero-defect manufacturing, mainly to avoid waste. To evaluate the quality of the printed parts during the printing process, an accurate 3D measurement method is required. By scanning the part during the buildup, potential nonconformities to tolerances can be detected early on, and the printing process could be adjusted to avoid scrapping the part. Among many methods, shape-from-focus is an accurate one for recovering 3D shapes from objects. However, the state-of-the-art implementation of the method requires the object to be stationary during a measurement. This does not reconcile with the nature of 3D printing, where continuous motion is required for the manufacturing process. This research presents a novel methodology that allows shape-from-focus to be used in a continuous scanning motion, thus making it possible to apply it to the 3D manufacturing process. By controlling the camera trigger and a tunable lens with synchronous signals, a stack of images can be created while the camera or the object is in motion. These images can be re-aligned and then used to create a 3D depth image. The impact on the quality of the 3D measurement was tested by analytically comparing the quality of a scan using the traditional stationary method and of the proposed method to a known reference. The results demonstrate a 1.22% degradation in the measurement error.
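The core of shape-from-focus — choosing, per pixel, the focus-stack slice with the highest local contrast — can be sketched as follows (a discrete Laplacian as the focus measure and a hypothetical stack layout; the paper's focus operator may differ):

```python
import numpy as np

def depth_from_focus(stack, z_positions):
    """Shape-from-focus sketch: per pixel, pick the stack slice with the
    highest local contrast (here a wrap-around discrete Laplacian) and
    return that slice's focus distance as the depth estimate."""
    stack = np.asarray(stack, float)               # shape (n_slices, H, W)
    lap = np.abs(4 * stack
                 - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
                 - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2))
    best = lap.argmax(axis=0)                      # index of sharpest slice
    return np.asarray(z_positions)[best]
```

The re-alignment step the paper introduces happens before this: slices captured in motion must be registered so that a pixel addresses the same object point across the stack.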
... The results are shown in Fig. 13. Fig. 13 (a) depicts the so-called Boehler star [66] used as a target, which is a three-dimensional representation of the Siemens star used to determine the spatial resolution of depth sensors. The VLC module is used to acquire raw images with custom delays, and the exposure time is maintained at 5 ms. ...
Article
Full-text available
3D Time-of-Flight (ToF) cameras have recently received a lot of attention due to their wide range of applications. Despite remarkable advancements in ToF imaging, state-of-the-art ToF cameras are still afflicted by power-hungry illumination sources. To tackle this problem, we exploited existing lighting infrastructure, which ensures the ubiquitous presence of modulated light sources in indoor spaces that serve as opportunity illuminators. We explored the bistatic geometry for passive imaging using the pulse-based ToF approach. Our work is inspired by the recently introduced visible light communication (VLC) or light-fidelity (Li-Fi) infrastructure. VLC allows the infrastructure to provide indoor simultaneous illumination, communication, and sensing (SICS). To this end, we designed a bistatic geometry for the purpose of attaining passive 3D imaging. Such capabilities are achieved by exploiting the pulse shape of the autocorrelation function of real optical signals generated by VLC/Li-Fi modules (e.g., OpenVLC and LiFiMAX). We demonstrated passive imaging by means of matched filtering. In this work, we also studied different sampling strategies in the time-shift domain, including uniform, random, and sparse rulers, which is another step forward towards preserving high depth accuracy with a minimal number of measurements. The proposed methodology achieved successful depth reconstruction with negligible root-mean-square error (RMSE) at low signal-to-noise ratios (SNR) of the measurements. Parametric models such as Gaussian and sum-of-sines are used to characterize the cross-correlation functions and allow for robust parametric depth retrieval from a few measurements. Moreover, we attained a 20-mm worst-case error for a target at 25 cm. The experiment proved that bistatic passive depth reconstruction is feasible.
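The matched-filtering depth retrieval described above amounts to finding the cross-correlation lag between the transmitted and received waveforms. A minimal sketch with an idealized noiseless pulse (the actual VLC signals and bistatic geometry are more involved):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_range(tx, rx, fs):
    """Pulse-based ToF by matched filtering: the cross-correlation lag with
    the highest peak gives the round-trip delay t, hence range = c*t/2."""
    corr = np.correlate(rx, tx, mode="full")
    lag = int(corr.argmax()) - (len(tx) - 1)   # samples of delay
    return C * (lag / fs) / 2.0
```

At a 1 GHz sampling rate one sample of delay corresponds to about 15 cm of range, which is why the paper interpolates with parametric (Gaussian, sum-of-sines) models rather than taking the raw argmax.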
... We assessed the performance of the OS1-64 against the VZ-6000 in a standardized test setup which is based on Boehler et al. (2003). The first frame of each OS1-64 measurement was used for the test. ...
Article
Full-text available
We propose a newly developed modular MObile LIdar SENsor System (MOLISENS) to enable new applications for small industrial lidar (light detection and ranging) sensors. The stand-alone modular setup supports both monitoring of dynamic processes and mobile mapping applications based on SLAM (Simultaneous Localization and Mapping) algorithms. The main objective of MOLISENS is to exploit newly emerging perception sensor technologies developed for the automotive industry for geoscientific applications. However, MOLISENS can also be used for other application areas, such as 3D mapping of buildings or vehicle-independent data collection for sensor performance assessment and sensor modeling. Compared to TLSs, small industrial lidar sensors provide advantages in terms of size (on the order of 10 cm), weight (on the order of 1 kg or less), price (typically between EUR 5000 and 10 000), robustness (typical protection class of IP68), frame rates (typically 10-20 Hz), and eye safety class (typically 1). For these reasons, small industrial lidar systems can provide a very useful complement to currently used TLS (terrestrial laser scanner) systems that have their strengths in range and accuracy performance. The MOLISENS hardware setup consists of a sensor unit, a data logger, and a battery pack to support stand-alone and mobile applications. The sensor unit includes the small industrial lidar Ouster OS1-64 Gen1, a ublox multi-band active GNSS (Global Navigation Satellite System) with the possibility for RTK (real-time kinematic), and a nine-axis Xsens IMU (inertial measurement unit). Special emphasis was put on the robustness of the individual components of MOLISENS to support operations in rough field and adverse weather conditions. The sensor unit has a standard tripod thread for easy mounting on various platforms. 
The current setup of MOLISENS has a horizontal field of view of 360°, a vertical field of view with a 45° opening angle, a range of 120 m, a spatial resolution of a few centimeters, and a temporal resolution of 10-20 Hz. To evaluate the performance of MOLISENS, we present a comparison between the integrated small industrial lidar Ouster OS1-64 and the state-of-the-art high-accuracy and high-precision TLS Riegl VZ-6000 in a set of controlled experimental setups. We then apply the small industrial lidar Ouster OS1-64 in several real-world settings. The mobile mapping application of MOLISENS has been tested under various conditions, and results are shown from two surveys in the Lurgrotte cave system in Austria and a glacier cave in Longyearbreen on Svalbard.
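Sensor comparisons of this kind commonly include scanning a flat target and reporting the residual noise after a plane fit. A minimal sketch of that precision metric (synthetic data; not the exact test protocol used for MOLISENS):

```python
import numpy as np

def plane_noise_rms(points):
    """Single-scan precision test: fit a plane to points scanned on a flat
    target (SVD plane fit) and report the RMS of the orthogonal residuals."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    residuals = (pts - centroid) @ vt[-1]   # signed distances to the plane
    return float(np.sqrt((residuals ** 2).mean()))
```

Comparing this RMS across instruments at matched ranges and incidence angles gives a like-for-like noise figure.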
... Presently, it is possible to develop this Paleomimetic design process due to recent technological advances that provide (i) high fidelity in the capture of past models, (ii) greater reliability of results from numerical simulations, and (iii) high reproducibility of complex computer-aided design (CAD) models with digital manufacturing techniques [68,69]. High-fidelity 3D CAD models of fossilised structures can be acquired using various techniques, such as structure-from-motion (SfM) digital photogrammetry, 3D laser scanning and computed tomography, or microtomography (µCT) [11,[70][71][72]. Each technique differs in geometric and volumetric resolutions of the model provided; thus, the choice of a technique greatly varies in relation to the investigation necessary. ...
Article
Full-text available
In biomimetic design, functional systems, principles, and processes observed in nature are used for the development of innovative technical systems. The research on functional features is often carried out without giving importance to the generative mechanism behind them: evolution. To deeply understand and evaluate the meaning of functional morphologies, integrative structures, and processes, it is imperative to not only describe, analyse, and test their behaviour, but also to understand the evolutionary history, constraints, and interactions that led to these features. The discipline of palaeontology and its approach can considerably improve the efficiency of biomimetic transfer by analogy of function; additionally, this discipline, as well as biology, can contribute to the development of new shapes, textures, structures, and functional models for productive and generative processes useful in the improvement of designs. Based on the available literature, the present review aims to exhibit the potential contribution that palaeontology can offer to biomimetic processes, integrating specific methodologies and knowledge in a typical biomimetic design approach, as well as laying the foundation for a biomimetic design inspired by extinct species and evolutionary processes: Paleomimetics. A state of the art, definition, method, and tools are provided, and fossil entities are presented as potential role models for technical transfer solutions.
... According to the working principle used to return point coordinates, laser scanners can be classified into range scanners (phase-shift and time-of-flight, TOF) and triangulation scanners (Figure 3-7). A comparison between the different methods in terms of range of applications and grade of accuracy has been presented in (Guidi and Remondino, 2012; Boehler et al., 2003). ...
Thesis
This thesis proposes an integrated methodological approach for the transfer, retrieval and sharing of semantic annotations from 2D or 3D digital models of cultural heritage, combining Artificial Intelligence techniques, H-BIM environments and collaborative, reality-based annotation platforms such as Aïoli (aioli.cloud). The proposed methodology is validated on significant case studies of French and Italian architectural heritage, such as Notre-Dame de Paris cathedral and the Pisa Charterhouse. In the digital documentation of cultural and architectural heritage, the plurality of existing modes of representation, including those derived from laser scanning or photogrammetry, is a source of data dispersion. In this context, semantic annotation, understood as the association of knowledge-related (semantic) information with purely metric digital data, is essential for interpreting and sharing digital information on cultural heritage, in both 2D (images, orthophotos, drawings, etc.) and 3D (point clouds, meshes, etc.) formats. Considering Historic Building Information Modelling (H-BIM) systems and collaborative model annotation platforms such as Aïoli, this work aims to define a methodological approach that allows the transfer, retrieval and exchange of semantic annotations on 2D/3D digital models of cultural heritage. The proposed approach rests on the following three phases: i. Semi-automatic semantic segmentation (classification) of digital survey data using Artificial Intelligence (AI) algorithms. ii. 2D/3D annotation transfer. iii. 
H-BIM reconstruction, semantic structuring and insertion of localized information. In detail, the first phase applies AI algorithms to semantically enrich the digital data: raw survey data (point clouds, images, meshes) are classified and interpreted semi-automatically in order to recognize recurring architectural and typological components of a building, or to describe the state of surface degradation, map materials, etc. The information obtained is then transferred and propagated across several representation systems, from 2D media to 3D models and vice versa, also using the collaborative annotation platform Aïoli. The 2D and 3D data classified by AI are then exploited for the reconstruction of BIM models from the semantically annotated survey data (Scan-to-BIM). The digital model obtained at the end of the process is a semantically structured representation, which can then be enriched by inserting localized annotations (relating to material mapping, degradation, etc.), for example for restoration and conservation purposes. This thesis is developed within the framework of an international joint-supervision agreement between Italian and French research institutions. The proposed methodological approach is therefore evaluated on case studies representative of Italian and French architectural heritage, among them Notre-Dame de Paris cathedral and the Pisa Charterhouse. 
The results of applying the proposed methodology are evaluated by considering, case by case, the specific needs of description, analysis and restitution. A framework is outlined for the creation and use of semantically rich digital models, to be made available to conservators, engineers, architects, archaeologists, historians and other experts who are continually confronted with the fusion, processing and digital linking of heritage data.
... It is known that there are additional sources of noise in the LiDAR scanning process, especially environmental conditions, such as temperature and atmospheric pressure variations, dust, steam, or interfering radiation [45]. However, most of them have little impact on the result, especially on TLS. ...
Article
Full-text available
In this work, we present an efficient GPU-based LiDAR scanner simulator. Laser-based scanning is a useful tool for applications ranging from reverse engineering or quality control at an object scale to large-scale environmental monitoring or topographic mapping. Beyond that, other specific applications, such as autonomous driving, require a large amount of LiDAR data during development. Unfortunately, it is not easy to get a sufficient amount of ground truth data due to time constraints and available resources. However, LiDAR simulation can generate classified data at a reduced cost. We propose a parameterized LiDAR to emulate a wide range of sensor models, from airborne to terrestrial scanning. OpenGL's compute shaders are used to massively generate beams emitted by the virtual LiDAR sensors and solve their collision with the surrounding environment, even with multiple returns. Our work is mainly intended for the rapid generation of datasets for neural networks, consisting of hundreds of millions of points. The conducted tests show that the proposed approach outperforms sequential LiDAR scanning. Its capability for generating huge labeled datasets has also been shown to improve on previous studies.
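The beam-intersection core of such a simulator can be illustrated on the CPU with a vectorized ray-sphere test (the GPU version dispatches essentially the same math per beam in a compute shader; the sphere scene is a toy stand-in for real geometry):

```python
import numpy as np

def simulate_scan(origin, directions, sphere_c, sphere_r):
    """Minimal CPU stand-in for a GPU beam solver: intersect each unit-length
    ray with one sphere and return first-hit points (NaN where the beam
    misses or the sphere is behind the sensor)."""
    o = np.asarray(origin, float)
    d = np.asarray(directions, float)          # shape (n_rays, 3), unit length
    oc = o - np.asarray(sphere_c, float)
    b = (d * oc).sum(axis=1)                   # quadratic: t^2 + 2bt + c = 0
    c = oc @ oc - sphere_r ** 2
    disc = b ** 2 - c
    t = -b - np.sqrt(np.where(disc >= 0, disc, np.nan))
    t = np.where(t > 0, t, np.nan)             # keep only forward hits
    return o + t[:, None] * d
```

A full simulator would loop this over many primitives (or a BVH) and add beam divergence, noise, and multi-return logic.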
... Airborne topographic LiDAR systems measure the heights of topographic features very precisely, in the form of densely spaced point clouds acquired at pulse rates of several kHz to MHz. Through direct geo-referencing, each laser range is geo-referenced using position and orientation data measured on board by IMU and GPS sensors, providing absolute terrain-feature heights with sub-decimeter to sub-meter accuracy (Böhler et al., 2003). Airborne LiDAR systems were introduced commercially in the late 1990s with the capability of measuring height information alone; capabilities were subsequently added for measuring heights together with reflected energy, multi-return measurements from a single pulse, and recording waveforms, operating in the near-infrared region of EMR at wavelengths from 900 nm to 1550 nm (Shan and Toth, 2018, p. 165). ...
Article
The airborne topographic LiDAR sensor acquires precise, dense 3D point clouds of the terrain, along with backscattered energy in the form of intensity, at discrete spacing. Although the sensor provides both height and intensity information, only the height information has been exploited extensively to derive terrain products and applications; the intensity content has certain limitations, since not all terrain features are spectrally separable at a single wavelength and the data are not spatially continuous. However, an image generated from intensity data has advantages: it is a true ortho image and shows no positional mismatch between extracted features and height-derived information such as contours. These properties are essential for any image data used in conjunction with terrain height information to prepare topographic maps. On the other hand, the normalized DSM (nDSM) generated from the LiDAR point cloud contains geometric information and terrain-feature heights relative to the ground, enabling intra-classification of features based on height. The nDSM, however, represents feature edges poorly. This is overcome by adding monoscopic depth to the nDSM, shading it with a low sun elevation and azimuth and colouring it according to feature heights. This paper outlines improving the intensity image content by fusing it with the geometric information and monoscopic depth present in the shaded, coloured nDSM generated from the airborne LiDAR point cloud. The fused image, termed LIDARORTHO, is evaluated for its potential for generating geospatial content and for its positional accuracy.
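The nDSM and shading steps described above can be sketched as follows (a standard hillshade formula on a toy grid; the paper's exact colouring and fusion weights are not reproduced):

```python
import numpy as np

def ndsm(dsm, dtm):
    """Normalized DSM: feature heights above ground, clamped at zero."""
    return np.clip(dsm - dtm, 0.0, None)

def hillshade(z, cellsize=1.0, azimuth_deg=315.0, altitude_deg=20.0):
    """Monoscopic shading of a height grid; a low sun altitude (here 20
    degrees) exaggerates edges, as used before fusing with intensity."""
    gy, gx = np.gradient(z, cellsize)
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

Fusing this shaded relief with the intensity raster then yields an edge-sharp, true-ortho product of the LIDARORTHO kind.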
... On 18 April 2021, the measurements to carry out the normality tests were taken between 10:30 and 12:15. The environmental conditions were controlled at all times as described by [59][60][61][62][63], that is, temperature, pressure, and humidity values were recorded, and the experiment was carried out in a closed room to reduce the effects of interfering radiation. These records can be seen in Table 7. ...
Article
Full-text available
There is a growing demand for measurements of natural and built elements, with quantifiable accuracy and reliability, in various fields of application. Measurements from a 3D Terrestrial Laser Scanner come as a point cloud, from which different types of surfaces such as spheres or planes can be modelled. Due to occlusions and/or a limited field of view, it is seldom possible to survey a complete feature from one location, so information has to be acquired from multiple points of view and later co-registered and geo-referenced to obtain a consistent coordinate system. The aim of this paper is not to match point clouds, but to show a methodology to adjust 3D TLS data following traditional topo-geodetic methods, by modelling references such as calibrated spheres and checkerboards and generating a 3D trilateration network from them, in order to derive accuracy and reliability measurements and a post-adjustment statistical analysis. The method tries to find the function that best fits the measured data, taking into account not only that the measurements made in the field are not perfect, but also that each one has a different deviation depending on the adjustment of each reference, so they have to be weighted accordingly.
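The weighting the abstract describes follows the standard least-squares rule w_i = 1/σ_i². A minimal sketch for a single distance observed several times with unequal precision (the full paper adjusts a whole trilateration network, not one observable):

```python
import numpy as np

def weighted_distance_estimate(obs, sigmas):
    """Weighted least-squares estimate of one repeatedly observed distance:
    weights w_i = 1/sigma_i^2; the estimate is the weighted mean and its
    standard deviation is sqrt(1 / sum(w))."""
    obs = np.asarray(obs, float)
    w = 1.0 / np.asarray(sigmas, float) ** 2
    est = (w * obs).sum() / w.sum()
    sigma_est = np.sqrt(1.0 / w.sum())
    return est, sigma_est
```

In the network case the same weights populate the diagonal of the observation weight matrix in the adjustment.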
... Laser scanners [9], [47], [100], seen in Fig. 4, use a laser to project dot or stripe patterns over the scene. Laser scanners present sub-millimeter accuracy [22], [161], [174] and a simpler decoding procedure with respect to projector-based scanners [39]. However, laser scanners usually suffer from a slow scanning time, since the laser line needs to sweep the whole body [43]. ...
Article
Full-text available
The understanding of body measurements and body shapes in and between populations is important and has many applications in medicine, surveying, the fashion industry, fitness, and entertainment. Body measurement using 3D surface scanning technologies is faster and more convenient than measurement with more traditional methods and at the same time provides much more data, which requires automatic processing. A multitude of 3D scanning methods and processing pipelines have been described in the literature, and the advent of deep learning-based processing methods has generated an increased interest in the topic. Also, over the last decade, larger public 3D human scanning datasets have been released. This paper gives a comprehensive survey of body measurement techniques, with an emphasis on 3D scanning technologies and automatic data processing pipelines. An introduction to the three most common 3D scanning technologies for body measurement, passive stereo, structured light, and time-of-flight, is provided, and their merits w.r.t. body measurement are discussed. Methods described in the literature are discussed within the newly proposed framework of five common processing stages: preparation, scanning, feature extraction, model fitting, and measurement extraction. Synthesizing the analyzed prior works, recommendations on specific 3D body scanning technologies and the accompanying processing pipelines for the most common applications are given. Finally, an overview of about 80 currently available 3D scanners manufactured by about 50 companies, as well as their taxonomy regarding several key characteristics, is provided in the Appendix.
... Since its inception, laser scanning technology has been an object of interest for the measurement science and surveying community. The extraordinary potential of this technology was immediately highlighted, and the calibration issues of the various sensors and measurement systems (both time-of-flight and phase-shift scanners) have been deeply investigated (Boehler, Bordas Vicent, & Marbs, 2003). ...
Conference Paper
Full-text available
In several cases, in the framework of cultural heritage documentation projects that prefigure the generation of dense and detailed 3D models derived from range-based or image-based techniques, the level of detail and the surface characterization of the materials are critically important, also for evaluating the conservation status of the structures. The research presented in this paper aims to evaluate the advantages and the critical issues of using a telescopic pneumatic pole to raise the position of the scans from the ground and decrease the angle of incidence of the laser beam on the surveyed object. The study also considers the use of mini UAVs and their flexibility in acquiring the vertical surfaces of interest even at elevated heights, comparing the density and the roughness of the derived model with the one generated by the TLS technique.
Article
Full-text available
With the rapid development of 3D reconstruction, especially the emergence of algorithms such as NeRF and 3DGS, the field has become a popular research topic in recent years. 3D reconstruction technology provides crucial support for training extensive computer vision models and advancing the development of general artificial intelligence. With the development of deep learning and GPU technology, the demand for high-precision and high-efficiency 3D reconstruction is increasing, especially in the fields of unmanned systems, human-computer interaction, virtual reality, and medicine. This survey categorizes the various methods and technologies used in 3D reconstruction, exploring and classifying them from three perspectives — traditional static, dynamic, and machine learning-based — and then comparing and discussing them. The survey concludes with a detailed analysis of the trends and challenges in 3D reconstruction, aiming to provide a comprehensive introduction for those currently engaged in, or planning to conduct, research on 3D reconstruction and to help them gain a comprehensive understanding of the relevant knowledge.
Article
Full-text available
Hydrogen has the highest energy content of any common fuel, which makes it a clean energy source for the future. However, using hydrogen as a fuel raises carrier and storage issues, as hydrogen is a highly flammable, unstable gas susceptible to explosion. Explosions from hydrogen-air mixtures have been encountered and are well documented in research experiments, yet large gaps remain in this research field, and numerical tools and field experiments are required to fully understand the safety measures necessary to prevent hydrogen explosions. The purpose of the present study is to develop and simulate a 3D numerical model of an existing hydrogen gas station in Jeonju using handheld LiDAR and Ansys AUTODYN, processing the point cloud scans into an FEM 3D meshed model for numerical simulation of peak overpressures. The results show that LiDAR scanning combined with ANSYS AUTODYN can help determine the safety distance as well as construct, simulate and predict the peak overpressures of hydrogen refueling station explosions.
Article
Full-text available
Modern technologies are commonly used to inventory architectural and industrial objects (especially cultural heritage objects and sites) to generate architectural documentation or 3D models. Terrestrial Laser Scanning (TLS) is one of the standard technologies investigated for the accurate data acquisition and processing required for architectural documentation. Processing TLS data into high-resolution architectural documentation is a multi-stage process that begins with point cloud registration. In this step, corresponding points are commonly identified manually, semi-manually or automatically. Several challenges affect TLS point cloud registration: correct spatial distribution and marking of control points, automation, and robustness analysis. This is particularly important for large, complex heritage sites, where it is impossible to distribute marked control points; when orientating multi-temporal data, there is also the problem of corresponding reference points. For these reasons, automatic tie-point detection methods are necessary. This article therefore evaluates the quality and completeness of the TLS registration process using 2D raster data in the form of spherical images and Affine Hand-crafted and Learned-based detectors in multi-stage TLS point cloud registration. As test data, point clouds were used from the historic 17th-century cellars of the Royal Castle in Warsaw (without decorative structures), two baroque rooms in the King John III Palace Museum in Wilanów (with decorative elements, ornaments and materials on the walls and flat frescoes), and two modern test fields, a narrow office and an empty shopping mall. An extended Structure-from-Motion approach was used to determine the tie points for complete TLS registration and reliability analysis. 
The evaluation of the detectors demonstrates that for test sites exhibiting rich textures and numerous ornaments, a combination of AFAST, ASURF, ASIFT, SuperGlue and LoFTR can be effectively employed. For point cloud registration of less-textured buildings, it is advisable to use AFAST/ASIFT. The robust method for point cloud registration yields outcomes comparable to the conventional target-based and Iterative Closest Point methods.
Article
The interaction between laser beams and backscattering object surfaces is the fundamental working principle of any Terrestrial Laser Scanning (TLS) system. The optical properties of surfaces such as concrete, metals and wood, which are commonly encountered in structural health monitoring of buildings and structures, constitute an important category of systematic and random TLS errors. This paper presents an approach for accounting for the random errors caused by object surfaces. Two surface properties are considered: roughness and reflectance. Their effects on TLS measurements are modeled stepwise in the form of a so-called synthetic variance-covariance matrix (SVCM) based on elementary error theory. A line of work on the TLS stochastic model is continued by introducing a new approach for determining the variances and covariances in the SVCM. Real measurements of the cast-stone façade elements of a tall building are used to validate the approach and show that the quality of the estimation can be improved with an appropriate SVCM.
Article
The rapid development of intelligent driving, unmanned aerial vehicles, artificial intelligence, virtual reality and other related technologies has placed higher demands on three-dimensional image processing technology. To obtain more specific, visualized data, people are no longer restricted to two-dimensional image recognition and are paying more attention to three-dimensional object recognition and reconstruction. This article introduces the principles of a three-dimensional camera, camera calibration technology and laser triangulation based on the non-contact laser measurement method, and follows these principles through the overall hardware selection, platform construction, system calibration, model parameter analysis and evaluation of the laser scanning three-dimensional measurement system. The article develops a three-dimensional scanning reconstruction experimental device with straightforward operation and a broad range of applications, based on an integrated line laser measurement instrument. The experimental model is obtained using the method and device design described here, and an algorithm flow is proposed. The experiment uses laser measurement techniques in the specified environment to obtain the original point cloud data of the three-dimensional model and aligns the point cloud data to determine the three-dimensional coordinates. Finally, the measurement errors are examined, and the feasibility and accuracy of the method are verified by comparing measured and predicted data for three-dimensional objects. It is demonstrated that the method's relative error is less than 0.1590%, and its accuracy and productivity are adequate for the majority of commercial three-dimensional reconstruction applications.
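In its simplest planar form, the laser triangulation mentioned above reduces to solving the triangle formed by the laser, the camera and the illuminated spot. A minimal sketch follows; the geometry and angle conventions are assumptions for illustration, not the article's calibration model.

```python
import math

def triangulate(baseline, alpha, beta):
    """Planar laser triangulation via the law of sines.

    The laser sits at the origin and fires at angle `alpha` (radians, measured
    from the baseline); the camera sits at (baseline, 0) and sees the laser
    spot at angle `beta` from the baseline. Returns (x, y) of the spot.
    """
    rng = baseline * math.sin(beta) / math.sin(alpha + beta)  # laser-to-spot distance
    return rng * math.cos(alpha), rng * math.sin(alpha)

# A spot at (0.3, 0.4) seen over a 1 m baseline:
x, y = triangulate(1.0, math.atan2(0.4, 0.3), math.atan2(0.4, 0.7))
```

A real line-laser system would additionally map camera pixels to the angle `beta` through the calibrated intrinsics.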
Article
Full-text available
Stock-taking is a series of activities to calculate the stock of goods still stored in a warehouse to be marketed. It covers many activities, from counting the number of goods and conducting direct inspections to arranging stock so that business operations are easier when a certain product is needed. This is also carried out in the mining sector. Coal stock-taking is a survey activity carried out in the coal yard area to calculate the volume of the stockpile and the coal tonnage after multiplication by the density value. Stock-taking of large-dimension coal piles must be carried out quickly, accurately and in detail, which can be achieved using laser scanner technology. A laser scanner is a tool designed to scan the surface of an object and represent it in 3D as a high-density point cloud. On this basis, stock-taking calculations require measurements whose main aim is to determine the stockpile volume and density for the fourth quarter at the Adipala PLTU coal yard. The stockpile was measured using the volumetric method. A laser scanner was used to capture the shape of the stockpile area by scanning its entire surface, setting the resolution (density) of the coordinate points (x, y, z) as needed and moving the instrument so that every detail of the stockpile's curvature was measured. Based on the calculation results, the volume of the coal stockpile is 121,420,574 m³ on the west side coal yard and 88,230,355 m³ on the east side. The total coal volume of 209,650,929 m³, multiplied by the density from the bulk density survey, gives a tonnage of 180,384,417 MT.
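The volumetric method described above can be sketched, in its simplest grid-prism form, as follows; the grid, cell size and density are hypothetical values, not the survey's data.

```python
def grid_volume(heights, cell_size):
    """Volume of a stockpile from a regular grid of surface heights (metres
    above the yard floor): each grid cell contributes height * cell area."""
    cell_area = cell_size * cell_size
    return sum(h * cell_area for row in heights for h in row)

def tonnage(volume_m3, bulk_density_t_per_m3):
    """Coal tonnage = stockpile volume times bulk density."""
    return volume_m3 * bulk_density_t_per_m3

# A 2 x 2 grid of heights sampled every 0.5 m, bulk density 0.86 t/m^3:
v = grid_volume([[1.0, 2.0], [3.0, 4.0]], 0.5)
t = tonnage(v, 0.86)
```

Production software typically interpolates the scanned point cloud onto such a grid (or triangulates it) before integrating, but the volume-times-density step is the same.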
Thesis
[link to the full text of the dissertation, available as open access: https://nbn-resolving.org/urn:nbn:de:bsz:14-qucosa2-836159] New technologies for recording structures and damage are leading to the automation, and an associated efficiency gain, of inspection processes. However, adequately digitizing the recorded condition of a structure into a BIM model is currently not possible without problems. A main cause is the lack of specifications for a digital model capable of representing recorded damage. One particular problem is fuzziness in the information modeling, which does not usually occur in BIM workflows for new construction. Fuzzy information, such as the classification of detected damage or the assumption of further hidden damage, is currently evaluated manually by experts, which often requires a laborious evaluation of contextual information scattered across a multitude of building documents. An automated assessment of detected damage based on the building context has not yet been implemented in practice. This dissertation presents a concept for representing structural damage in a digital, generically structured damage model. The developed concept offers solutions to problems of current damage modeling, such as the management of heterogeneous documentation data, the versioning of damage objects, and the processing of damage geometry. The modular schema of the damage model consists of a generic core component that enables a general description of damage, independent of specifying factors such as the affected structure type or building material. To define domain-specific information, the core component can be supplemented by corresponding extension schemata. As the preferred serialization, the damage model is implemented as a knowledge-based ontology.
This permits an automated assessment of the modeled damage and context information using digitized knowledge. A knowledge-based assessment procedure is presented for evaluating fuzzy damage information. The damage assessment system developed here enables the classification of detected damage as well as the inference of implicit assessment information relevant to further maintenance planning. Furthermore, the procedure allows the assumption of undetected damage that may potentially occur inside the structure or at locations that are difficult to reach. The ontological assessment considers not only damage characteristics but also information about the building context, such as the affected component or material type and prevailing environmental conditions. Finally, to illustrate the specifications and methods developed, they are applied to two test scenarios.
Chapter
Cultural heritage is an invaluable resource for societies. Inherited from past generations, it must be preserved and safeguarded for posterity. However, it is threatened by several factors, including natural disasters and those caused by human actions. In this regard, the need for conservation of cultural heritage is an indisputable reality. Digital documentation is considered an important tool, providing precision in the recording of the physical features and peculiarities of heritage. On the other hand, when acquired documents are digitally archived, they can be used for numerous purposes, such as the conservation and management of heritage. In the instance of minor or major damage to built heritage, these archives can be highly useful in the restoration process. In recent years, due to considerable developments in technology and digital tools, the techniques used for the documentation of historical buildings have also been significantly improved, leading to a better standard of monument conservation. Accordingly, recognition and exploitation of the most recent technologies and techniques in the field of cultural heritage are of primary importance. Deploying new methods of documentation significantly reduces costs, expedites the process of surveying, and also ensures an accurate output. This paper investigates the application of digital techniques of documentation in cultural heritage conservation. Additionally, it offers an overview of the advantages and limitations of the most widely used techniques, including terrestrial laser scanning, low-cost photogrammetry methods, and the application of unmanned aerial vehicle (UAV) platforms. Keywords: Cultural heritage, Documentation techniques, Photogrammetry, Laser scanning, Unmanned aerial vehicles, 3D modeling
Book
Full-text available
Throughout the last two decades, archaeological research has undergone a profound conceptual and instrumental transformation, first because of the scientific knowledge and then due to the generalisation of numerous software and digital resources that have been conceived and applied to obtain a remarkable enhancement of our archaeological knowledge. This widespread trend began with the publication of works such as Virtual Archaeology (1997), by Maurizzio Forte, a genuine milestone in understanding how these specialities of the archaeological discipline have evolved. The increase in information provided by these multidisciplinary approaches was preceded by the application of experimental sciences to archaeology, noticeable from the 1950s onwards, with the discovery of carbon-14 and the appearance of journals such as Archaeometry. Archaeometry dominates virtually all epistemological developments in today’s archaeological sciences, and this is justified by the ease that digital technologies have brought to the process of understanding specialised archaeometric techniques. In this sense, approaches to the records from antiquity from the study of architecture, geography, topography, chemistry, and geology are being fundamental in consolidating these new research lines. Digital applications have allowed the development of these multidisciplinary perspectives, their correct combination, and the projection of their results in an attractive and easily understandable environment, in such a way that they have allowed the general understanding of complex specialities that, on their own, would only be attainable for researchers and specialists. It is on this postulate that the colloquium Scanning the hidden. LiDAR and 3D technologies applied to architecture research in the archaeology of Metal Ages was proposed in 2019, as part of the annual activities of the Metal Ages in Europe Commission of the Union Internationale des Sciences Préhistoriques et Protohistoriques (UISPP). 
These pages are a compendium of their proceedings, including their most extensive and main contributions. They summarise different works developed under these methodologies and focused on the Iberian Peninsula, mostly during the first millennium BC. Among them, I would like to highlight those produced by the project that inspired this congress: Protohistoric Architecture in the Western Spanish Plateau. Archaeotecture and Archaeometry applied to the Built Heritage of the Vettones Hillforts (HAR2016-77739-P), of the State Programme for the Promotion of Scientific and Technical Research of Excellence, Sub-programme for the Generation of Knowledge of the former Spanish Ministry of Science and Innovation. The original aim of this colloquium was drafting a key document that would bring together the main conclusions of the studies presented and the debates that would ensue, following the premises of the London Charter (2009) and, shortly afterwards, the so-called Seville Principles (2009), but focusing on the architecture of recent prehistory. Unfortunately, the COVID-19 pandemic truncated all the initial expectations, and the desired approach was strongly conditioned by the health restrictions that led to the postponement of the colloquium at the scheduled dates, June 2020, with a considerable decrease in the number of participants. The relative improvement of the pandemic situation and, paradoxically, the possibilities of digital media have allowed the colloquium to be held on the same dates as planned, but one year later. The virtual development of the working meetings over the past year has made it possible to hold this colloquium in a “semi-presential” format, whereby only the organisers and some of the speakers were physically present at the meeting, while the vast majority of contributions and keynote speeches were streamed via a well-known virtual communication platform. In this digital environment, around twenty specialists presented and defended their most recent research on protohistoric architecture, in which different digital techniques have been applied to obtain first-rate scientific data. Other works were also included, focused on diffusion and divulgation, gathered under the term “dissemination”.
Among them, we presented the results of our intense museological research on the hillforts of the western Spanish Plateau, directed by Professor Castelo, which had its counterpart in some similar national and international projects, such as the one carried out by our Italian colleagues in the region of Molise or that of the research team of the site of Monte Bernorio (Villarén de Valdivia, Palencia). This monograph has been published mainly with funding from the HAR2016-77739-P project Protohistoric Architecture in the Western Spanish Plateau. Unfortunately, the pandemic prevented the project from achieving its objectives on schedule (at the end of 2020), and the Spanish Ministry did not agree to grant the requested one-year extension, so we had to accelerate the publication process. Therefore, due to lack of time, some of the fascinating contributions that were presented and discussed at the congress are not included in these proceedings. This is the case of the lecture presented by the IAM team led by Sebastián Celestino, “What escapes sight: 3D documentation methodologies, analysis and reconstruction applied to the knowledge of the architecture of the First Iron Age in the middle Guadiana Valley”, as imponderable factors arising from the pandemic prevented their text from being ready for publication in such a short period. Something similar happened with the work carried out at El Cabezo de la Fuente del Murtal (Alhama de Murcia, Murcia). Fortunately, most contributors were able to send their papers, and they are included in these proceedings. Most of the chapters focus on the study and research of built structures (whether defensive, such as walls, or domestic, such as houses) from Iron Age settlements on the Iberian Peninsula. The application of different digital techniques and resources (GIS, remote-controlled images, 3D scans, etc.)
has made it possible to obtain innovative, high-quality data that only a few years ago would have been inconceivable with traditional research approaches. The potential of this technology has been particularly evident in “fragile” archaeological environments: buildings and structures whose original contexts are unknown, either because they are not sufficiently preserved or because they have been displaced from their original sites. This is the case of the warrior stelae from the Late Bronze Age or the “verraco” sculptures from the Late Iron Age, both structures analogous to the “urban furniture” of our towns and cities. But also the rock architecture of settlements such as La Silla del Papa (Tarifa, Cádiz) or Ulaca (Solosancho, Ávila), where the lack of strata often prevents us from documenting the original contexts in which these buildings were constructed and inhabited. Other comparable cases, such as the so-called “gatehouse” of La Mesa de Miranda (Chamartín, Ávila), lack valid stratigraphies because they were excavated many decades ago, when the archaeological methodology of record did not have the means or the notions needed for the identification and interpretation of certain stratigraphic units. Likewise, the recording and study of artefacts that are particularly delicate due to their vulnerability can be carried out thanks to photogrammetric surveys and high-precision 3D scans, as has been the case of the courtyard of the Tartessian building at Casas del Turuñuelo (Guareña, Badajoz). A no less important part of the studies presented here was oriented towards archaeotopography, assisted by orthophotos and LiDAR data. Sites analysed using this methodology have seen their data expanded to an extent that would have been inconceivable only a few decades ago.
The work of the research team of El Cabezo de la Fuente del Murtal (Alhama de Murcia, Murcia) is a good example of this, as has also been the case at other particular sites, such as Plaza de Moros (Villatobas, Toledo), El Raso de Candeleda (Ávila), or Las Merchanas (Lumbrales, Salamanca). The third group of contributions focuses on the archaeometric study of the materials used in the construction of these buildings. General works, such as those presented by Rosario García Giménez, Francisco Blanco and Gregorio Manglano, which focused on the study areas of our own project, the western Northern Plateau, have been complemented by other specific studies, such as the one dedicated to the earthen architecture of the Cantabrian Façade, in which geochemical analyses have been combined with the production of 3D models in order to identify artefacts and their possible functions. Some of these cross-cultural studies were even applied to other materials (such as objects of personal adornment) to test the effectiveness of these strategies in generating quality, verifiable information. The same was true for the analysis of vitrified, calcined or simply rubefacted stones from walls with obvious signs of fire. Important testimonies and vestiges were obtained to advance an almost unknown field of peninsular protohistoric architecture, providing not only proof of the use of wooden beams in internal frames but also the possibility, with a fair degree of verisimilitude, of the use of iron nails to bind the latter together. The last group of contributions is the first of those mentioned above: works focused on the musealisation of these vestiges from a virtual perspective, which has dramatically increased as a consequence of the COVID-19 pandemic.
Despite the quality and quantity of virtual museums that have proliferated in the last year, the participants in this colloquium were strongly in favour of supporting face-to-face visits to museums and archaeological sites. Virtual museology, like virtual archaeology, cannot be an alternative to presence, but a complement that helps and stimulates both research and cultural tourism. For this and for other reasons implied in the desire of this colloquium to contribute, through digital technology, to the advancement of higher quality research, we have drafted a series of criteria to remind and guide its application under the strictest deontological requirements. These criteria, following the old tradition of naming such agreements after the city where they were formulated, are known as the Ávila Criteria and are the key achievement of this colloquium. This event has been possible thanks to the patience and generosity of three institutions that supported us unconditionally, even in the most difficult circumstances of recent months: the Provincial Council of Ávila, the Ávila Foundation and the Universidad Autónoma de Madrid Foundation. It would not be fair to conclude this introduction without expressing our enormous gratitude to the county council member for Ávila, Eduardo Duque Pindado, president of the Committee for Culture, Heritage, Youth and Sport, who supported us from the beginning and granted us the necessary financial support to hold this colloquium. We would also like to thank María Dolores Ruíz-Ayúcar Zurdo, president of the Ávila Foundation, for the free loan of the Palacio de los Serrano, a magnificent 16th-century building that has been extraordinarily refurbished for meetings of various kinds as well as for art exhibitions and other cultural activities (BEX Award 2006).
Both she and its director, Laura Marcos, as well as the staff of the palace, facilitated the celebration of this colloquium in a semi-presential format and with all the safety measures required by law at the time. No less important has been the coverage and support received from the Universidad Autónoma de Madrid Foundation, its director, Fidel Rodríguez Batall, and its technical staff, among whom we would like to acknowledge Inmaculada Martín, coordinator of conferences and events (CongresUAM), Cristina García Recuerdo, coordinator of contracts and projects, and José Antonio Martín Bravo, from the Treasury and IT department, for their continuous availability and kindness. Together with all of them, many other colleagues and friends have supported and accompanied us in this venture, especially Professor Marta Díaz-Guardamino, from Durham University (UK), who delighted us with a splendid inaugural lecture; and Professor Dirk Brandherm, from Queen's University Belfast.
Chapter
Full-text available
The application of new technologies to the analysis of archaeological remains, in all their diversity, represents a significant qualitative leap not only in research tasks but also in the dissemination and enhancement of heritage. In this chapter we address the possibilities that three-dimensional techniques offer the researcher for gaining a better understanding of the archaeological materials of the Iron Age on the Spanish Plateau related, directly or indirectly, to construction. To this end, and as a necessary preliminary step, we compile these study materials into three categories: structural elements, decorative elements and, thirdly, tools and implements. Secondly, we describe the three-dimensional techniques and the results that can be obtained from their application to these materials.
Article
Full-text available
Digital twins of measurement systems are used to estimate their measurement uncertainty. In the past, virtual coordinate measuring machines have been extensively researched, but research on digital twins of optical systems is still lacking due to the high number of error contributors. A method for describing a digital twin of an optical measurement system is presented in this article. The discussed optical system is a laser line scanner mounted on a coordinate measuring machine. Each component of the measurement system is described mathematically: the coordinate measuring machine model focuses on hardware errors, while the laser line scanner model determines the measurement error based on scan depth, in-plane angle and out-of-plane angle. The digital twin assumes stable measurement conditions and uniform surface characteristics. Based on the Monte Carlo principle, virtual measurements can be used to determine the measurement uncertainty. This is demonstrated by validating the digital twin on a set of calibrated ring gauges. Two validation tests are performed: the first verifies the virtual uncertainty estimation by comparison with experimental data; the second validates the measured diameter of different ring gauges by comparing the estimated confidence interval with the calibrated diameter.
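The Monte Carlo principle mentioned above can be illustrated with a deliberately simplified virtual measurement of a ring gauge diameter. The error model here is a single Gaussian point-noise term, far simpler than the article's digital twin, and all parameter values are invented.

```python
import random
import statistics

def monte_carlo_diameter(true_diameter, sigma_point, n_profile, n_runs, seed=42):
    """Deliberately simplified virtual measurement of a ring gauge.

    Each virtual run samples `n_profile` radius observations perturbed by
    zero-mean Gaussian point noise and averages them into one measured
    diameter; the spread over `n_runs` runs yields the standard uncertainty,
    reported here with coverage factor k = 2 (~95 %).
    """
    gen = random.Random(seed)
    measured = []
    for _ in range(n_runs):
        radii = [true_diameter / 2 + gen.gauss(0.0, sigma_point)
                 for _ in range(n_profile)]
        measured.append(2 * statistics.fmean(radii))
    return statistics.fmean(measured), 2 * statistics.stdev(measured)

# 10 mm gauge, 0.05 mm point noise, 100 points per profile, 500 virtual runs:
mean_d, expanded_u = monte_carlo_diameter(10.0, 0.05, 100, 500)
```

A full digital twin would replace the single noise term with the propagated hardware, scan-depth and angle-dependent error models.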
Chapter
Storage tanks are among the most important storage facilities for petroleum and petrochemical products and are widely used by enterprises. As tanks age in service, deformation becomes one of their safety hazards. Traditional methods such as the radian ruler and the total station are slow to inspect with, making it difficult to meet on-site needs. In this paper, three-dimensional laser scanning is applied to tank deformation detection; the characteristics of the point cloud distribution of tank deformation are analyzed, and the deformation of the tank is evaluated using the point cloud data. The results show that 3D laser scanning technology can effectively detect tank deformation and provide technical support for improving tank risk control capability.
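Evaluating tank deformation from point cloud data, as described above, often comes down to the radial deviation of cross-section points from the nominal shell radius. A minimal sketch follows, assuming the nominal center and radius are known; the coordinates are invented for illustration.

```python
import math

def radial_deviations(points, center, nominal_radius):
    """Radial deviation of each cross-section point of a tank shell from the
    nominal radius: positive = outward bulge, negative = inward dent."""
    cx, cy = center
    return [math.hypot(x - cx, y - cy) - nominal_radius for x, y in points]

def out_of_roundness(points, center, nominal_radius):
    """Peak-to-valley spread of the radial deviations."""
    dev = radial_deviations(points, center, nominal_radius)
    return max(dev) - min(dev)

# Four points on a nominally 5 m shell centered at (1, 1); one point bulges 20 mm:
pts = [(6.0, 1.0), (1.0, 6.0), (-4.0, 1.0), (1.0, -4.02)]
spread = out_of_roundness(pts, (1.0, 1.0), 5.0)
```

In practice the center itself would first be estimated from the point cloud (e.g. by a circle or cylinder fit) rather than assumed.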
Article
Full-text available
In recent times, interest in the study of engineering structures has been on the rise as a result of improvements in the tools used for operations such as as-built mapping, deformation studies and modeling for navigation. There is a need to model structures in such a way that accurate information about the positions of structures, features and points, and about dimensions, can be easily extracted without paying physical visits to the site to measure the various components of structures. In this project, the data acquisition system used is the terrestrial laser scanner, a High Definition Surveying (HDS) instrument; the methodology employed is similar to Close Range Photogrammetry (CRP). CRP is a budding technique used for data acquisition in Geomatics. A subset of general photogrammetry, it is often loosely tagged terrestrial photogrammetry. Terrestrial laser scanning technology is a data acquisition system similar to CRP in terms of designing the positioning of instrument and targets, calibration, ground control points, speed of data acquisition, data processing (interior, relative and absolute orientation) and the accuracy obtainable. The aim of this project was to generate three-dimensional models of structures in the Faculty of Engineering, University of Lagos using High Definition Surveying; the Leica ScanStation 2 HDS equipment was used along with Cyclone software for data acquisition and processing. The result was a 3D (point cloud) view of the structure under study, from which features were measured and compared with physical measurements on site. The technology of the laser scanner proved to be quite useful and reliable in generating three-dimensional models without compromising accuracy and precision. The generated 3D models replicate the reality of the structures with accurate dimensions and locations.
Article
Full-text available
Terrestrial laser imaging systems offer a new means for the rapid, precise mapping of objects at ranges of up to a few hundred metres from the instrument location. The high sampling frequency (e.g., several kilohertz) available from such instrumentation can provide a spatial data density of directly observed co-ordinates far in excess of that available with photogrammetric techniques. For structural monitoring applications, this permits measurement of entire surfaces rather than a few discrete points, thus providing a wealth of information about the deformation modes of a body. This paper conveys the findings of a preliminary study of the resolution and accuracy of a commercially available laser-scanning system. Testing was conducted on a first-order EDM calibration baseline and on a three-dimensional deformation-monitoring network. Single-point range accuracies of ±3-5 cm (1σ) were achieved. Evidence of uncompensated systematic errors, probably due to instrument set-up errors and target centre reduction, was detected.
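The baseline testing described above essentially amounts to computing the mean and 1σ spread of range residuals against calibrated distances. A minimal sketch with made-up numbers (not the study's data):

```python
import statistics

def range_error_stats(measured, reference):
    """Mean error and 1-sigma standard deviation of scanner ranges compared
    against calibrated baseline distances."""
    residuals = [m - r for m, r in zip(measured, reference)]
    return statistics.fmean(residuals), statistics.stdev(residuals)

# Four baseline distances and the corresponding scanner ranges (metres):
mean_err, sigma = range_error_stats([10.03, 20.01, 29.98, 40.02],
                                    [10.0, 20.0, 30.0, 40.0])
```

A non-zero mean error hints at an uncompensated systematic effect (e.g. a zero offset), while the standard deviation characterizes the random component.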