Figure 1 - uploaded by F. A. Kruse
LiDAR point cloud data view showing individual 3D feature level information. Individual points are color coded by Mean Sea Level (MSL) height.


Source publication
Conference Paper
The advent of Light Detection and Ranging (LiDAR) point cloud collection has significantly improved the ability to model the world in precise, fine, three-dimensional detail. The objective of this research was to demonstrate accurate, foundational methods for fusing LiDAR data and photogrammetric imagery and their potential for change detection. Th...

Contexts in source publication

Context 1
... point cloud data can contain a great deal of information in 3D space [1]. If dense enough, it can be very useful at the individual feature level and can provide realistic views that can be rotated and viewed from many angles (Figure 1). Experts from the Computer Vision, Photogrammetry, and LiDAR communities all have an interest in solving problems of LiDAR and imagery fusion [2][3][4][5]. ...
Context 2
... following figure (Figure 10) represents only those objects that remain the same. The technique is to remove the change outliers, keeping only things that stayed in the same location. ...
Context 3
... Figure 10. Display of the first-cut edit of the difference data, clipped at plus and minus 2 meters to remove large outliers. ...
Context 4
... "holes" are where the outliers existed. Amazingly, 34% of the points have been clipped but the contents of the scene are still discernible (Figure 10). ...
Context 5
... further clipping the outlier-free data using the statistical 95% confidence interval equation (mean plus or minus two times the standard deviation), subtle differences are detected, almost at the sensor characteristic level. These subtle differences are shown in the following Figure 11. ...
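The clipping rule quoted above (keep only differences within the mean plus or minus two standard deviations) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code, and the synthetic difference values are made up:

```python
import numpy as np

def clip_differences(dz, n_sigma=2.0):
    """Keep only difference values inside mean +/- n_sigma * std.

    dz: 1-D array of per-point elevation differences (e.g., LiDAR minus
    photogrammetric heights). Returns the retained values and a boolean
    mask marking which points survived the clip.
    """
    mu, sigma = dz.mean(), dz.std()
    lo, hi = mu - n_sigma * sigma, mu + n_sigma * sigma
    mask = (dz >= lo) & (dz <= hi)
    return dz[mask], mask

# Synthetic example: mostly small differences plus a few large outliers.
rng = np.random.default_rng(0)
dz = np.concatenate([rng.normal(0.0, 0.1, 1000), np.array([5.0, -4.0, 6.0])])
kept, mask = clip_differences(dz)
print(f"clipped {100 * (1 - mask.mean()):.1f}% of points")
```

Because the outliers inflate the standard deviation, a single pass like this can leave residual noise; the paper's two-stage approach (a coarse threshold first, then the confidence-interval clip) addresses exactly that.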
Context 6
... subtle differences are shown in the following Figure 11. Figure 11. Display of the difference data after further clipping at the 95% confidence level. ...
Context 7
... Figure 11, notice the 50 cm difference (red) in the driveway on the upper right of the image (Location 1, identified with the red arrow). Next, there is a -50 cm difference (blue) in the walkway on the extreme upper right of the image (Location 2, identified with the red arrow). ...
Context 8
... and inspection of the data was performed in a similar fashion to that performed with the NPS data. The following figure (Figure 12) shows a side-by-side comparison of the LiDAR point cloud and WV-1 stereo extracted point cloud. ...
Context 9
... following figure (Figure 13) is the "first cut" removal of the outliers using a plus and minus 10 foot threshold around the approximate minus 4 foot bias. Fifteen percent (15%) of the points have been clipped as a result of this outlier removal; what remains are things that have not changed dramatically. ...
Context 10
... (2008). The area that has changed, in the upper right corner, stands out very noticeably as rectangular blue areas (Figure 13, red box). This is seen more clearly in Figure 14. ...
Context 11
... area that has changed, in the upper right corner, stands out very noticeably as rectangular blue areas (Figure 13, red box). This is seen more clearly in Figure 14. It is easy to pick out the 7 houses in the upper right corner of Figure 13 and the central part of Figure 14 that were built in the 2005 to 2008 time frame (normal, time-variant change). ...
Context 12
... is seen more clearly in Figure 14. It is easy to pick out the 7 houses in the upper right corner of Figure 13 and the central part of Figure 14 that were built in the 2005 to 2008 time frame (normal, time-variant change). Notice that even with the predominant noise around the homes and in the tree areas, the real change is easily noticeable as large "holes" in the data. ...

Similar publications

Article
The scope of the paper is to present invariant quantities of point clouds, that is, functions which take the same value when the point cloud is transformed via a matrix. These invariants are addressed to certain variables, the construction of which is based on the least squares lines, drawn for each set of points, and they are changed by means of no...

Citations

... However, for point clouds, the Euclidean distance is the most commonly used. For example, distances between points were directly calculated using the CloudCompare software (CloudCompare, 2021; Girardeau-Montaut et al., 2005) to compare LiDAR and photogrammetric point clouds (Basgall et al., 2014). Here, cloud-to-cloud (C2C) distance and multiscale model-to-model cloud comparison (M3C2) distance are usually considered (Lague et al., 2013). ...
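The C2C measure mentioned in this citation reduces to a nearest-neighbor search: for each point in the compared cloud, find the closest point in the reference cloud. The brute-force NumPy sketch below is illustrative only (CloudCompare itself uses octree-accelerated searches for large clouds), and the example clouds are invented:

```python
import numpy as np

def c2c_distances(src, ref):
    """Cloud-to-cloud (C2C) distances: for every point in `src`, the
    Euclidean distance to its nearest neighbor in `ref`.

    Brute-force O(N*M) version for small clouds; production tools use
    spatial indexing (octree / k-d tree) instead.
    """
    # Pairwise distance matrix via broadcasting: (N, 1, 3) - (1, M, 3).
    diff = src[:, None, :] - ref[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    return dist.min(axis=1)

# Tiny example: a reference cloud and a vertically shifted copy.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
src = ref + np.array([0.0, 0.0, 0.5])   # 0.5 m vertical offset
print(c2c_distances(src, ref))           # -> [0.5 0.5 0.5]
```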
Article
Over recent decades, 3D point clouds have been a popular data source applied in automatic change detection in a wide variety of applications. Compared with 2D images, using 3D point clouds for change detection can provide an alternative solution offering different modalities and enabling a highly detailed 3D geometric and attribute analysis. This article provides a comprehensive review of point-cloud-based 3D change detection for urban objects. Specifically, in this study, we had two primary aims: (i) to ascertain the critical techniques in change detection, as well as their strengths and weaknesses, including data registration, variance estimation, and change analysis; (ii) to contextualize the up-to-date uses of point clouds in change detection and to explore representative applications of land cover and land use monitoring, vegetation surveys, construction automation, building and indoor investigations, and traffic and transportation monitoring. A workflow following the PRISMA 2020 rules was applied for the search and selection of reviewed articles, with a brief statistical analysis of the selected articles. Additionally, we examined the limitations of current change detection technology and discussed current research gaps between state-of-the-art techniques and engineering demands. Several remaining issues, such as the reliability of datasets, uncertainty in results, and contribution of semantics in change detection, have been identified and discussed. Ultimately, this review sheds light on prospective research directions to meet the urgent needs of anticipated applications.
... A point-to-point comparison, also known as the surface difference, is the most direct way of detecting changes between two point clouds. Basgall et al. (2014) utilized a subtraction method to calculate differences between LiDAR and stereo-photogrammetric point clouds. The Hausdorff distance was used by Kang et al. (2013) to calculate point-to-point distances in order to avoid issues related to local density variations. ...
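The Hausdorff distance cited here builds on the same nearest-neighbor minima, but then takes the worst case over a cloud rather than a per-point value. The sketch below is a minimal illustrative version, not the cited authors' implementation:

```python
import numpy as np

def directed_hausdorff(a, b):
    """Directed Hausdorff distance h(A, B): the worst-case distance from
    any point of A to its nearest neighbor in B."""
    diff = a[:, None, :] - b[None, :, :]
    nearest = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)
    return nearest.max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance: max of the two directed distances."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# Two tiny clouds that agree except for one displaced point.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 2.0]])
print(hausdorff(a, b))  # -> 2.0
```

Because it reports a single worst-case value per region rather than per-point offsets, the Hausdorff distance is less sensitive to local density variations between the two clouds.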
Article
Automated change detection based on urban mobile laser scanning data is the foundation for a whole range of applications such as building model updates, map generation for autonomous driving and natural disaster assessment. The challenge with mobile LiDAR data is that various sources of error, such as localization errors, lead to uncertainties and contradictions in the derived information. This paper presents an approach to automatic change detection using a new category of generic evidence grids that addresses the above problems. Said technique, referred to as fuzzy spatial reasoning, solves common problems of state-of-the-art evidence grids and also provides a method of inference utilizing fuzzy Boolean reasoning. Based on this, logical operations are used to determine changes and combine them with semantic information. A quantitative evaluation based on a hand-annotated version of the TUM-MLS data set shows that the proposed method is able to identify confirmed and changed elements of the environment with F1-scores of 0.93 and 0.89.
... The most direct way of detecting changes between 3D data is a point-to-point comparison, which is also denoted as surface difference. [15] obtained changes by directly calculating the difference between LiDAR and stereo-photogrammetric point clouds using the CloudCompare software. The changes of single buildings were detected by visual inspection. ...
Article
As the key to construction progress monitoring, methods and strategies for change detection using 3D point clouds from various sources have been investigated for years. However, how to achieve object-level change detection with uncertainty evaluation is still an unsolved topic. Occlusions and noise in 3D points and other attribute information, such as colors, lead to problems in the task of change detection. In this paper, we present a semantic-aided change detection method aimed at monitoring construction progress using UAV-based photogrammetric point clouds. Our framework consists of two key parts, which identify changes in a progressive manner: The first part consists of the detection of geometric changes using occupancy-based spatial difference identification, which indicates the changes of occupancy in 3D space, comprising changes in appearance or shape of building objects. In occupancy-based change detection, occupancy conflicts of occupied space and empty space along the viewing rays of cameras can be detected by considering the sensor positions. At the same time, occlusions can be handled implicitly. The second part involves changes in semantics, which are used to detect changes where the occupancy-based change detection is not sufficient for presenting changes, due to limitations of parameter settings or lack of attribute information. By utilizing semantic segmentation results presented by class probabilities, the uncertainty of the semantic changes can be estimated. For the detection of geometric and semantic changes, Dempster-Shafer theory is applied to fuse information from data acquired in different time epochs to detect changes. Using the two different types of changes, we can fully consider the changes that may happen at construction sites and define the differences between the changes.
By utilizing the proposed change detection methods, changes with different characteristics, including geometric changes and semantic changes, can be correctly identified. In a specific example, for a construction period from Dec 12, 2014 to Jan 16, 2015, 97.8% of the changed areas could be successfully detected. For the other construction period, from Jan 16, 2015 to Feb 26, 2015, 93.6% of changes were correctly detected.
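Dempster's rule of combination, which the article above applies to fuse evidence from different epochs, can be sketched for a two-hypothesis frame {changed, unchanged}. The mass values below are invented for illustration and are not taken from the article:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses. Mass assigned to conflicting
    (empty-intersection) pairs is discarded and the rest renormalized."""
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative (made-up) masses from two epochs' evidence about one cell;
# mass on the full frame `theta` expresses ignorance.
theta = frozenset({"changed", "unchanged"})
m1 = {frozenset({"changed"}): 0.6, frozenset({"unchanged"}): 0.3, theta: 0.1}
m2 = {frozenset({"changed"}): 0.5, frozenset({"unchanged"}): 0.4, theta: 0.1}
fused = combine(m1, m2)
print(round(fused[frozenset({"changed"})], 3))  # -> 0.672
```

Fusing the two sources concentrates belief on "changed" while retaining a small mass on the full frame, which is how the uncertainty of the decision can be carried through the pipeline.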
... The point clouds derived from UAV photogrammetry and LiDAR vary in several respects, and several research works have been conducted for accuracy assessment of UAV-based photogrammetric and LiDAR point clouds [13,47,48]. Moreover, photogrammetric and LiDAR point clouds have been compared for many applications such as change detection [49], flood mapping [50], agriculture [51,52], forestry applications [53][54][55][56][57] and urban tree species mapping [41]. ...
Article
Estimation of urban tree canopy parameters plays a crucial role in urban forest management. Unmanned aerial vehicles (UAVs) have been widely used for many applications, particularly forestry mapping. UAV-derived images, captured by an onboard camera, provide a means to produce 3D point clouds using photogrammetric mapping. Similarly, small UAV-mounted light detection and ranging (LiDAR) sensors can also provide very dense 3D point clouds. While point clouds derived from both photogrammetric and LiDAR sensors allow the accurate estimation of critical tree canopy parameters, so far a comparison of both techniques has been missing. Point clouds derived from these sources vary according to differences in data collection and processing; a detailed comparison of the point clouds in terms of accuracy and completeness, in relation to tree canopy parameters, is therefore necessary. In this research, point clouds produced by UAV photogrammetry and UAV LiDAR over an urban park, along with the estimated tree canopy parameters, are compared, and results are presented. The results show that the UAV-photogrammetry and LiDAR point clouds are highly correlated, with an R² of 99.54%, and the estimated tree canopy parameters are correlated with R² higher than 95%.
... Furthermore, LiDAR data provide only sparse depth information for buildings or rooftops. This affects precise identification and measurement, as the exact corners of building constructions usually do not contain collected points [10]. ...
... At the end, each region is classified by a set of category-specific linear Support Vector Machines (SVMs). The schematic representation of the described architecture, named Region-based CNN (R-CNN), is illustrated in Figure 2.10. ...
... As was mentioned in Section 4.3, for quantitative evaluation a small area of around 0.5 km² was selected (see Figure 4.9). The predicted results and ground truth of this area are presented in Figure 4.10. ...
Thesis
Building information extraction and reconstruction from satellite images is an essential task for many applications related to 3D city modeling, planning, disaster management, navigation, and decision-making. Building information can be obtained and interpreted from several data sources, such as terrestrial measurements, airplane surveys, and space-borne imagery. However, the latter acquisition method outperforms the others in terms of cost and worldwide coverage: space-borne platforms can provide imagery of remote places, which are inaccessible to other missions, at any time. Because the manual interpretation of high-resolution satellite imagery is tedious and time consuming, its automatic analysis continues to be an intense field of research. At times, however, it is difficult to understand complex scenes with dense placement of buildings, where parts of buildings may be occluded by vegetation or other surrounding constructions, making their extraction or reconstruction even more difficult. Incorporation of several data sources representing different modalities may facilitate the problem. The goal of this dissertation is to integrate multiple high-resolution remote sensing data sources for automatic satellite imagery interpretation, with emphasis on building information extraction and refinement.
... Surface differencing is used to define the potential change locations, followed by a more accurate post-classification to recognize the specific types of changes (Lu et al., 2004). Basgall et al. (2014) compared laser points and dense matching points with the CloudCompare software. Single building changes were detected by visual inspection. ...
Article
Airborne photogrammetry and airborne laser scanning are two commonly used technologies for topographical data acquisition at the city level. Change detection between airborne laser scanning data and photogrammetric data is challenging since the two point clouds show different characteristics. After comparing the two types of point clouds, this paper proposes a feed-forward Convolutional Neural Network (CNN) to detect building changes between them. The motivation from an application point of view is that the multimodal point clouds might be available for different epochs. Our method contains three steps: First, the point clouds and orthoimages are converted to raster images. Second, square patches are cropped from the raster images and then fed into the CNN for change detection. Finally, the original change map is post-processed with a simple connected component analysis. Experimental results show that the patch-based recall rate reaches 0.8146 and the precision rate reaches 0.7632. Object-based evaluation shows that 74 out of 86 building changes are correctly detected.
... Photogrammetric and LiDAR mapping techniques present different advantages and disadvantages, as detailed in [7] and [9]. Seeing through the canopy: LiDAR acquisitions are able to penetrate dense forests [10]. This allows LiDAR to map the topography of the terrain with high accuracy. ...
... Photogrammetry vs Lidar Photogrammetry and Lidar mapping techniques present different benefits and drawbacks as detailed in [7] and [9]: ...
Thesis
This thesis studies the computer vision problem of image registration in the context of geological remote sensing surveys. More precisely, in this work we have two images picturing the same geographical scene but acquired from two different viewpoints and possibly at different times. The task of registration is to associate to each pixel of the first image its counterpart in the second image. While this problem is relatively easy for human beings, solving it with a computer remains an open problem. Numerous approaches to address this task have been proposed. The most promising techniques formulate the task as a numerical optimization problem. Unfortunately, the number of unknowns, along with the nature of the objective function, makes the optimization problem extremely difficult to solve. This thesis investigates two approaches, along with a coarsening scheme, to solve the underlying numerical problem.
... This paper only describes the NPS Remote Sensing Center research effort and selected results from that research. The full project included many other aspects, some of which are summarized in other papers presented at SPIE Defense and Security 2014 [22][23][24][25]. Partners in the research who contributed significantly to the overall project include the NPS Virtualization and Cloud Computing Lab, the NPS Hastily Formed Networks (HFN) group, the NPS CORE Lab, the NPS Center for Asymmetric Warfare (CAW), NOAA/NGDC, and San Diego State University. ...
Conference Paper
The Naval Postgraduate School (NPS) Remote Sensing Center (RSC) and research partners have completed a remote sensing pilot project in support of California post-earthquake-event emergency response. The project goals were to dovetail emergency management requirements with remote sensing capabilities to develop prototype map products for improved earthquake response. NPS coordinated with emergency management services and first responders to compile information about essential elements of information (EEI) requirements. A wide variety of remote sensing datasets including multispectral imagery (MSI), hyperspectral imagery (HSI), and LiDAR were assembled by NPS for the purpose of building imagery baseline data and to demonstrate the use of remote sensing to derive ground surface information for use in planning, conducting, and monitoring post-earthquake emergency response. WorldView-2 data were converted to reflectance, orthorectified, and mosaicked for most of Monterey County, CA. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data acquired at two spatial resolutions were atmospherically corrected and analyzed in conjunction with the MSI data. LiDAR data at point densities from 1.4 pts/m² to over 40 pts/m² were analyzed to determine digital surface models. The multimodal data were then used to develop change detection approaches and products and other supporting information. Analysis results from these data, along with other geographic information, were used to identify and generate multi-tiered products tied to the level of post-event communications infrastructure (internet access + cell, cell only, no internet/cell). Technology transfer of these capabilities to local and state emergency response organizations gives emergency responders new tools in support of post-disaster operational scenarios.