Figure 2 - uploaded by François Goulette
Local Descriptor gi  

Source publication
Article
Full-text available
This article addresses the problem of denoising 3D data from LIDAR. It is a step often required to allow a good reconstruction of surfaces represented by point clouds. In this paper, we present an original algorithm inspired by a recent method developed by (Buades and Morel, 2005) in the field of image processing, the Non Local Denoising (NLD). Wit...

Context in source publication

Context 1
... explained in (Pauly et al., 2002), eigenvalues represent the configuration of the points around the plane Hi, and if the local neighborhood distributions of two points pi and pj are the same up to a certain rotation, we know that gi and gj will be identical in their local frames Fi and Fj. Figure 2 shows, for a red point pi, the plane Hi in gray, the barycenter bi in blue, the bilateral polynomial gi in black and the denoised surface in green. ...
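The eigenvalue-based local configuration described in this excerpt can be sketched with a small PCA routine. The function below is an illustrative reconstruction, not the paper's code: the name `local_plane_descriptor`, the brute-force neighbor search, and the choice of k are all assumptions.

```python
import numpy as np

def local_plane_descriptor(points, center_idx, k=8):
    """Fit the local plane H_i of a point p_i by PCA on its k nearest
    neighbors, returning the barycenter b_i, the plane normal, and the
    sorted eigenvalues that summarize the local point configuration."""
    p_i = points[center_idx]
    # k nearest neighbors by Euclidean distance (brute force for clarity)
    d = np.linalg.norm(points - p_i, axis=1)
    nbrs = points[np.argsort(d)[:k]]
    b_i = nbrs.mean(axis=0)                  # barycenter of the neighborhood
    cov = np.cov((nbrs - b_i).T)             # 3x3 covariance of centered neighbors
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest eigenvalue -> normal of H_i
    return b_i, normal, eigvals
```

For a perfectly planar neighborhood the smallest eigenvalue vanishes and the corresponding eigenvector is the plane normal, which is what makes the eigenvalues a rotation-tolerant summary of the local configuration.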

Similar publications

Article
Full-text available
Today, creating or maintaining forest structural complexity is a management paradigm in many countries due to the positive relationships between structural complexity and several forest functions and services. In this study, we tested whether the box-dimension (Db), a holistic and objective measure to describe the structural complexity of trees or...
Article
Full-text available
The climatologies of the stratopause height and temperature in the UA‐ICON model are examined by comparing them to 17‐years (2005–2021) of Microwave Limb Sounder (MLS) observations. In addition, the elevated stratopause (ES) event occurrence, their main characteristics, and driving mechanisms in the UA‐ICON model are examined using three 30‐year ti...
Article
Full-text available
The Metrology Light Source (MLS), situated in Berlin (Germany) is owned by the Physikalisch-Technische Bundesanstalt and was built/is operated by the Helmholtz-Zentrum Berlin. It is an electron storage ring operating from 105 MeV to 630 MeV. The MLS serves as the national primary source standard from the near infrared to the extreme ultraviolet spe...
Article
Full-text available
We present ozone measurements made using state-of-the-art ultraviolet photometers onboard three long-duration stratospheric balloons launched as part of the Concordiasi campaign in austral spring 2010. Ozone loss rates calculated by matching air parcels sampled at different times and places during the polar spring are in agreement with rates previo...
Article
Full-text available
Nitrous oxide (N2O) is a potent and long-lived greenhouse gas that contributes to global warming with a global warming potential (GWP) 298 times that of carbon dioxide (CO2). In this paper, we analyzed the trend of N2O concentration in vertical layers of the stratosphere from 2005 to 2020 using the N2O observed from the Microwave Limb Sounder (MLS)...

Citations

... At present, 3D point cloud denoising methods can be broadly categorized as moving least squares-based, local optimal projection-based, sparsity-based, graph-based and non-local-based [21]. The method of [21] is based on a local geometric descriptor and looks for similar points to reduce noise. ...
Article
Full-text available
Higher-Order Singular Value Decomposition (HOSVD) is an effective method for point cloud denoising; however, how to preserve global and local structure during denoising, and how to balance denoising performance against its inherent large computational burden, are still open questions in the field. To tackle these problems, an adaptive higher-order singular value decomposition method, AdaHOSVD, comprising two sub-algorithms HOSVD-1 and HOSVD-2, is proposed in this work by adaptively setting the threshold value used to truncate the kernel tensor and by limiting the patch-similarity search to a given search radius. Since a point cloud lies in 3D space rather than on a 2D plane as in the image case, we extend patch-similarity detection in 3D space up to a 3D rigid motion; hence, more similar 3D patches can be detected, which in turn boosts performance. We validate our method on two datasets. One is the 3D benchmark dataset comprising ShapeNetCore and the 3D scanning repository of Stanford University, which contains a large body of diverse, high-quality shapes used to assess noise sensitivity; the other comprises the Golden Temple and the Electric hook, which contain a large temple structure with abundant repeated local textural and shape patterns.
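The kernel-tensor truncation step that AdaHOSVD adapts can be sketched as a generic hard-thresholded HOSVD on a 3rd-order patch tensor. This is not the paper's adaptive algorithm: the function name and the fixed threshold `eps` (standing in for their adaptively chosen value) are assumptions for illustration.

```python
import numpy as np

def hosvd_truncate(T, eps):
    """Denoise a 3rd-order tensor by HOSVD: compute the mode factors
    via SVD of each unfolding, project T onto them to get the core
    tensor, hard-threshold small core entries, and reconstruct."""
    Us = []
    for mode in range(3):
        # mode-n unfolding: bring `mode` to the front, flatten the rest
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        Us.append(U)
    core = np.einsum('abc,ai,bj,ck->ijk', T, Us[0], Us[1], Us[2])
    core[np.abs(core) < eps] = 0.0          # zero out small (noisy) coefficients
    return np.einsum('ijk,ai,bj,ck->abc', core, Us[0], Us[1], Us[2])
```

With `eps = 0` the transform is lossless (the factor matrices are orthogonal), so the reconstruction equals the input; increasing `eps` discards weak core coefficients, which is where the denoising happens.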
... Inspired by image denoising, researchers have also investigated the use of non-local data in point cloud denoising. Non-local-based point cloud filtering methods [24][25][26][27] often incorporate normal information and use various similarity definitions to update point positions in a non-local manner. Thus, Ref. [24] proposes a similarity descriptor for point cloud patches based on MLS surfaces. ...
... Ref. [25] designs a height vector field to describe the difference between the neighborhood of a point and the neighborhoods of other points on the surface. ...
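The non-local update these excerpts describe can be illustrated with a toy descriptor-weighted averaging step. This is a deliberate simplification of the cited methods (which update points along normals or height fields rather than averaging raw positions); the function name and Gaussian bandwidth `h` are assumptions.

```python
import numpy as np

def nonlocal_update(positions, descriptors, h=0.5):
    """One toy non-local denoising step: each point moves toward a
    weighted average of all points, with weights that decay with
    descriptor distance (Gaussian kernel of bandwidth h), so only
    points with similar local geometry contribute significantly."""
    diff = descriptors[:, None, :] - descriptors[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)          # pairwise descriptor distances
    w = np.exp(-d2 / (h * h))                # similarity weights
    w /= w.sum(axis=1, keepdims=True)        # normalize each row to sum to 1
    return w @ positions                     # weighted position update
```

The key property is that averaging is gated by descriptor similarity, not spatial proximity: two far-apart points with identical local geometry denoise each other.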
Article
Full-text available
While a popular representation of 3D data, point clouds may contain noise and need filtering before use. Existing point cloud filtering methods either cannot preserve sharp features or result in uneven point distributions in the filtered output. To address this problem, this paper introduces a point cloud filtering method that considers both point distribution and feature preservation during filtering. The key idea is to incorporate a repulsion term with a data term in energy minimization. The repulsion term is responsible for the point distribution, while the data term aims to approximate the noisy surfaces while preserving geometric features. This method is capable of handling models with fine-scale features and sharp features. Extensive experiments show that our method quickly yields good results with relatively uniform point distribution.
... The non-local self-similarity methods proposed for images [10,16] were extended to denoising of point clouds. Different patch representations are used, such as polynomial surfaces [17], variations in height fields [18], local displacements [28,59], local probing fields [19], point normals [46], and sampled collaborative points [56], which exploit the similarity between non-local surfaces. ...
Article
Full-text available
3D point cloud denoising is a fundamental task in a geometry-processing pipeline, where feature preservation is essential for various applications. The literature presents several methods to overcome the denoising problem; however, most of them focus on denoising smooth surfaces and not on handling sharp features correctly. This paper proposes a new sharp feature-preserving method for point cloud denoising that incorporates solutions for normal estimation and feature detection. The denoising method consists of four major steps. First, we compute the per-point anisotropic neighborhoods by solving local quadratic optimization problems that penalize normal variation. Second, we estimate a piecewise smooth normal field that enhances sharp feature regions using these anisotropic neighborhoods. This step includes bilateral filtering and a novel corrector procedure to obtain more reliable normals for the subsequent steps. Third, we employ a novel sharp feature detection algorithm to select the feature points precisely. Finally, we update the point positions to fit them to the computed normals while retaining the sharp features that were detected. These steps are repeated until the noise is minimized. We evaluate our method using qualitative and quantitative comparisons with state-of-the-art denoising, normal estimation, and feature detection procedures. Our experiments show that our approach is competitive and, in most test cases, outperforms all other methods.
... Meanwhile, FPFH has 33 dimensions, SHOT has 352, and RoPS has 135. Although FPFH has only 33 dimensions, its extraction time is high because SPFH is calculated twice [47]. ...
Article
Full-text available
Point cloud registration (PCR) is a vital problem in remote sensing and computer vision, which has various important applications, such as 3D reconstruction, object recognition, and simultaneous localization and mapping (SLAM). Although scholars have investigated a variety of methods for PCR, the applications have been limited by low accuracy, high memory footprint, and slow speed, especially for dealing with a large number of point cloud data. To solve these problems, a novel local descriptor is proposed for efficient PCR. We formed a comprehensive description of local geometries with their statistical properties on a normal angle, dot product of query point normal and vector from the point to its neighborhood point, the distance between the query point and its neighborhood point, and curvature variation. Sub-features in descriptors were low-dimensional and computationally efficient. Moreover, we applied the optimized sample consensus (OSAC) algorithm to iteratively estimate the optimum transformation from point correspondences. OSAC is robust and practical for matching highly self-similar features. Experiments and comparisons with the commonly used descriptor were conducted on several synthetic datasets and our real scanned bridge data. The result of the simulation experiments showed that the rotation angle error was below 0.025° and the translation error was below 0.0035 m. The real dataset was terrestrial laser scanning (TLS) data of Sujiaba Bridge in Chongqing, China. The results showed the proposed descriptor successfully registered the practical TLS data with the smallest errors. The experiments demonstrate that the proposed method is fast with high alignment accuracy and achieves a better performance than previous commonly used methods.
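The statistics-based sub-features this abstract lists (normal angle, normal/offset dot product, point-to-neighbor distance) might be assembled roughly as follows. The histogram binning and the function name are illustrative assumptions, not the paper's exact descriptor (curvature variation is omitted here for brevity).

```python
import numpy as np

def local_geometry_features(p, n_p, nbr_pts, nbr_normals, bins=5):
    """Sketch of a statistics-based local descriptor: histograms of
    (a) the angle between the query normal and each neighbor normal,
    (b) the dot product of the query normal with the unit vector from
    the query point to each neighbor, and (c) the neighbor distance."""
    v = nbr_pts - p
    dist = np.linalg.norm(v, axis=1)
    v_unit = v / dist[:, None]
    cos_nn = np.clip(nbr_normals @ n_p, -1.0, 1.0)
    angle = np.arccos(cos_nn)                      # normal angle sub-feature
    dot_nv = v_unit @ n_p                          # normal/offset dot product
    feats = [np.histogram(x, bins=bins, range=r)[0]
             for x, r in ((angle, (0, np.pi)),
                          (dot_nv, (-1, 1)),
                          (dist, (0, dist.max())))]
    return np.concatenate(feats).astype(float)
```

Each sub-feature is a short histogram, so the concatenated descriptor stays low-dimensional (3 × bins values here), which is the computational advantage the abstract emphasizes.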
... With the help of non-local similarities, these methods are able to reduce noise while preserving the surface details. To calculate the point similarity better, [25] describes the neighborhood of each point by the polynomial coefficients of the local MLS surface. [26] proposed to smooth point clouds by solving a structured low-rank matrix factorization problem, where a lowrank dictionary representation of patches is optimized. ...
Preprint
3D dynamic point clouds provide a discrete representation of real-world objects or scenes in motion, which have been widely applied in immersive telepresence, autonomous driving, surveillance, etc. However, point clouds acquired from sensors are usually perturbed by noise, which affects downstream tasks such as surface reconstruction and analysis. Although many efforts have been made for static point cloud denoising, dynamic point cloud denoising remains under-explored. In this paper, we propose a novel gradient-field-based dynamic point cloud denoising method, exploiting the temporal correspondence via the estimation of gradient fields -- a fundamental problem in dynamic point cloud processing and analysis. The gradient field is the gradient of the log-probability function of the noisy point cloud, based on which we perform gradient ascent so as to converge each point to the underlying clean surface. We estimate the gradient of each surface patch and exploit the temporal correspondence, where the temporally corresponding patches are searched leveraging rigid motion in classical mechanics. In particular, we treat each patch as a rigid object, which moves in the gradient field of an adjacent frame via force until reaching a balanced state, i.e., when the sum of gradients over the patch reaches 0. Since the gradient would be smaller when the point is closer to the underlying surface, the balanced patch would fit the underlying surface well, thus leading to the temporal correspondence. Finally, the position of each point in the patch is updated along the direction of the gradient averaged from corresponding patches in adjacent frames. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods under both synthetic noise and simulated real-world noise.
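The gradient-ascent idea in this abstract can be illustrated with a hand-rolled score function. Here the learned gradient field is replaced by the gradient of a Gaussian kernel density estimate, which is only a stand-in for the paper's model; function names and hyperparameters are assumptions.

```python
import numpy as np

def kde_score(x, samples, h=0.3):
    """Gradient of the log of a Gaussian kernel density estimate at x:
    points toward higher-density regions, i.e. toward the surface."""
    diff = samples - x                               # vectors toward samples
    w = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h * h))
    return (w[:, None] * diff).sum(0) / (h * h * w.sum())

def gradient_ascent_denoise(points, steps=20, lr=0.05, h=0.3):
    """Move every point along the estimated gradient of log-density,
    converging toward the high-density (clean surface) region."""
    pts = points.copy()
    for _ in range(steps):
        pts = pts + lr * np.array([kde_score(p, points, h) for p in pts])
    return pts
```

Because the score vanishes at density modes, points settle near the estimated surface rather than collapsing to a single location, which is the property the paper exploits when balancing whole patches in the gradient field of an adjacent frame.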
... At present, there are two kinds of denoising methods for point cloud data: local methods [14,[17][18][19][20][21][22][23][24][25] and non-local methods [13,15,16,[26][27][28]. The former involves the denoising of the point cloud based on the neighborhood of points, while the latter involves identifying similar point patches from the local neighborhood and combining this group of patches for denoising. ...
... However, this was found to depend on the self-similarity among the surface blocks in the point cloud, and required a large amount of calculation. Deschaud et al. [28] used the polynomial coefficients of local MLS surfaces as neighborhood descriptors to calculate the similarity of points, and then proposed a non-local denoising (NLD) algorithm. Zeng et al. [13] proposed the GLR denoising algorithm to seek self-similar patches to denoise the point cloud. ...
Article
Full-text available
A point cloud obtained by stereo matching algorithm or three-dimensional (3D) scanner generally contains much complex noise, which will affect the accuracy of subsequent surface reconstruction or visualization processing. To eliminate the complex noise, a new regularization algorithm for denoising was proposed. In view of the fact that 3D point clouds have low-dimensional structures, a statistical low-dimensional manifold (SLDM) model was established. By regularizing its dimensions, the denoising problem of the point cloud was expressed as an optimization problem based on the geometric constraints of the regularization term of the manifold. A low-dimensional smooth manifold model was constructed by discrete sampling, and solved by means of a statistical method and an alternating iterative method. The performance of the denoising algorithm was quantitatively evaluated from three aspects, i.e., the signal-to-noise ratio (SNR), mean square error (MSE) and structural similarity (SSIM). Analysis and comparison of performance showed that compared with the algebraic point-set surface (APSS), non-local denoising (NLD) and feature graph learning (FGL) algorithms, the mean SNR of the point cloud denoised using the proposed method increased by 1.22 dB, 1.81 dB and 1.20 dB, respectively, its mean MSE decreased by 0.096, 0.086 and 0.076, respectively, and its mean SSIM decreased by 0.023, 0.022 and 0.020, respectively, which shows that the proposed method is more effective in eliminating Gaussian noise and Laplace noise in common point clouds. The application cases showed that the proposed algorithm can retain the geometric feature information of point clouds while eliminating complex noise.
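The SNR and MSE metrics used in this evaluation can be computed as below for clouds whose points are in known correspondence. Exact definitions vary between papers, so treat this as one common convention rather than the cited paper's formula.

```python
import numpy as np

def snr_db(clean, denoised):
    """Signal-to-noise ratio in dB between a clean point cloud and its
    denoised estimate: signal energy over residual (error) energy."""
    signal = np.sum(clean ** 2)
    noise = np.sum((denoised - clean) ** 2)
    return 10.0 * np.log10(signal / noise)

def mse(clean, denoised):
    """Mean squared error: average squared Euclidean distance between
    corresponding points of the two clouds."""
    return np.mean(np.sum((denoised - clean) ** 2, axis=1))
```

When point correspondence is unknown (the usual case after resampling), nearest-neighbor distances or the Chamfer Distance are used instead.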
... Despite decades of research, point cloud denoising remains a challenging problem, because of the intrinsic complexity of the topological relationship and connectivity among points. Traditional denoising methods [2,29,34,4,8] perform well in some circumstances. However, they generally rely on prior knowledge on point sets or some assumptions on noise distributions, and they may compromise the denoising quality for unseen noise (e.g., distortion, non-uniformity). ...
... 3) Neighborhood-based filtering methods measure the correlation and similarity between a point and its neighbor points. Nonlocal-based methods [8,21,47,53] generally detect selfsimilarity among nonlocal patches and consolidate them into coherent noise-free point clouds. Graph-based denoising methods [14,18,44,52] naturally represent point cloud geometry with a graph. ...
Preprint
Point cloud denoising aims to restore clean point clouds from raw observations corrupted by noise and outliers while preserving the fine-grained details. We present a novel deep learning-based denoising model, that incorporates normalizing flows and noise disentanglement techniques to achieve high denoising accuracy. Unlike existing works that extract features of point clouds for point-wise correction, we formulate the denoising process from the perspective of distribution learning and feature disentanglement. By considering noisy point clouds as a joint distribution of clean points and noise, the denoised results can be derived from disentangling the noise counterpart from latent point representation, and the mapping between Euclidean and latent spaces is modeled by normalizing flows. We evaluate our method on synthesized 3D models and real-world datasets with various noise settings. Qualitative and quantitative results show that our method outperforms previous state-of-the-art deep learning-based approaches.
... Inspired by image denoising, researchers have also investigated the nonlocal aspects of point cloud denoising. The nonlocal-based point cloud filtering methods [3,4,36,2] often incorporated normal information and designed different similarity descriptions to update point positions in a nonlocal manner. Among them, [3] proposed a similarity descriptor for point cloud patches based on MLS surfaces. ...
... [4] designed a height vector field to describe the difference between the neighborhood of a point and the neighborhoods of other points on the surface. ...
Preprint
Full-text available
As a popular representation of 3D data, point clouds may contain noise and need to be filtered before use. Existing point cloud filtering methods either cannot preserve sharp features or result in uneven point distributions in the filtered output. To address this problem, this paper introduces a point cloud filtering method that considers both point distribution and feature preservation during filtering. The key idea is to incorporate a repulsion term with a data term in energy minimization. The repulsion term is responsible for the point distribution, while the data term aims to approximate the noisy surfaces while preserving the geometric features. This method is capable of handling models with fine-scale features and sharp features. Extensive experiments show that our method yields better results with a more uniform point distribution ($5.8\times10^{-5}$ Chamfer Distance on average) in seconds.
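The Chamfer Distance quoted above is typically computed as a symmetric nearest-neighbor average between the two point sets. A brute-force version, adequate for small clouds (large clouds would use a k-d tree), is:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a and b: for each
    point, take the squared distance to its nearest neighbor in the
    other set, then average over both directions and sum."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Conventions differ on whether the two directional terms are summed or averaged, and whether squared or plain distances are used, so reported values are only comparable within one paper's protocol.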
... Non-local means, dictionary-based methods. Another very prominent category of methods, inspired in part by image-based techniques, consists of non-local filtering, most often based on detecting similar shape parts (patches) and consolidating them into a coherent noise-free point cloud [64,239,70,71,234]. Closely related are methods based on constructing "dictionaries" of shapes and their parts, which can then be used for denoising and point cloud filtering, e.g., [231,74] (see also a recent survey of dictionary-based methods [146]). ...
Thesis
Efficiently processing and analysing 3D data is a crucial challenge in modern applications as 3D shapes are becoming more and more widespread with the proliferation of acquisition devices and modeling tools. While successes of 2D deep learning have become commonplace and surround our daily life, applications that involve 3D data are lagging behind. Due to the more complex non-uniform structure of 3D shapes, successful methods from 2D deep learning cannot be easily extended and there is a strong demand for novel approaches that can both exploit and enable learning using geometric structure. Moreover, being able to handle the various existing representations of 3D shapes such as point clouds and meshes, as well as the artefacts produced from 3D acquisition devices increases the difficulty of the task. In this thesis, we propose systematic approaches that fully exploit geometric information of 3D data in deep learning architectures. We contribute to point cloud denoising, shape interpolation and shape reconstruction methods. We observe that deep learning architectures facilitate learning the underlying surface structure on point clouds that can then be used for denoising as well as shape interpolation. Encoding local patch-based learned priors, as well as complementary geometric information such as edge lengths, leads to powerful pipelines that generate realistic shapes. The key common thread throughout our contributions is facilitating seamless conversion between different representations of shapes. In particular, while using deep learning on triangle meshes is highly challenging due to their combinatorial nature we introduce methods inspired from geometry processing that enable the creation and manipulation of triangle faces. Our methods are robust and generalize well to unseen data despite limited training sets. Our work, therefore, paves the way towards more general, robust and universally useful manipulation of 3D data.
... Denoising and outlier detection in a dataset depend on the point distribution and density of the point cloud. Denoising algorithms either preprocess a laser point cloud before reconstruction or are applied as a post-treatment directly on meshes [1]. For example, a mesh denoising algorithm was proposed by [2] for smooth surfaces, filtering vertices in the normal direction using local neighborhoods as an adaptation of bilateral filtering to LiDAR data and stereo images to develop an accurate 3D scene. ...
Article
Full-text available
High-resolution point cloud data acquired with a laser scanner from any platform contain random noise and outliers. Therefore, outlier detection in LiDAR data is often necessary prior to analysis. Applications in agriculture are particularly challenging, as there is typically no prior knowledge of the statistical distribution of points, plant complexity, and local point densities, which are crop-dependent. The goals of this study were first to investigate approaches to minimize the impact of outliers on LiDAR acquired over agricultural row crops, and specifically for sorghum and maize breeding experiments, by an unmanned aerial vehicle (UAV) and a wheel-based ground platform; second, to evaluate the impact of existing outliers in the datasets on leaf area index (LAI) prediction using LiDAR data. Two methods were investigated to detect and remove the outliers from the plant datasets. The first was based on surface fitting to noisy point cloud data via normal and curvature estimation in a local neighborhood. The second utilized the PointCleanNet deep learning framework. Both methods were applied to individual plants and field-based datasets. To evaluate the method, an F-score was calculated for synthetic data in the controlled conditions, and LAI, the variable being predicted, was computed both before and after outlier removal for both scenarios. Results indicate that the deep learning method for outlier detection is more robust than the geometric approach to changes in point densities, level of noise, and shapes. The prediction of LAI was also improved for the wheel-based vehicle data based on the coefficient of determination (R2) and the root mean squared error (RMSE) of the residuals before and after the removal of outliers.
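The geometric, neighborhood-statistics alternative to PointCleanNet that this study evaluates is commonly implemented as statistical outlier removal. The following is a generic sketch of that classic filter (brute-force distances, illustrative parameter values), not the study's exact procedure.

```python
import numpy as np

def statistical_outlier_mask(points, k=4, std_ratio=1.0):
    """Classic statistical outlier removal: flag points whose mean
    distance to their k nearest neighbors exceeds the global mean of
    that statistic by more than std_ratio standard deviations.
    Returns a boolean mask where True marks an inlier."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)[:, 1:k + 1]   # skip the zero self-distance
    mean_d = d_sorted.mean(axis=1)              # per-point neighbor distance
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return mean_d <= thresh
```

As the study notes, such geometric filters are sensitive to local point density: in crops with naturally sparse regions, legitimate points can exceed the threshold, which is where the learned detector proved more robust.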