Modeling of 3D geometry uncertainty in Scan-to-BIM automatic indoor reconstruction

M. Jarząbek-Rychard a,b,*, H.-G. Maas b

a Institute of Geodesy and Geoinformatics, Wroclaw University of Environmental and Life Sciences, Poland
b Institute of Photogrammetry and Remote Sensing, Technische Universität Dresden, Germany

* Corresponding author at: Institute of Geodesy and Geoinformatics, Wroclaw University of Environmental and Life Sciences, Poland. E-mail addresses: malgorzata.jarzabek-rychard@upwr.edu.pl (M. Jarząbek-Rychard), hans-gerd.maas@tu-dresden.de (H.-G. Maas).
ARTICLE INFO
Keywords:
Error propagation
BIM
Uncertainty modeling
Indoor 3D models
Building reconstruction
Accuracy
ABSTRACT
The rapidly expanding field of Scan-to-BIM applications highlights the importance of model uncertainty assessment in describing the quality of modeling results. Although there have been recent research advancements in point cloud-based building modeling, there has been limited investigation into accurately analyzing error propagation. This paper estimates the geometry uncertainty in 3D modeling based on a strict application of geodetic stochastic modeling. Statistical uncertainty is incorporated into the building reconstruction process and procedures that enable self-verification within this process are developed. The method can be successfully used to evaluate the dimensional uncertainty of generated BIMs, which is especially important in the field of civil engineering with high accuracy requirements concerning metric quality control. Follow-up research will also consider systematic errors and apply the methods to other 3D point cloud acquisition techniques.
1. Introduction
Building Information Models (BIMs) are digital representations of urban facilities that record all data relevant to a building lifecycle, supporting information creation and exchange [1]. They are widely used in many applications, such as building maintenance and inspection, preservation, or emergency response. Accurate digital models of as-built building interiors provide key information for the maintenance and adaptation of existing facilities. Effective decision support is feasible only when accurate, up-to-date, and complete as-built information about existing buildings is available. At present, however, BIMs do not exist for many buildings, or they represent the 'as-designed' condition of the object, which may not accurately reflect the current state of the facility because of differences introduced during construction and periodic renovations.
Traditional ways of collecting as-built information, such as range finders or total stations, are tedious and time-consuming, especially in complex facilities. In the last decade, Scan-to-BIM approaches have gained increasing relevance; they provide up-to-date 3D point clouds which serve as input data for the generation of 3D indoor models [2]. Thanks to advances in technology, Scan-to-BIM has become an attractive alternative to traditional techniques of data acquisition, capable of significantly reducing project costs [3]. A growing number of construction practitioners have started to use BIMs, noticing their potential as parametric representations and repositories of accurate as-built information. Facilitated data collection triggered the need for 3D reconstruction methods with the highest possible degree of automation. So far, 3D models are often generated for the purpose of architectural and visual representation, which does not fulfill many requirements of more advanced applications [4]. In the field of civil engineering, with high accuracy requirements concerning metric quality control in the range of millimeters, such models are often insufficient. In particular, modeling of building interiors still remains an open challenge, due to environmental constraints causing weak scanning geometries, a high amount of clutter, and occlusion [5]. At the same time, there is a great need for a thorough evaluation of indoor as-built 3D models, which demand high geometric resolution and reliably confirmed accuracy.
The digital representation of the real world in Scan-to-BIM applications is inherently uncertain. During the modeling phase, unavoidable uncertainties of the measured input data propagate into the accuracy of the generated 3D model. Therefore, the compliant building models cannot be considered error-free. It is also important to recognize that the accuracy level of the measurement results may significantly differ from the obtained accuracy level of the modeling product. Processing the data into a model may lead to the accumulation of errors, but on the other hand, redundancy in the measurements will often improve the final quality of
the output. The intensively growing scope of 3D model applications, far beyond visualization purposes, leads to the increasing relevance of uncertainty assessment, which indicates the quality of the reconstruction results. Inaccurate 3D models can negatively affect the application to which they are applied, for example the reliability of risk monitoring or the precision of the reference model. Procedures to estimate the uncertainties associated with a measurement system and modeling method are especially needed in the design phase of the Scan-to-BIM process. Such an a priori investigation allows for the prediction of the achievable accuracy of the data processing pipeline which is in development [6]. Furthermore, the estimation procedures may provide knowledge about causes of errors and modeling sub-processes that are sensitive to uncertainties.

Each step of the comprehensive Scan-to-BIM process is associated with inaccuracies that affect the final result. Error propagation starts already from the positioning of the data collection device. The choice of the ranging technique, calibration, and environmental conditions during the acquisition process are also influential factors. Moreover, multi-scan registration error effects propagate into the accuracy of the acquired 3D points. So far, most research has focused on the impact of sensor calibration and point cloud registration [7]. Therefore we focus on the final step of the error propagation chain, which leads from the 3D input data to the generated as-built model. Although each of the aforementioned factors contributes to the accuracy of the final 3D model, scientific investigation of the impact of the modeling phase on model accuracy and precision is barely covered. While there are numerous reconstruction methods proposed in the literature, they still lack a thorough investigation of metric tolerances in engineering practice. Estimation of the reliability level of as-built conditions is especially important for analyzable building models, which have to ensure that planned neighboring elements will fit together after installation on site. Available methods for the quality assessment of the reconstructed 3D models are mostly limited to the comparison of the final product against input data or reference models, e.g. [8,9], without addressing the impact of the modeling process itself. Considering that the majority of efforts in the field of Scan-to-BIM applications are associated with the reconstruction part [10-12], explicit analysis of error propagation for this problem is clearly underrepresented.
The aim of this research is to create a framework for establishing a
relationship between the quality of collected 3D point cloud data and the
quality of the generated model. In the contribution, we propose an
analytical approach for the modeling of 3D geometry uncertainty, based on the first-order Taylor-series expansion. The study focuses on the propagation of uncertainties from fitting objects to 3D points into the positional accuracy of the reconstructed building elements. The new insight of the research comes from considering statistical uncertainties in the building reconstruction process and developing a procedure that allows this process to verify itself. We establish a general model for 3D data fitting, followed by the derivation of symbolic uncertainty expressions for the reconstructed building elements. All investigations consider the correlations between parameters. Furthermore, we compute confidence intervals to quantify the probability that the reconstructed building elements fall within certain tolerance thresholds. The implemented procedure enables the integration of the results obtained in terms of error propagation theory with the uncertainty information according to the standardized LOA specification for BIM. The developed theoretical formulae are numerically tested on a 3D point cloud representing an indoor scene in a university building. In the presented work we focus on terrestrial laser scanning (TLS) as the data collection technique. Nevertheless, the developed solutions are also relevant for 3D point cloud data acquired in alternative ways, for instance using multi-image-based photogrammetric techniques or through 3D measurement devices in smartphones. Finally, the presented results of the numerical experiment are thoroughly analyzed, quantified, and illustrated.

Fig. 1. Overall flowchart of the presented uncertainty estimation methodology.

Fig. 2. Input 3D point cloud (left) and the reconstructed 3D indoor model (right).

Table 1
Level of Accuracy structure definition ([48], specified at the 95% confidence level).

Level   Upper Range   Lower Range
LOA10   15 cm         5 cm
LOA20   5 cm          15 mm
LOA30   15 mm         5 mm
LOA40   5 mm          1 mm
LOA50   1 mm          0
The structure of this paper is as follows: state-of-the-art developments in the area of Scan-to-BIM are described in Section 2. Section 3 presents the methodology of 3D geometry uncertainty modeling. The impact of the reconstruction process on the geometric accuracy of generated building models is presented and evaluated in Section 4. Conclusions are summarized in Section 5.
2. Related work
The use of Scan-to-BIM techniques to support building design and construction practices has become a global standard [13]. The process can be divided into three main steps: (i) data collection and registration, (ii) 3D modeling, and (iii) performance evaluation. Such a framework has been adopted as a standard procedure for the generation of as-built BIMs for various purposes, e.g. indoor navigation [14], emergency response [15], change detection [16], or quality control [17]. Depending on the target application, the requirements for the generated model vary with respect to the geometric accuracy, level of detail, and efficiency of the whole process.
2.1. Data collection and registration
Terrestrial laser scanning techniques are available to quickly collect up-to-date 3D point clouds that can serve as a representation of building interiors. Although constrained indoor conditions affect the quality of TLS data, their degrading impact is smaller compared to most image-based techniques. Densely collected 3D point clouds enable the representation of complex geometry, required in construction and civil engineering projects [18]. As with every measurement, TLS data acquisition is characterized by a certain level of accuracy. Errors in data collection propagate through the data processing chain and lead to statistical uncertainty in the final results. A priori knowledge of the stochastic properties of the measurements is therefore important to compute statistically significant geometric attributes of the reconstructed models. Typical errors are connected to the scanning device and the quality of point cloud registration. Complex topics related to errors in TLS have been comprehensively investigated in numerous publications. Error sources are explored together with related influencing parameters and their effects on the geometric quality of collected 3D point clouds in [19,20]. Stochastic models based on intensity information are presented in [21,22]. [23] investigates the magnitude of systematic deviations and estimates measurement uncertainty for the purpose of geodetic applications and deformation analyses. [24,25] model systematic errors and propose calibration strategies to reduce their impact.
2.2. 3D modeling
Scan-to-BIM modeling practice is dominated by manual and semi-manual 3D reconstruction techniques, conducted in BIM authoring software. Automation of the 3D reconstruction of as-built models of indoor environments has been addressed in recent years by numerous researchers. Nevertheless, fully automated modeling of arbitrarily complex building interiors based on 3D point clouds collected by laser scanners still remains an open challenge [12]. A typical difficulty for indoor scenes is a high amount of clutter and occlusion. Indoor spaces are usually filled with various furniture and other objects, which are difficult to discriminate from building surfaces [26]. Finally, a core reason that makes the process of indoor modeling challenging is the variety and complexity of the structure of building interiors, which requires generic and flexible reconstruction algorithms. Due to the aforementioned challenges, many approaches were limited to single-room modeling or clutter-free environments, e.g. [27,28]. With the rapid advancement in 3D point cloud acquisition techniques, the need for as-built modeling of large and complex environments with multiple rooms has come to the fore (e.g. [29,30]). As a consequence of targeting visual applications, most of the reconstruction methods presented in the literature use a surface-based representation of the generated objects, e.g. [31,32]. Alternatively, structural elements are represented by volumetric representations [33,34], which facilitate the extraction of model topology. Semantic enrichment of existing model geometry by indoor objects detected from complementary data is presented in [35]. Since point clouds acquired in building interiors typically feature numerous data gaps, 3D reconstruction based on incomplete data has recently gained increasing attention in indoor reconstruction [36,37].

Fig. 3. Exemplary data part (a), direct input to the error propagation analysis: data segmented into planar patches, with plane IDs (b), and the corresponding vectorised indoor model, with edge IDs (c).
2.3. Performance evaluation
2.3.1. Assessment of 3D model accuracy
Regardless of the capabilities of data acquisition and 3D reconstruction methods, the complex indoor environment causes outliers, occlusions, data gaps, and uneven point distribution. Consequently, modeling of 3D geometry based on the collected data is a bottleneck of the whole Scan-to-BIM process and contributes significantly to the accuracy of the final result [38]. The extensive research on automated solutions for the generation of as-built models is, however, not accompanied by the development of corresponding evaluation methods. Unified procedures for comparison and quality evaluation are still hardly available. Quality assessment of 3D building models is often based on visual inspection, which is subjective and time-consuming [39-41], or on the comparison with a ground truth reference [42-44]. Besides accuracy, the assessment of the geometric quality of a 3D indoor model executed by object comparison may also take into consideration two other aspects: completeness and correctness, which indicate to what extent the reconstructed elements overlay with the reference data. The accuracy aspect expresses how geometrically close the reconstructed elements are to the compared objects, assuming that the underlying geometric model is correct. Accuracy metrics used for this purpose are, however, not standardized and they are interpreted in various ways. In [8] accuracy is defined based on the Euclidean distance between 3D points sampled densely from the reference model and the closest surface in the reconstructed model. Furthermore, in [45], point-to-plane distances larger than a preset maximum value are excluded from the evaluation to avoid the influence of outliers. Besides a reference model, the reconstructed object can also be compared with the actual input data that it is based on. The evaluation presented in [46] is calculated using the Chamfer distance between the 3D model and the corresponding 3D points. The evaluation of the generated BIM presented in [47] is performed by comparison with a ground truth IFC model. The experiment evaluates model recall, precision, and geometric accuracy based on the approximated distances from the points to the nearest triangulated surface of the reconstructed model.
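To make the distance-based accuracy metrics discussed above concrete, the following minimal sketch computes a point-to-plane RMSE with an outlier cut-off and a symmetric Chamfer distance between two point samples. It assumes plain NumPy arrays and brute-force nearest-neighbour search; the function names are illustrative and not taken from the cited works.

```python
import numpy as np

def nearest_point_distances(query, target):
    """For each point in `query`, the distance to its nearest neighbour in
    `target` (brute force; adequate for small samples)."""
    d = np.linalg.norm(query[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1)

def chamfer_distance(model_pts, reference_pts):
    """Symmetric Chamfer distance between two 3D point samples."""
    return (nearest_point_distances(model_pts, reference_pts).mean()
            + nearest_point_distances(reference_pts, model_pts).mean())

def point_to_plane_rmse(points, plane, max_dist=None):
    """RMSE of orthogonal point-to-plane distances for a plane (a, b, c, d);
    distances above `max_dist` are discarded to limit the influence of outliers."""
    a, b, c, d = plane
    dist = np.abs(points @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
    if max_dist is not None:
        dist = dist[dist <= max_dist]
    return float(np.sqrt(np.mean(dist ** 2)))

# small synthetic example
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, (500, 3))
model = reference + rng.normal(0.0, 0.004, reference.shape)
plane_pts = np.column_stack([rng.uniform(0, 2, 300), rng.uniform(0, 2, 300),
                             0.5 + rng.normal(0, 0.003, 300)])
print("Chamfer distance [m]:", round(chamfer_distance(model, reference), 4))
print("point-to-plane RMSE [m]:",
      round(point_to_plane_rmse(plane_pts, (0.0, 0.0, 1.0, -0.5), max_dist=0.05), 4))
```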
Very often, the comparison techniques are not validated on a common benchmark and provide a biased view of the quality of the generated model. One of the rare examples of establishing common standards and consistency in documenting existing conditions is the Level of Accuracy (LOA) Specification [48] provided by the U.S. Institute of Building Documentation (USIBD). The LOA constitutes a framework for the BIM-related industry to be used as guidance on the representation, modeling, and verification of as-built information. The specification refers to the acceptable tolerance range for data acquisition, as well as to the tolerance accuracy of the representation of these data in the form of a 3D model. According to the USIBD, each individual building component can be assigned to a certain accuracy level. The specification defines five geometric Levels of Accuracy in terms of Standard Deviation (Table 1), using a 95% confidence level which is commonly used for the estimation of positional uncertainty in surveying.

Table 2
Plane fitting quality; parameters without units are dimensionless (single indoor space, cf. Fig. 3).
Columns: Plane ID | Number of points | Wall size [m²] | Data coverage [pts/m²] | Normalized vector v (A, B, C) | D | Plane slope (vert.) [deg] | v error | D error | Slope error [deg] | Plane-point mean dist. [m]

1 | 3237 | 5.1 | 636 | (0.0191, 0.0069, 0.9998) | 10.524 | 11.612  | 0.00006 | 0.0002 | 0.0003 | 0.0037
2 | 4807 | 5.1 | 944 | (0.0252, 0.0109, 0.9996) | 24.118 | 15.761  | 0.00008 | 0.0003 | 0.0003 | 0.0032
3 | 8226 | 9.0 | 910 | (0.4577, 0.8889, 0.0184) | 10.687 | 91.0529 | 0.00003 | 0.0001 | 0.0009 | 0.0034
4 | 745  | 6.8 | 110 | (0.8868, 0.4596, 0.0491) | 41.834 | 87.1884 | 0.00150 | 0.0030 | 0.0772 | 0.0049
5 | 2003 | 9.0 | 222 | (0.4488, 0.8934, 0.0201) | 0.8503 | 88.8491 | 0.00012 | 0.0003 | 0.0063 | 0.0029
6 | 2828 | 6.8 | 413 | (0.8891, 0.4571, 0.0232) | 67.937 | 88.6702 | 0.00020 | 0.0002 | 0.0066 | 0.0072

Fig. 4. Dependency between the average plane-point distance and the uncertainty of plane slope (no correlation).

Table 3
Uncertainty of 3D model vertices (single indoor space, cf. Fig. 2).
Columns: Vertex ID | Intersection planes | Average planes' slope error [deg] | Number of points | Mx [m] | My [m] | Mz [m] | 3D space error [m] | Max displacement [m], upper 95% CI

Floor
1 | 1, 3, 4 | 0.0238 | 12,208 | 0.0040 | 0.0021 | 0.0001 | 0.0045 | 0.0088
2 | 1, 4, 5 | 0.0302 | 5985   | 0.0036 | 0.0018 | 0.0001 | 0.0041 | 0.0090
3 | 1, 5, 6 | 0.0094 | 8068   | 0.0003 | 0.0003 | 0.0001 | 0.0005 | 0.0009
4 | 1, 6, 3 | 0.0030 | 14,291 | 0.0002 | 0.0001 | 0.0001 | 0.0003 | 0.0005
Ceiling
5 | 2, 3, 4 | 0.0237 | 13,778 | 0.0005 | 0.0003 | 0.0001 | 0.0006 | 0.0011
6 | 2, 4, 5 | 0.0302 | 7555   | 0.0008 | 0.0004 | 0.0001 | 0.0009 | 0.0017
7 | 2, 5, 6 | 0.0094 | 9638   | 0.0002 | 0.0001 | 0.0001 | 0.0003 | 0.0005
8 | 2, 6, 3 | 0.0029 | 15,861 | 0.0002 | 0.0001 | 0.0001 | 0.0003 | 0.0006

Fig. 5. Dependency between the number of points used for plane estimation and the uncertainty of plane slope.

Fig. 6. Dependency between the number of points used for vertex estimation and the vertex 3D accuracy.

Fig. 7. Positional uncertainty of the reconstructed model vertices, specified at the 95% confidence level.
2.3.2. Uncertainty in BIM
A common practice in the field of 3D modeling is to represent building objects in a geometrically generalized and orthogonal fashion. Such practice works properly for as-designed BIMs, which record the building design and specifications before construction begins. Since real-world conditions are hardly ever ideally orthogonal, as-built BIM regularity assumptions introduce errors. As-built models, representing a building after construction, often differ substantially from the as-designed ones due to imprecision of the building process, reworks, unexpected site conditions, and other changes to the original design made during the construction process. As a consequence, real construction elements rarely fit the assigned parametric representation perfectly. Furthermore, in Scan-to-BIM, referred to as the process of creating as-built BIMs from 3D point clouds, uncertainty in the digital representation is additionally increased by the inherent noise of the captured 3D point clouds.

Comparison-based methods for the accuracy assessment of the final modeling results assume perfect quality of the reference information. Such an assumption very often does not hold true, especially for manual reconstruction prone to subjective decisions and user experience [38]. Moreover, such methods cannot be used in a reverse way, to predict the quality of the modeling process depending on the quality of the acquired data, or even one step earlier, depending on the parameters of the acquisition sensors and the geometric conditions of the captured scene. Therefore, uncertainty assessment is an important topic in the BIM-related industry, indicating the quality of the conducted process. Unlike research focused on 3D reconstruction methods, this topic is, nevertheless, barely covered. Reasoning with uncertain geometry of objects is presented in [49]. Uncertain 3D building models are matched with thermal infrared images for high-quality extraction in [50]. The uncertainty of the geometry of a given BIM is a key issue in applications that require comparison to reference information, like for example change detection and monitoring of construction progress [16]. The overall accuracy of the final results is then dependent not only on the quality of the new objects being compared but in particular also on the reliability of the given reference model.
3. Methodology
The presented research aims at a general framework for the investigation of uncertainties in 3D building modeling. Obviously, the creation of a point cloud, which is a sequence of measurement steps with individual stochastic elements (e.g. investigated in [19,21,24]), contributes to the whole chain of error propagation. The generalization accuracy also plays an important role in describing the quality of a reconstructed 3D model. However, in this paper we focus on a thorough analysis of geometry error propagation in the modeling process from a 3D point cloud to a 3D vectorized model. An opening step towards this goal was presented in [51]. In this preliminary investigation, we focused on the relationship between the uncertainties of subsequent steps of the reconstruction process, excluding the direct influence of input data acquisition errors. In the methodological chain described below, we address this issue and propose a comprehensive error propagation model for the generation of a digital indoor scene. The presented analytical approach is also enhanced by the estimation of the probability that the reconstructed building elements fit into a certain accuracy tolerance, which may also be used to verify hypotheses about their orthogonality and regularization.

Fig. 8. Geometry uncertainty of the reconstructed indoor model, based on the uncertainty of the model vertices specified at the 95% confidence level (partial transparency of the corridor is applied for better visibility).

Fig. 9. Comparison of slope estimation for different types of edges.

Table 4
Estimated inclination of 3D model edges to the horizontal plane, with corresponding uncertainties (single indoor space).
Columns: Edge type | Edge ID | Intersection planes | Number of points | Slope error [deg] | Slope confidence interval [deg], 95% CI

Vertical edges
A | 3, 4 | 8971   | 0.0725 | 86.8534, 87.1377
B | 4, 5 | 2748   | 0.0718 | 86.8083, 87.0896
C | 5, 6 | 4831   | 0.0065 | 88.2460, 88.2207
D | 6, 3 | 11,054 | 0.0051 | 88.3146, 88.2944
Horizontal edges - floor
E | 1, 3 | 11,463 | 0.0019 | -0.0019, 0.0057
F | 1, 4 | 3982   | 0.0032 | 0.0338, 0.0463
G | 1, 5 | 5240   | 0.0019 | 0.0008, 0.0084
H | 1, 6 | 6065   | 0.0032 | -0.0015, 0.0110
Horizontal edges - ceiling
I | 2, 3 | 13,033 | 0.0030 | -0.0028, 0.0088
J | 2, 4 | 5552   | 0.0038 | -0.0036, 0.0112
K | 2, 5 | 6810   | 0.0030 | -0.0028, 0.0087
L | 2, 6 | 7635   | 0.0038 | -0.0036, 0.0111
Although building structural components can be abstracted in 3D space by various geometric objects, most of the building reconstruction methods presented in the literature are plane-driven [12]. Therefore, the presented methodological pipeline (Fig. 1) starts with the investigation of error propagation in algorithms for fitting planes to the segmented 3D point cloud. Assuming that the uncertainty of the input 3D points is known (given by the manufacturer or estimated by an accuracy investigation of the acquisition sensor), we estimate the error propagation in each subsequent modeling step. All investigations consider the correlations between parameters. The estimated fitted parameters and their associated covariance matrices are used for the subsequent derivation of the building elements and their accuracies. The final result of the proposed methodological pipeline contains analytical expressions of the covariance matrices associated with the vertices and edges of the generated building model. The developed models of the building geometry uncertainty are the basis for the subsequent probability estimation. The error sensitivity of the implemented modeling algorithm is then thoroughly analyzed and quantified.
3.1. Estimation of error propagation
The propagation of errors in computational processes can be investigated by methods that estimate the uncertainty of the processing results based on the input data and its associated uncertainty. This estimation can be performed stochastically or analytically. The first of the two approaches, the Monte Carlo technique, is a numerical method that works on simulated measurement data. Given some input information, a large population of corrupted data is created by repeatedly adding different random noise. This population is then processed and statistically evaluated to determine the uncertainty of the process output. Despite its advantage of uncomplicated implementation, the Monte Carlo method provides a solution limited to each specific problem. If the data is changed in any way, the whole simulation process has to be repeated. The second approach to uncertainty modeling is analytical and uses a Taylor-series expansion of the function that describes the data processing step. To linearize inherently nonlinear problems, the Taylor-series expansion is truncated to the first order. This method offers the benefits of a closed-form solution, providing an analytic expression of the uncertainty of the result. Since in the presented research we aim at the establishment of a general framework for the investigation of uncertainties in 3D building modeling, the methodology presented below is focused on the analytical modeling of error propagation.
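For illustration only, the following sketch applies the Monte Carlo alternative described above to a single plane-slope estimate: a synthetic wall sample is repeatedly perturbed with an assumed 3 mm point noise and the spread of the resulting slopes is evaluated. It is a toy example of the stochastic approach, not the analytical method developed in this paper; the sample geometry and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def plane_slope_deg(points):
    """Fit a plane to 3D points (smallest-singular-vector method) and return the
    angle between the plane and the horizontal, in degrees (sign-independent)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    a, b, c = vt[-1]                      # unit normal of the best-fit plane
    return np.degrees(np.arccos(abs(c) / np.sqrt(a * a + b * b + c * c)))

# simulated noise-free wall sample and an assumed per-coordinate sigma of 3 mm
x, z = np.meshgrid(np.linspace(0, 4, 40), np.linspace(0, 2.5, 25))
wall = np.column_stack([x.ravel(), np.zeros(x.size), z.ravel()])
sigma = 0.003

# Monte Carlo: perturb the input repeatedly and evaluate the spread of the output
slopes = [plane_slope_deg(wall + rng.normal(0.0, sigma, wall.shape))
          for _ in range(500)]
print(f"slope = {np.mean(slopes):.4f} deg, std = {np.std(slopes):.4f} deg")
```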
Given an explicit, continuously differentiable function f(x) that represents any operation on the input data, and the covariance matrix Λ_x of the input data described by x, we wish to compute the covariance matrix Λ_y of the result y = f(x) [53]. To achieve this, we need the Taylor series expansion of f(x) around the expected value x̄ of x:

f(\bar{x} + \Delta x) = f(\bar{x}) + \nabla f(\bar{x})\,\Delta x + O(\|\Delta x\|^2)    (3.1.1)

For the estimated vector y, the first-order approximation to the covariance matrix Λ_y is thus given by:

\Lambda_y = \nabla f \,\Lambda_x\, \nabla f^{T}    (3.1.2)

where ∇f describes the Jacobian of the function, which is obtained for explicit functions y = f(x) by computing partial derivatives:

\nabla f = \frac{\partial f(x)}{\partial x} =
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n}
\end{bmatrix}    (3.1.3)

If the mathematical relation f between the input x and the output y is defined as an implicit function:

\Phi(x, f(x)) = \Phi(x, y) = 0    (3.1.4)

then, according to the Implicit Function Theorem [54], the Jacobian ∇f exists if

\det\left(\frac{\partial \Phi}{\partial y}\right) \neq 0    (3.1.5)

and is given by:

\nabla f = -\left(\frac{\partial \Phi}{\partial y}\right)^{-1} \frac{\partial \Phi}{\partial x}    (3.1.6)
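A compact numerical counterpart of Eq. (3.1.2) is sketched below: the Jacobian is approximated by finite differences rather than derived symbolically, which is an implementation shortcut for illustration, not the closed-form derivation used in the following sections.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-7):
    """Finite-difference approximation of the Jacobian of f at x."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - f0) / eps
    return J

def propagate_covariance(f, x, cov_x):
    """First-order propagation (cf. Eq. 3.1.2): cov_y = J cov_x J^T."""
    J = numerical_jacobian(f, x)
    return J @ cov_x @ J.T

# example: uncertainty of a point-to-origin distance from per-coordinate sigmas
point = np.array([2.0, 1.0, 0.5])
cov = np.diag([0.003, 0.003, 0.005]) ** 2          # assumed standard deviations
cov_d = propagate_covariance(lambda p: np.array([np.linalg.norm(p)]), point, cov)
print(f"sigma of distance: {np.sqrt(cov_d[0, 0]) * 1000:.2f} mm")
```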
3.2. Uncertainty of plane fitting

The methodological pipeline presented in this paper starts with fitting planes to the noisy 3D points. The fitting operation is used to estimate the parameters of a geometric object by the minimization of a chosen error function. The presented procedure uses a least-squares method that minimizes the sum of the squares of the orthogonal distances from the points to the plane. The cost function works on the assumption that each point is subject to errors in all dimensions (instead of a single dimension only, as for example in ordinary linear regression); the whole data set can therefore be investigated in one coordinate system and does not require rotation or transformation into the individual coordinate system of each plane.
According to Eq. 3.1.2, the merged covariance matrix Λ_p of all points and the Jacobian ∇_LLS of the linear least squares problem are needed to estimate the error propagation of the fitting process. Since in terrestrial laser scanning the incidence angle affects the quality of a measured point, the uncertainty of each input point is computed individually. The matrix Λ_p is obtained by cascading the covariance matrices of all the individual points:

\Lambda_p = \begin{bmatrix} \Lambda_{p_1} & 0 & \cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \Lambda_{p_m} \end{bmatrix}    (3.2.1)
A plane in 3D space is represented by the equation:

a x + b y + c z + d = 0    (3.2.2)

where the parameters a, b, c are the components of a normal vector n = [a, b, c]^T, d denotes the distance from the origin, and x, y, z describe the coordinates of a point on the plane.

A set of N > 3 homogenized points p_i = (x_i, y_i, z_i, 1), i = 1, ..., N, can be arranged in a design matrix D, such that:

D = \begin{bmatrix} x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x_N & y_N & z_N & 1 \end{bmatrix}    (3.2.3)
If the points are assumed to be coplanar, but also perturbed with noise, the following equation is satisfied:

D y = e    (3.2.4)

where y is a vector representing the plane parameters and e is a vector of residual errors.

Considering normalized, mean-free points, the fitting process may be defined as a minimization problem of the form:

\min_y \|D y\| \quad \text{subject to } \|y\| = 1    (3.2.5)

The solution can then be found by solving the eigenvalue problem:

S y = \lambda y    (3.2.6)

where the scatter matrix S is defined as:

S = D^T D    (3.2.7)

The method delivers the coefficients of the normal vector of the fitted plane. The last plane parameter, d, is found by back substitution:

d = -[x\ \ y\ \ z]\,[a\ \ b\ \ c]^T    (3.2.8)
Since the linear least squares problem is a special case of an implicit function, eigenvalue analysis may be further used to calculate an approximation of the Jacobian of the defined minimization process. Using Eq. 3.2.6 under the assumption that the eigenvalue λ_min ≈ 0, we implicitly define y as:

\Phi = S y = 0    (3.2.9)

which yields:

\left(\frac{\partial \Phi}{\partial y}\right)^{-1} = S^{-1}    (3.2.10)

Considering that:

\frac{\partial \Phi}{\partial x} = \frac{\partial \Phi}{\partial S}\,\frac{\partial S}{\partial x}    (3.2.11)

the Jacobian of the linear least-squares fitting process is then given according to Eq. 3.1.6:

\nabla_{LLS} = -S^{-1}\,\frac{\partial \Phi}{\partial S}\,\frac{\partial S}{\partial x}    (3.2.12)
In accordance with Eq. 3.1.2 and Eq. 3.2.1, the covariance matrix of the normal vector coefficients is:

\Lambda_n = \nabla_{LLS}\,\Lambda_p\,\nabla_{LLS}^{T}    (3.2.13)

The complete vector of the plane parameters is determined by the following functional relationship:

y = f_{\pi}(a, b, c, x, y, z) = [\,a \ \ b \ \ c \ \ -(a x + b y + c z)\,]^T    (3.2.14)

Hence, the Jacobian ∇f_π of this function yields:

\nabla f_{\pi} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
-x & -y & -z & -a & -b & -c
\end{bmatrix}    (3.2.15)
Since the merged covariance matrix of the input parameters is now:

\Lambda_{np_m} = \begin{bmatrix} \Lambda_n & 0 \\ 0 & \Lambda_{p_m} \end{bmatrix}    (3.2.16)

where m is the number of input points, the final covariance matrix of the fitted plane is computed as:

\Lambda_{\pi} = \nabla f_{\pi}\,\Lambda_{np_m}\,\nabla f_{\pi}^{T}    (3.2.17)

and can be directly used to extract the uncertainties of the estimated plane parameters and their correlations.
Furthermore, the vertical slope s of the extracted plane is computed as the angle between the extracted plane and the horizontal plane:

s = \cos^{-1}\left(\frac{c}{\sqrt{a^2 + b^2 + c^2}}\right)    (3.2.18)

The related uncertainty of the plane slope is estimated taking the covariance matrix of the normal vector Λ_n and the Jacobian of the function s:

\nabla f_s = \frac{\partial s}{\partial (a, b, c)}    (3.2.19)

and is given by the following relation:

\Lambda_s = \nabla f_s\,\Lambda_n\,\nabla f_s^{T}    (3.2.20)
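The plane-fitting chain of this section can be prototyped as follows: the normal vector is taken as the singular vector of the centered point matrix belonging to the smallest singular value (equivalent to the smallest-eigenvalue eigenvector of the scatter matrix), d is obtained by back substitution of the centroid, and the covariances of the plane parameters and of the slope are propagated with a finite-difference Jacobian instead of the analytical expressions above. The point noise and patch geometry are assumed values chosen only for the demonstration.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a*x + b*y + c*z + d = 0 through noisy 3D points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    n = vt[-1]                       # (a, b, c), unit length
    d = -centroid @ n                # back substitution, cf. Eq. (3.2.8)
    if n[2] < 0:                     # keep a reproducible sign for differencing
        n, d = -n, -d
    return np.concatenate([n, [d]])

def plane_slope_deg(params):
    a, b, c, _ = params
    return np.degrees(np.arccos(c / np.sqrt(a * a + b * b + c * c)))

def propagate(f, x, cov_x, eps=1e-6):
    """First-order propagation with a finite-difference Jacobian (cf. Eq. 3.1.2)."""
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - f0) / eps
    return J @ cov_x @ J.T

# toy example: a slightly tilted floor patch with an assumed 3 mm point noise
rng = np.random.default_rng(1)
xy = rng.uniform(0, 4, size=(200, 2))
pts = np.column_stack([xy, 0.02 * xy[:, 0] + rng.normal(0, 0.003, 200)])
x_vec = pts.ravel()
cov_pts = np.eye(x_vec.size) * 0.003 ** 2        # cf. Eq. (3.2.1), equal sigmas

cov_plane = propagate(lambda v: fit_plane(v.reshape(-1, 3)), x_vec, cov_pts)
cov_slope = propagate(lambda v: np.array([plane_slope_deg(fit_plane(v.reshape(-1, 3)))]),
                      x_vec, cov_pts)
print("sigma of plane parameters:", np.sqrt(np.diag(cov_plane)))
print(f"sigma of plane slope: {np.sqrt(cov_slope[0, 0]):.4f} deg")
```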
3.3. Positional uncertainty of model vertices
Based on the analytically derived covariance matrices of the fitted planes (Eq. 3.2.17), we investigate the error sensitivity of the reconstruction algorithm that estimates the coordinates of model vertices. Given three planes l, g, m with the covariance matrices Λ_l, Λ_g, Λ_m, the intersection point p is computed as:

p = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \frac{1}{t}
\begin{bmatrix}
-(b_l c_g d_m - b_l c_m d_g - b_g c_l d_m + b_g c_m d_l + b_m c_l d_g - b_m c_g d_l) \\
\;\;\;(a_l c_g d_m - a_l c_m d_g - a_g c_l d_m + a_g c_m d_l + a_m c_l d_g - a_m c_g d_l) \\
-(a_l b_g d_m - a_l b_m d_g - a_g b_l d_m + a_g b_m d_l + a_m b_l d_g - a_m b_g d_l)
\end{bmatrix}    (3.3.1)

where

t = a_l b_g c_m - a_l b_m c_g - a_g b_l c_m + a_g b_m c_l + a_m b_l c_g - a_m b_g c_l

and a, b, c, d denote the corresponding plane parameters of planes l, g, m, according to Eq. 3.2.2.
The covariance matrix of the data is block diagonal of the form:

\Lambda_{lgm} = \begin{bmatrix} \Lambda_l & 0 & 0 \\ 0 & \Lambda_g & 0 \\ 0 & 0 & \Lambda_m \end{bmatrix}    (3.3.2)

Based on Eq. 3.3.1, the Jacobian of the point coordinate function is derived as:

\nabla f_p = \frac{\partial (X, Y, Z)}{\partial (a_l, b_l, c_l, d_l, a_g, b_g, c_g, d_g, a_m, b_m, c_m, d_m)}    (3.3.3)

The covariance matrix of the X, Y, Z coordinates of the intersection point is then given by:

\Lambda_p = \nabla f_p\,\Lambda_{lgm}\,\nabla f_p^{T}    (3.3.4)
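A sketch of the vertex step: the intersection point is obtained by solving the 3x3 linear system formed by the three plane equations (numerically equivalent to Eq. (3.3.1)), and its covariance follows Eq. (3.3.4) with a finite-difference Jacobian. The plane parameters and their covariances in the example are assumed values.

```python
import numpy as np

def intersect_three_planes(pl, pg, pm):
    """Intersection point of three planes (a, b, c, d) with a*x+b*y+c*z+d = 0,
    solved as a 3x3 linear system (numerically equivalent to Eq. 3.3.1)."""
    A = np.array([pl[:3], pg[:3], pm[:3]], dtype=float)
    rhs = -np.array([pl[3], pg[3], pm[3]], dtype=float)
    return np.linalg.solve(A, rhs)

def vertex_covariance(planes, plane_covs, eps=1e-6):
    """Propagate the 4x4 covariances of the three fitted planes (Eq. 3.2.17)
    into the 3x3 covariance of the vertex (cf. Eq. 3.3.4)."""
    x0 = np.concatenate(planes)                   # 12 plane parameters
    cov_x = np.zeros((12, 12))
    for k, c in enumerate(plane_covs):            # block-diagonal, cf. Eq. (3.3.2)
        cov_x[4 * k:4 * k + 4, 4 * k:4 * k + 4] = c
    p0 = intersect_three_planes(x0[:4], x0[4:8], x0[8:])
    J = np.zeros((3, 12))
    for i in range(12):
        dx = np.zeros(12); dx[i] = eps
        x1 = x0 + dx
        J[:, i] = (intersect_three_planes(x1[:4], x1[4:8], x1[8:]) - p0) / eps
    return J @ cov_x @ J.T

# toy example: a floor and two walls with small, assumed parameter covariances
floor = np.array([0.0, 0.0, 1.0, 0.0])
wall1 = np.array([1.0, 0.0, 0.0, -4.0])
wall2 = np.array([0.0, 1.0, 0.0, -3.0])
covs = [np.eye(4) * 1e-8] * 3
cov_v = vertex_covariance([floor, wall1, wall2], covs)
print("vertex:", intersect_three_planes(floor, wall1, wall2))
print("sigma xyz [m]:", np.sqrt(np.diag(cov_v)))
```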
3.4. Positional uncertainty of model edges
The covariance matrices of the fitted plane parameters, estimated in Section 3.2, are also used to derive a general model of error propagation for the uncertainty of the model edges. The directional vector of the 3D intersection line defined by two planes l, g is computed as:

v = \begin{bmatrix} v_i \\ v_j \\ v_k \end{bmatrix} =
\begin{bmatrix} b_l c_g - b_g c_l \\ a_g c_l - a_l c_g \\ a_l b_g - a_g b_l \end{bmatrix}    (3.4.1)
Based on the Jacobian of the function v:

\nabla f_v = \frac{\partial (v_i, v_j, v_k)}{\partial (a_l, b_l, c_l, a_g, b_g, c_g)}    (3.4.2)

and the merged covariance matrices Λ_{l_n}, Λ_{g_n} of the plane normal vectors, related to the a, b, c parameters, we derive the covariance matrix of the directional vector parameters of the intersection line as:

\Lambda_v = \nabla f_v\,\Lambda_{l_n, g_n}\,\nabla f_v^{T}    (3.4.3)

The inclination of a 3D edge of the model with respect to the horizontal plane is given by:

s_e = \cos^{-1}\left(\frac{v_k}{\sqrt{v_i^2 + v_j^2 + v_k^2}}\right)    (3.4.4)

and its corresponding covariance matrix Λ_{s_e}, computed based on the Jacobian of s_e:

\nabla f_{s_e} = \frac{\partial s_e}{\partial (v_i, v_j, v_k)}    (3.4.5)

is then derived as:

\Lambda_{s_e} = \nabla f_{s_e}\,\nabla f_v\,\Lambda_{l_n, g_n}\,\nabla f_v^{T}\,\nabla f_{s_e}^{T}    (3.4.6)
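Analogously, the edge step can be sketched as follows, using the slope measure exactly as written in Eq. (3.4.4) and propagating the covariances of the two plane normals with a finite-difference Jacobian; the normals and covariances in the example are assumed values.

```python
import numpy as np

def edge_slope_deg(n_l, n_g):
    """Slope measure of Eq. (3.4.4) for the intersection line of two planes;
    the line direction is the cross product of the plane normals."""
    v = np.cross(n_l, n_g)
    return np.degrees(np.arccos(v[2] / np.linalg.norm(v)))

def edge_slope_sigma(n_l, n_g, cov_nl, cov_ng, eps=1e-6):
    """Propagate the covariances of the two plane normals into the slope
    variance (cf. Eqs. 3.4.2-3.4.6), again with a finite-difference Jacobian."""
    x0 = np.concatenate([n_l, n_g])
    cov_x = np.zeros((6, 6))
    cov_x[:3, :3], cov_x[3:, 3:] = cov_nl, cov_ng
    s0 = edge_slope_deg(x0[:3], x0[3:])
    J = np.zeros((1, 6))
    for i in range(6):
        dx = np.zeros(6); dx[i] = eps
        J[0, i] = (edge_slope_deg((x0 + dx)[:3], (x0 + dx)[3:]) - s0) / eps
    return float(np.sqrt((J @ cov_x @ J.T)[0, 0]))

# toy example: two nearly vertical walls with assumed small normal covariances
n_wall1 = np.array([1.0, 0.001, 0.002])
n_wall2 = np.array([0.0, 1.0, -0.001])
sigma = edge_slope_sigma(n_wall1, n_wall2, np.eye(3) * 1e-8, np.eye(3) * 1e-8)
print(f"slope measure: {edge_slope_deg(n_wall1, n_wall2):.4f} deg, "
      f"sigma: {sigma:.4f} deg")
```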
3.5. Estimation probability
The derived covariance matrices provide us with the estimated standard deviations SD_β of the geometric parameters β, given by the corresponding square roots along the diagonal of the respective matrix. We further use this information to compute the range for the expected values of the estimated parameters. The obtained numbers reflect the probability that a BIM element is contained within a certain accuracy range and enable the assignment of its corresponding LOA (cf. Section 2.3). Assuming a normal distribution of the errors in the fitted data, we compute the confidence interval CI_β at the 95% confidence level, which can be regarded as the standard in surveying:

CI_\beta = \beta \pm 1.96 \cdot SD_\beta    (3.5.1)
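The confidence interval of Eq. (3.5.1) and the mapping to the LOA ranges of Table 1 can be combined in a few lines; the LOA assignment below is one simple reading of Table 1 (take the finest level whose upper range still contains the 95% uncertainty) and is only an illustration.

```python
# USIBD LOA upper ranges at the 95% confidence level (Table 1), in metres
LOA_UPPER = {"LOA50": 0.001, "LOA40": 0.005, "LOA30": 0.015,
             "LOA20": 0.05, "LOA10": 0.15}

def confidence_interval(beta, sd_beta, z=1.96):
    """95% confidence interval of an estimated parameter, cf. Eq. (3.5.1)."""
    return beta - z * sd_beta, beta + z * sd_beta

def loa_level(uncertainty_95):
    """Assign the finest LOA whose upper range still contains the given
    95% positional uncertainty (a simple reading of Table 1)."""
    for level, upper in LOA_UPPER.items():
        if uncertainty_95 <= upper:
            return level
    return "below LOA10"

# example: a vertex with a 1.2 mm standard deviation of its 3D position
sd = 0.0012
half_width = 1.96 * sd
print(f"95% CI half-width: {half_width * 1000:.1f} mm ->", loa_level(half_width))
```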
4. Experiments and results
4.1. Input data
The presented methodological pipeline forms a general framework that can be used for the evaluation of the uncertainty of the plane-based 3D reconstruction process. Modeled objects can take various forms that result from plane intersections and can be described by edges and corners. The input data for the algorithm contains 3D points with estimated uncertainties, irrespective of the data collection technique and point cloud generation method. In the experiment described below, we check the suitability of the framework using exemplary data for the modeling of indoor building primary elements (walls, ceilings, floors). The test object in our study is an indoor area covering 22 m x 15 m, located in a building of mainly educational and office use. The input data set contains the coordinates of 2 million points, acquired by a Leica BLK360. The scanner has a declared 3D accuracy of 6 mm at 10 m. According to the technical specification, the angular precision is equal to 40″ for both vertical and horizontal angles. The scanner has a range accuracy of 4 mm at 10 m and 7 mm at 20 m, which can be assumed to be a linear relationship throughout the entire range (0.6 m to 60 m) of the device. In the presented experiment, individual accuracy values for each 3D input point are computed based on the distance between the point and the scanner and the related incidence angle. The developed methodological chain (cf. Section 3) is tested on a 3D point cloud segmented into planar patches by the segmentation algorithm presented in [55]. Starting from the input set of classified wall points, the algorithm determines the best fitting planes based on predefined criteria, using a regression method and least squares adjustment. As a result, the whole data set is grouped into segments associated with recognized 3D planes. The selection of candidate points for the growing plane is based on a plane-point distance of 5 mm. In the case of large elements such as building structural components, such a threshold enables under-segmentation to be avoided. Over-segmentation is controlled by analyzing the topological relations between neighboring segments and merging similar parallel segments. Considering that the data is already roughly classified, with only single points belonging to clutter, such an implementation provides the segmentation results without the aforementioned errors. The collected input data and the corresponding 3D indoor model are visualized in Fig. 2. To facilitate the presentation of the consecutive steps of the processing pipeline, additional detailed analyses are performed on a small part of the exemplary data, depicted in Fig. 3. These analyses aim at the visualization of the dependency between the computed numerical values of the reconstruction uncertainty and the corresponding 3D objects.
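The paper does not spell out the exact formula used to derive the individual per-point accuracies, so the sketch below is only one plausible combination of the quoted specifications: the range accuracy is interpolated linearly between 4 mm at 10 m and 7 mm at 20 m, inflated by the incidence angle through a 1/cos term (an assumption), and combined with the lateral footprint of the 40″ angular precision.

```python
import numpy as np

def range_sigma(distance_m):
    """Range standard deviation, linearly interpolated between the quoted
    4 mm @ 10 m and 7 mm @ 20 m (extrapolated over the working range)."""
    return 0.004 + (0.007 - 0.004) * (distance_m - 10.0) / 10.0

def point_sigma(distance_m, incidence_deg,
                angular_sigma_rad=np.deg2rad(40 / 3600)):
    """Illustrative per-point 3D standard deviation: range noise inflated by
    the incidence angle (1/cos term, an assumption) combined with the lateral
    footprint of the angular precision (distance * angular sigma)."""
    sigma_range = range_sigma(distance_m) / np.cos(np.deg2rad(incidence_deg))
    sigma_lateral = distance_m * angular_sigma_rad
    return np.sqrt(sigma_range ** 2 + 2 * sigma_lateral ** 2)

for d, inc in [(5, 0), (10, 45), (20, 70)]:
    print(f"d = {d:4.1f} m, incidence = {inc:2d} deg -> "
          f"sigma = {point_sigma(d, inc) * 1000:.1f} mm")
```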
4.2. Model evaluation
4.2.1. Fitted planes
The practical application of the developed solution starts with the computation of the uncertainty of plane fitting. During the preprocessing stage, a raw point cloud is automatically segmented into planar patches. The results of this operation are depicted in Fig. 3b. The exemplary numerical statistics (related to the small model) are collected in Table 2. The numbers refer to the estimated parameters of each detected building plane (according to Eq. 3.2.2). The v error, calculated based on the plane parameters a, b, c, relates to the orientational part of the estimated plane, whereas the d error refers to the locational part of the plane (its distance from the origin). The errors of the estimated plane parameters form the basis for the subsequent derivation of the plane slopes (cf. Eqs. 3.2.18-3.2.20).

The computed metrics also provide the number of points for each segment, the size of the corresponding wall, and the resulting point density of the wall coverage. Large differences in the number of points per plane (about 9x) are mostly related to occlusions and wall openings. Additionally, we calculate the mean absolute distance of the 3D points to the estimated plane. The residuals, computed for the whole data set (Fig. 2), oscillate between 1 mm and 14 mm, with a median of 4 mm, also confirming the assumption of scanner accuracy in Section 4.1. The values are compared against the uncertainties of plane slope estimation in Fig. 4.

Although mean point-plane residuals are often applied in the 3D modeling field as an indicator of plane-fitting quality, in fact there is no direct correlation between these two indicators. The same lack of correlation is confirmed in the visual analysis of the small data model (Fig. 3). For example, the point segment assigned to plane 6 reveals large point-plane residuals, while its uncertainty of plane fitting remains relatively small. It is also visible that the plane fitting process achieved the highest uncertainty for segment 4, which is the wall with the worst data coverage. The same dependencies are observed in the large model. The inverse dependency between the number of wall points and the fitting uncertainty is illustrated in Fig. 5. Besides data coverage, the overall shape of the plane also affects the fitting quality, leading to higher uncertainty for planes with a large height-to-length ratio. The accuracy of the estimated plane slope varies from 0.0004 deg. up to 0.0542 deg. and is inversely correlated with the number of the corresponding 3D points used for the estimation.
4.2.2. Model vertices
In the following step of the 3D model evaluation, we investigate the sensitivity of the reconstructed model vertices to the plane fitting uncertainties. The exemplary values of the small model referring to the 3D vertex accuracy (maximum point displacements in 3D space), together with the information related to the corresponding intersection planes, are presented in Table 3. We can observe that the point errors related to the z coordinate are nearly constant for all the points, while the planar uncertainty varies. The direct impact of the number of points used for the estimation of a 3D vertex on its uncertainty (presented in Fig. 6) is not observed anymore, contrary to the previous step of plane fitting (cf. Fig. 5).

The largest 3D errors are observed for the bottom vertices created by the intersection with plane 4, which has the highest fitting uncertainty and no points in the lower segment part. Although plane fitting errors propagate into the errors of the 3D vertices, the final uncertainty of the result is also affected by the geometrical configuration of the collected data. The intersection of planes with large average slope errors does not directly lead to a large uncertainty of the resulting vertex. In the visualization of the results, we can notice a clear dependency between vertex neighborhood visibility and vertex uncertainty. The average 3D positional error of the whole data set, based on 208 intersection points, is equal to 1.0 mm, with values ranging from 0.3 mm to 2.6 mm.

Using the standard error analytically derived from the covariance matrix of the intersection point coordinates, we can determine confidence regions for the estimation of positional uncertainty (Section 3.5). For the 3D indoor model reconstructed in our experiment (presented in Fig. 2), the 3D positional uncertainty of the vertex coordinates determined at the 95% confidence level falls within the range of 0.5-5.1 mm, with an average value of 2.0 mm. For almost all of the vertices (98%, cf. Fig. 7) the estimated values are smaller than 5 mm. In terms of the USIBD specification (Table 1), the reconstructed building elements of the whole model can therefore be assigned to accuracy levels from LOA30 up to LOA50. The geometry uncertainty of the reconstructed 3D model is visualized in Fig. 8.
4.2.3. Model edges
In the last numerical experiment, we assess how the plane fitting errors affect the positional uncertainty of the model edges generated by the intersection of two planes. Based on the estimated parameters of 312 edges (3D line directional vectors and the associated covariance matrices), we derived the slope of each model edge, computed as its inclination to the horizontal plane, as well as its corresponding uncertainty (exemplary values for the small data set are collected in Table 4). The largest uncertainties are observed for edge A and edge B, which are generated by the intersection with plane 4, which has the worst fitting quality. The impact of input point geometry on the uncertainty of the reconstructed edges is observed again, but its effect is lower than in the case of vertex estimation. This is, for example, visible for edge H, which is nearly completely occluded; however, its quality measures are comparable to the results computed for the very well captured ceiling edges. The mean slope error of all the model edges is equal to 0.0094 deg.; however, the results significantly differ between vertical edges (average error 0.0225 deg.) and horizontal ones (0.0028 deg.). The results are compared and depicted in Fig. 9. In almost all cases, the orthogonality of a building model, commonly achieved in building reconstruction by forced model regularization, is not indicated by the computed values of the edge slopes and their uncertainties. Because it is a common practice in the 3D modeling field to enforce an orthogonal representation of the generated models, we investigate the probability that the reconstructed edges indeed fulfill the orthogonality constraints. Level-of-confidence measures are computed for the entire dataset using an analytically determined covariance matrix of the edge slope and estimating upper and lower confidence limits. For the 3D indoor model reconstructed in our experiment, the results enable us to infer, with 95% probability, that only 2% of the reconstructed edges could indeed be considered orthogonal.
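The orthogonality check described above amounts to testing whether the nominal value (0° or 90°) lies inside the 95% confidence interval of the estimated edge slope. A minimal sketch is given below, using two edges from Table 4 (the midpoints of the reported confidence intervals and the reported slope errors).

```python
def within_confidence(nominal, estimate, sd, z=1.96):
    """True if a nominal value (e.g. 0 or 90 deg) lies inside the 95% CI
    of the estimated edge slope."""
    return abs(estimate - nominal) <= z * sd

# illustrative values taken from Table 4 (CI midpoint and slope error)
edges = [("A (vertical)", 90.0, 86.9956, 0.0725),
         ("E (floor)",     0.0,  0.0019, 0.0019)]
for name, nominal, est, sd in edges:
    print(f"edge {name}: orthogonal within 95% CI ->",
          within_confidence(nominal, est, sd))
```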
5. Conclusion
The building modeling field lacks research investigating the impact of the reconstruction procedure on the accuracy of the generated BIM with respect to the given uncertainties of the data. This study fills this gap by establishing a general framework for the investigation of uncertainties in 3D reconstruction. The paper gives an insight into error propagation analysis for the estimation of building model geometric parameters, based on the Taylor series approximation. The novelty of the presented approach results from the consideration of statistical uncertainties in the modeling process. This allows for internal quality analyses of the vectorized models, without a need for reference data. Furthermore, we show how to apply the computed uncertainties to infer a final accuracy assessment based on the Level of Accuracy specification for BIM. The probability that the reconstructed building elements fall within certain tolerance thresholds is quantified, which enables the elements to be assigned the related LOA.
The presented workflow is suited for BIM objects whose shapes can be determined by planes and their intersections. It may be extended to object representations including non-planar geometric primitives such as cylinders, hemispheres, or also NURBS. Each consecutive step of the 3D reconstruction is related to the derived error propagation model. Taking segmented 3D points as input, we analytically estimate the covariance matrix of the fitted plane parameters and investigate the uncertainties associated with the reconstructed model elements computed based on them. The influence of input errors on the uncertainty of the reconstruction results is thoroughly studied and quantified. We also investigate the impact of data density and spatial configuration on the accuracy of the reconstructed elements. The primary aim of the research is to develop a general model for the evaluation of the uncertainty of the 3D reconstruction process, regardless of the modeled object type. For the demonstration of a practical solution, the developed procedure was applied to an exemplary indoor 3D point cloud. For the virtual model automatically generated from the input data, the analyses reveal that the average positional error of the 3D vertices equals 1.0 mm. Expected positional uncertainties of the model vertices determined at the 95% confidence level range from 0.5 to 5.1 mm. In terms of the LOA specification, 78% of the reconstructed vertices can be assigned to LOA40. The mean error of the inclination of the model 3D edges to the horizontal plane is equal to 0.0094 deg. For 98% of the edges, slope values of 0° or 90°, commonly used for building model regularization, are outside the confidence intervals computed at the 95% probability level. Assuming the reliability of the scanner self-leveling, such results would question the appropriateness of the commonly used orthogonality constraints if real as-built conditions are the goal of the modeling process.
The underlying methodology so far neglects the unflatness of a surface, assuming that the planes being reconstructed are exactly planar. In future work, this hypothesis will be verified using, for instance, residuals from the input data. The work also forms a basis to test the hypothesis on whether it is justified to impose regularity rules on the reconstructed elements of the model. Furthermore, follow-up work will take into consideration systematic (and thus correlated) errors related to the calibration of the terrestrial scanner and investigate alternative sources for 3D point cloud acquisition (e.g. personal laser scanners, or smartphones equipped with lidar sensors). In the presented practical part of the experiment, we demonstrate the suitability of the developed framework for the modeling of indoor building primary elements. We plan to enhance the scope of the investigation using BIMs developed for a very precise representation of building features with high detailing requirements.
Declaration of Competing Interest
Malgorzata Jarzabek-Rychard reports financial support was provided by the Polish National Agency for Academic Exchange.
Data availability
Data will be made available on request.
Acknowledgements
The project is financed by the Polish National Agency for Academic Exchange as part of the Mieczysław Bekker Programme.
References
[1] S. Herle, R. Becker, R. Wollenberg, GIM and BIM, PFG J. Photogramm. Remote Sens. Geoinforma. Sci. 88 (2020) 33-42, https://doi.org/10.1007/s41064-020-00090-4.
[2] S. Azhar, Building information modeling (BIM): trends, benefits, risks, and challenges for the AEC industry, Leadersh. Manag. Eng. 11 (2011) 241-252, https://doi.org/10.1061/(ASCE)LM.1943-5630.0000127.
[3] J. Li, L. Hou, X. Wang, J. Wang, J. Guo, S. Zhang, Y. Jiao, A project-based quantification of BIM benefits, Int. J. Adv. Robot. Syst. 11 (2014) 123-136, https://doi.org/10.5772/58.
[4] T. Czerniawski, F. Leite, Automated digital modeling of existing buildings: A review of visual object recognition methods, Autom. Constr. 113 (2020) 103131, https://doi.org/10.1016/j.autcon.2020.103131.
[5] S. Nikoohemat, A. Diakité, S. Zlatanova, G. Vosselman, Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management, Autom. Constr. 113 (2020) 1-17, https://doi.org/10.1016/j.autcon.2020.103109.
[6] F. Biljecki, G.B. Heuvelink, H. Ledoux, J. Stoter, Propagation of positional error in 3D GIS: estimation of the solar irradiation of building roofs, Int. J. Geogr. Inf. Sci. 29 (12) (2015) 2269-2294, https://doi.org/10.1080/13658816.2015.1073292.
[7] H. Hajian, B. Becerik-Gerber, Scan to BIM: factors affecting operational and computational errors and productivity loss, Int. Symp. Automation Robot. Construct. 27 (2010) 265-272, https://doi.org/10.22260/ISARC2010/0028.
[8] H. Tran, K. Khoshelham, A. Kealy, Geometric comparison and quality evaluation of 3D models of indoor environments, ISPRS J. Photogramm. Remote Sens. 149 (2019) 29-39, https://doi.org/10.1016/j.isprsjprs.2019.01.012.
[9] K. Khoshelham, H. Tran, D. Acharya, L. Díaz-Vilariño, Z. Kang, S. Dalyot, Results of the ISPRS benchmark on indoor modeling, ISPRS Open J. Photogramm. Remote Sens. 2 (2021) 100008, https://doi.org/10.1016/j.ophoto.2021.100008.
[10] P. Tang, D. Huber, B. Akinci, R. Lipman, A. Lytle, Automatic reconstruction of as-built building information models from laser-scanned point clouds: a review of related techniques, Autom. Constr. 19 (2010) 829-843, https://doi.org/10.1016/j.autcon.2010.06.007.
[11] V. Pătrăucean, I. Armeni, M. Nahangi, J. Yeung, I. Brilakis, C. Haas, State of research in automatic as-built modeling, Adv. Eng. Inform. (2015) 162-171, https://doi.org/10.1016/j.aei.2015.01.001.
[12] G. Pintore, C. Mura, F. Ganovelli, L. Fuentes-Perez, R. Pajarola, E. Gobbetti, State-of-the-art in Automatic 3D Reconstruction of Structured Indoor Environments, EUROGRAPHICS 39 (2) (2020), https://doi.org/10.1111/cgf.14021.
[13] H. Son, C. Kim, Y. Turkan, Scan-to-BIM - an overview of the current state of the art and a look ahead, Proc. Int. Symp. Automation Robotics Construct. 32 (2015) 1-8, https://doi.org/10.22260/ISARC2015/0050.
[14] M. Kalantari, M. Nechifor, Accuracy and utility of the structure sensor for collecting 3D indoor information, Geospatial Information Science 19 (2016) 202-209, https://doi.org/10.1080/10095020.2016.1235817.
[15] S. Nikoohemat, M. Peter, S. Oude Elberink, G. Vosselman, Semantic interpretation of mobile laser scanner point clouds in indoor scenes using trajectories, Remote Sens. 10 (2018) 1754, https://doi.org/10.3390/rs10111754.
[16] T. Meyer, A. Brunn, U. Stilla, Change detection for indoor construction progress monitoring based on BIM, point clouds and uncertainties, Autom. Constr. 141 (2022) 104442, https://doi.org/10.1016/j.autcon.2022.104442.
[17] R. Maalek, D.D. Lichti, J.Y. Ruwanpura, Automatic recognition of common structural elements from point clouds for automated progress monitoring and dimensional quality control in reinforced concrete construction, Remote Sens. 11 (2019) 1102, https://doi.org/10.3390/rs11091102.
[18] Z. Zhenhua, B. Ioannis, Comparison of optical sensor-based spatial data collection techniques for civil infrastructure modeling, J. Comput. Civ. Eng. 23 (2009) 170-177, https://doi.org/10.1061/(ASCE)0887-3801(2009)23:3(170).
[19] S. Soudarissanane, R. Lindenbergh, M. Menenti, P. Teunissen, Scanning geometry: influencing factor on the quality of terrestrial laser scanning points, ISPRS J. Photogramm. Remote Sens. 66 (2011) 389-399, https://doi.org/10.1016/j.isprsjprs.2011.01.005.
[20] M. Golparvar-Fard, J. Bohn, J. Teizer, S. Savarese, F. Pena-Mora, Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques, Autom. Constr. 20 (2011) 1143-1155, https://doi.org/10.1016/j.autcon.2011.04.016.
[21] D. Wujanz, M. Burger, F. Tschirschwitz, T. Nietzschmann, F. Neitzel, T.P. Kersten, Determination of intensity-based stochastic models for terrestrial laser scanners utilising 3D-point clouds, Sensors 18 (2018) 7, https://doi.org/10.3390/s18072187.
[22] K. Tan, W. Zhang, F. Shen, X. Cheng, Investigation of TLS intensity data and distance measurement errors from target specular reflections, Remote Sens. 10 (2018) 1077, https://doi.org/10.3390/rs10071077.
[23] K.R. Koch, Evaluation of uncertainties in measurements by Monte Carlo simulations with an application for laser scanning, J. Appl. Geod. 2 (2008), https://doi.org/10.1515/JAG.2008.008.
[24] D.D. Lichti, Error modeling, calibration and analysis of an AM-CW terrestrial laser scanner system, ISPRS J. Photogramm. Remote Sens. 61 (5) (2007) 307-324, https://doi.org/10.1016/j.isprsjprs.2006.10.004.
[25] C. Holst, T. Medic, H. Kuhlmann, Dealing with systematic laser scanner errors due to misalignment at area-based deformation analyses, J. Appl. Geod. 12 (2) (2018) 169-185, https://doi.org/10.1515/jag-2017-0044.
[26] C. Mura, O. Mattausch, A.J. Villanueva, E. Gobbetti, R. Pajarola, Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts, Comput. Graph. 44 (2014) 20-32, https://doi.org/10.1016/j.cag.2014.07.005.
[27] K. Khoshelham, L. Díaz-Vilari ˜
no, 3D modeling of interior spaces: learning the
language of indoor architecture, Int. Arch. Photogramm. Remote. Sens. Spat. Inf.
Sci. 40 (2014) 321326, https://doi.org/10.5194/isprsarchives-XL-5-321-2014.
[28] M. Previtali, L. Díaz-Vilari ˜
no, M. Scaioni, Indoor building reconstruction from
occluded point clouds using graph-cut and ray-tracing, Appl. Sci. 8 (2018) 1529,
https://doi.org/10.3390/app8091529.
[29] J. Jung, C. Stachniss, S. Ju, J. Heo, Automated 3D volumetric reconstruction of
multiple-room building interiors for as-built BIM, Adv. Eng. Inform. 38 (2018)
811825, https://doi.org/10.1016/j.aei.2018.10.007.
[30] F. Yang, G. Zhou, F. Su, X. Zuo, L. Tang, Y. Liang, H. Zhu, L. Li, Automatic indoor
reconstruction from point clouds in multi-room environments with curved walls,
Sensors 19 (2019) 17, https://doi.org/10.3390/s19173798.
[31] X. Xiong, A. Adan, B. Akinci, D. Huber, Automatic creation of semantically rich 3D
building models from laser scanner data, Autom. Constr. 31 (2013) 325–337,
https://doi.org/10.1016/j.autcon.2012.10.006.
[32] B. Quintana, S. Prieto, A. Adán, A.S. Vázquez, Semantic scan planning for indoor
structural elements of buildings, Adv. Eng. Inform. 30 (2016) 643–659,
https://doi.org/10.1016/j.aei.2016.08.003.
[33] M. Bassier, R. Klein, B. Van Genechten, M. Vergauwen, IFC wall reconstruction
from unstructured point clouds, in: ISPRS Annals of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, IV-2, 2018, pp. 33–39,
https://doi.org/10.5194/isprs-annals-IV-2-33-2018.
[34] S. Ochmann, R. Vock, R. Klein, Automatic reconstruction of fully volumetric 3D
building models from oriented point clouds, ISPRS J. Photogramm. Remote Sens.
151 (2019) 251–262, https://doi.org/10.1016/j.isprsjprs.2019.03.017.
[35] M. Jarząbek-Rychard, H.G. Maas, Automatic enrichment of indoor 3D models using
a deep learning approach based on single images with unknown camera poses,
ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. VIII-5/W1-2022 (2022) 1–7,
https://doi.org/10.5194/isprs-annals-VIII-5-W1-2022-1-2022.
[36] M. Previtali, L. Díaz-Vilariño, M. Scaioni, Towards automatic reconstruction of
indoor scenes from incomplete point clouds: door and window detection and
regularization, Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. XLII-4 (2018)
507–514, https://doi.org/10.5194/isprs-archives-XLII-4-507-2018.
[37] H. Tran, K. Khoshelham, Procedural reconstruction of 3D indoor models from lidar
data using reversible jump Markov chain Monte Carlo, Remote Sens. 12 (2020)
838, https://doi.org/10.3390/rs12050838.
[38] M.E. Esfahani, C. Rausch, M.M. Sharif, Q. Chen, C. Haas, B.T. Adey, Quantitative
investigation on the accuracy and precision of scan-to-BIM under different
modeling scenarios, Autom. Constr. 126 (2021), 103686, https://doi.org/10.1016/
j.autcon.2021.103686.
[39] J. Xiao, Y. Furukawa, Reconstructing the world's museums, Int. J. Comput. Vis.
110 (2014) 243–258, https://doi.org/10.1007/s11263-014-0711-y.
[40] S. Becker, M. Peter, D. Fritsch, Grammar-supported 3D indoor reconstruction from
point clouds for as-built BIM, in: ISPRS Annals of the Photogrammetry, Remote
Sensing and Spatial Information Sciences II-3/W4, 2015, pp. 17–24,
https://doi.org/10.5194/isprsannals-II-3-W4-17-2015.
[41] H. Tran, K. Khoshelham, A. Kealy, L. Díaz-Vilariño, Extracting topological relations
between indoor spaces from point clouds, in: ISPRS Annals of the Photogrammetry,
Remote Sensing and Spatial Information Sciences IV-2/W4, 2017, pp. 401–406,
https://doi.org/10.5194/isprs-annals-IV-2-W4-401-2017.
[42] L. Díaz-Vilariño, K. Khoshelham, J. Martínez-Sánchez, P. Arias, 3D modeling of
building indoor spaces and closed doors from imagery and point clouds, Sensors 15
(2015) 3491–3512, https://doi.org/10.3390/s150203491.
[43] C. Mura, O. Mattausch, R. Pajarola, Piecewise-planar reconstruction of multi-room
interiors with arbitrary wall arrangements, Comput. Graphics Forum 35 (7) (2016)
179–188, https://doi.org/10.1111/cgf.13015.
[44] H. Macher, T. Landes, P. Grussenmeyer, From point clouds to building information
models: 3D semi-automatic reconstruction of indoors of existing buildings, Appl.
Sci. 7 (2017) 1030, https://doi.org/10.3390/app7101030.
[45] V.V. Lehtola, H. Kaartinen, A. Nüchter, R. Kaijaluoto, A. Kukko, P. Litkey,
E. Honkavaara, T. Rosnell, M.T. Vaaja, J.P. Virtanen, M. Kurkela, Comparison of
the selected state-of-the-art 3D indoor scanning and point cloud generation
methods, Remote Sens. 9 (2017) 796, https://doi.org/10.3390/rs9080796.
[46] H. Fang, F. Lafarge, C. Pan, H. Huang, Floorplan generation from 3D point clouds:
A space partitioning approach, ISPRS J. Photogramm. Remote Sens. 175 (2021)
44–55, https://doi.org/10.1016/j.isprsjprs.2021.02.012.
[47] S. Tang, X. Li, X. Zheng, B. Wu, W. Wang, Y. Zhang, BIM generation from 3D point
clouds by combining 3D deep learning and improved morphological approach,
Autom. Constr. 141 (2022) 104422, https://doi.org/10.1016/j.
autcon.2022.104422.
[48] U.S. Institute of Building Documentation, Level of Accuracy Specification 3.0,
https://www.usibd.org, 2019 (last access: 2023-05-15).
[49] J. Meidow, C. Beder, W. Foerstner, Reasoning with uncertain points, straight
lines, and straight line segments in 2D, ISPRS J. Photogramm. Remote Sens. 64 (2)
(2009) 125–139, https://doi.org/10.1016/j.isprsjprs.2008.09.013.
[50] D. Iwaszczuk, U. Stilla, Camera pose refinement by matching uncertain 3D building
models with thermal infrared image sequences for high quality texture extraction,
ISPRS J. Photogramm. Remote Sens. 132 (2017) 33–47, https://doi.org/10.1016/j.
isprsjprs.2017.08.006.
[51] M. Jarząbek-Rychard, H.G. Maas, Uncertainty modeling for point cloud-based
automatic indoor scene reconstruction by strict error propagation analysis, in: The
International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences 43, 2022, https://doi.org/10.5194/isprs-archives-XLIII-B2-
2022-395-2022.
[53] J.C. Clarke, Modelling Uncertainty: A Primer, Technical Report 2161/98,
Department of Engineering Science, University of Oxford, 1998.
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=e21d5aa5c03c48bc08133b26cd8e9df67b01fc9a (last access: 2023-05-15).
[54] S. Krantz, H. Parks, The Implicit Function Theorem, Modern Birkhäuser Classics,
2003. ISBN 0-8176-4285-4.
[55] M. Jarząbek-Rychard, A. Borkowski, 3D building reconstruction from ALS data
using unambiguous decomposition into elementary structures, ISPRS J.
Photogramm. Remote Sens. 118 (2016) 1–12, https://doi.org/10.1016/j.
isprsjprs.2016.04.005.