Mapping and Modelling Defect Data from UAV Captured Images to
BIM for Building External Wall Inspection
Yi Tana, Geng Lia, Ruying Caia, Jun Mab, Mingzhu Wangc*
a Key Laboratory for Resilient Infrastructures of Coastal Cities (Shenzhen University), Ministry of
Education
b Department of Urban Planning and Design, University of Hong Kong, Hong Kong, China.
c School of Architecture, Building and Civil Engineering, Loughborough University, United Kingdom.
*Corresponding author
Abstract
With the increase of service years, the external walls of high-rise buildings tend
to develop a variety of defects that pose serious safety risks. Traditional methods
for inspecting high-rise building external walls require inspectors to work at height and
identify defects manually, which is dangerous and inefficient. In recent years, there has
been an increasing trend of using unmanned aerial vehicles (UAV) for inspecting
building external walls, but how to manage the information obtained by the UAV is still
a problem. In addition, although building information modelling (BIM) with rich
geometric and semantic information has been applied in the construction engineering
industry, BIM models usually lack updated condition data of facilities. Therefore, this
paper presents a method for managing the inspection results of building external walls
by mapping defect data from UAV images to BIM models and modelling defects as
BIM objects. First, images of building external walls captured by a UAV are processed
and useful information, such as coordinates, is extracted. Considering the small scale
of single buildings, a simplified coordinate transformation approach is developed to
transform the locations of real-world defects into coordinates in the BIM model. Meanwhile,
a deep learning-based instance segmentation model is developed to detect defects in the
captured images and extract their features. In the end, the identified defects are
modelled as new objects with detailed information and mapped to the corresponding
location of the related BIM component. To validate its feasibility, the proposed method
was applied to a real office building, where the defects of the external walls were
successfully mapped to and integrated with the BIM model. This study is applicable to both
buildings and infrastructure, and is expected to facilitate structure inspection and
decision making in maintenance with integrated data of as-is condition and as-built
BIM.
Keywords: Unmanned aerial vehicle (UAV); Building information modelling (BIM);
Coordinate transformation; Mapping; Deep learning; Defect detection; Building
external wall; Wall inspection
1. Introduction
With the development of society, high-rise residential buildings and office
buildings have become an indispensable part of modern architecture. However, as
buildings age, their external walls develop hidden safety risks such as cracking and
detachment of finishes, which can seriously endanger public safety and cause economic
losses. If these safety problems are not identified early, the health condition of buildings
will deteriorate rapidly, leading to disastrous consequences [1]. It is therefore important
to inspect external walls and manage their defects so that effective maintenance measures
can be taken in time to eliminate potential safety hazards and prevent casualties and
economic losses. At present, external walls of buildings are
mostly inspected manually, where the inspectors need to work at height and identify
defects manually. Such a manual inspection approach suffers from high subjectivity and
risk, as well as low accuracy, leading to inefficiency in building inspection and
maintenance [2]. In addition, manual inspection requires more labor and time, and
consequently incurs higher costs. In order to realize safe and efficient inspection and to manage
the defects of the external walls, it is essential to develop a highly automated inspection
and management method.
In recent years, the research and development of unmanned aerial vehicles (UAVs)
has made great progress. UAVs are able to reach many places, such as bridges [3,4],
wind turbine rotor blades [5], tunnels [6,7] and offshore structures [8,9], which are difficult and
risky for human beings to reach. More importantly, UAVs can be equipped with sensors
such as cameras and lasers to capture data remotely, eliminating manual data collection
by humans in dangerous environments. Furthermore, the low installation and
maintenance costs and the rapid deployment in most open areas make UAVs an
attractive consideration in many industry branches, including emergency services,
precision agriculture and nature preservation [10]. Due to such characteristics, there
have also been several studies applying UAVs to the inspection of external walls. Pan et al.
[1] and Peng et al. [11] used UAVs equipped with infrared sensors to inspect building
external walls, avoiding manual operations at height and the associated personnel risk.
Nevertheless, most studies focused on automated inspection of wall condition based on
data captured by UAVs, and few investigated methods to efficiently manage
the collected or interpreted data after inspection.
Building information modelling (BIM) is the digital expression of the physical and
functional characteristics of a facility [12], which is shared and transmitted through the
whole project life cycle, from planning to operation and maintenance, through the
integration of building data and the information model. As a variety of data are commonly
generated by or related to different activities during the project lifecycle, integrating these
data with BIM has been recognized as one important way to better manage the data and
utilize them for project management. Previous studies have investigated the integration of
BIM models with different types of data, such as safety monitoring data, structural
geometry and material properties [13], to effectively manage them throughout the
project. Similarly,
mapping or integrating condition data to BIM models will potentially provide a more
comprehensive understanding of the condition of specific components and facilitate
decision making in maintenance planning. However, most detected defect
information is only used to generate reports and is not stored in a digital model for
systematic, object-oriented management. At the same time, although BIM has been
widely applied in management, mapping between the real world and the BIM model
to manage the as-is state of buildings or structures remains rare. Combining the
object-oriented management of BIM with real buildings or structures can further
promote the application of nondestructive testing and improve detection and
management efficiency. Therefore, it is necessary and feasible to make use of the
object-oriented management capabilities of BIM to map real-world detection
results to the BIM model for integrated management.
Based on the above ideas, this paper proposes a method for integrating inspection
data from UAV with BIM models to locate and manage the defects of building external
walls. The proposed method first captures images of building external walls using a
UAV and applies a deep learning-based detection approach to automatically detect
the defects in images, based on which defect features, e.g. location and size, are
obtained. Afterwards, a coordinate transformation method is developed to transform the
real-world coordinates of defects in images to BIM coordinates. In the end, all the
detected defects are mapped into corresponding BIM components, along with specific
condition data. The fusion of as-is condition data with the BIM model can help assess
building condition and facilitate maintenance planning.
After the introduction, this paper will unfold as follows: Section 2 reviews related
literature in this area. Section 3 describes the proposed defect detection and mapping
method and key steps. Section 4 demonstrates the feasibility of the presented method
through a case study. The last section draws conclusions and puts forward the
limitations and avenues for future work.
2. Literature Review
2.1 UAV-based inspection of building and infrastructure
UAVs have great potential to transform the practice of safety management in various
industries, including civil engineering. Freimuth et al. [10] developed an application
that allows the operator to set the flight path and inspection plan of the UAV around the
building in the program interface, and preview the photos in the virtual environment to
reduce the risk of false inspection of photos. Melo et al. [14] investigated the feasibility
of applying UAVs for safety inspection during construction and developed a program
for on-site safety inspection. These approaches use UAVs to take photos, but operators
are still required to check and analyze the photos. Bolourian et al. [15] investigated the
use of UAV equipped with LiDAR scanner as an effective tool for bridge inspection.
Garilli et al. [16] applied a convolutional neural network (CNN), based on UAV
photogrammetry, to detect pavement patterns. Perry et al. [17] used UAVs to capture
images of bridges and generated a 3D point-cloud model to record defect information.
Chen et al. [18] studied the use of UAVs flying around buildings to photograph facades,
and used GIS to create 2D models of building facades to support UAV-based facade
inspection. Other researchers have also studied the application of UAVs to mountainous
terrain detection [19,20], vegetation detection [21] and power grid inspection [22]. Tan
et al. [23] collected images of external walls by optimizing the UAV flight path,
further improving the automation of building external wall inspection.
These studies mainly focus on using UAV cameras to replace human eyes in field
operation, which reduces the risk of manual inspection.
The above studies show that UAVs have been utilized to assist manual inspection in
unsafe or hard-to-reach places, improving efficiency and safety. However, few
studies have addressed methods for effectively managing the obtained inspection
data after the inspection is performed.
2.2 Coordinate transformation
After the inspection results are obtained, mapping the inspection data into the
digital model is a potential way to manage the inspection results and take subsequent
measures more efficiently. During the mapping process, coordinate transformation is
an important step to transfer location of real-world inspection data (e.g. defects) to that
of the digital model.
Coordinate transformation describes the location of a spatial entity; it is the
process of transforming from one coordinate system to another by establishing the
correspondence between the two systems [24]. Coordinate
transformation is mainly applied in surveying and mapping, mechanical engineering
and other fields [25,26]. The use of the Helmert transformation was investigated to
improve the quality of a terrestrial reference frame in terms of coverage and density
[27,28]. Some researchers proposed closed-form transformations from geocentric
coordinates to geodetic and ellipsoidal coordinates, which can be applied
to any point on the earth [29,30]. In addition, algorithms were proposed for the
transformation of Cartesian coordinates into geodetic coordinates [31,32]. Wu et al. [33]
compared the total least squares and least squares for four- and seven-parameter model
coordinate transformation, while Yan et al. [34] proposed the design of a wave shifter with
a controllable exit direction based on coordinate transformation theory. Moreover,
Yang et al. [35] investigated blade imbalance fault diagnosis of doubly fed wind
turbines based on current coordinate transformation. Gu et al. [36] proposed a
coordinate transformation algorithm under arbitrary rotation parameters and applied it in
high-speed railway measurement.
So far, the application of coordinate transformation to inspection in construction
engineering is still limited, especially for single buildings. Therefore,
this study proposes applying coordinate transformation to fuse defect data of building
external walls with the BIM model, so that the data can be better managed and utilized.
2.3 BIM-based management
BIM can provide comprehensive data of each component as well as 3D
visualization, hence can greatly improve building management efficiency. BIM-based
applications or systems have been developed for different types of tasks in building
management. Cortés-Pérez et al. [37] investigated BIM-integrated management of
occupational hazards in building construction and maintenance, which can improve the
working conditions at construction sites and during building maintenance. Nguyen et
al. [38] proposed decomposing and managing the volume of high-rise building walls with
BIM to improve construction quality, reduce errors in engineering quantity
acceptance and management, and accelerate and promote project success.
Al-Kasasbeh et al. [39] proposed an integrated decision support system for building asset
management based on BIM to address the systematization and coordination of lifecycle
data.
data. Álvarez et al. [40] explored BIM-based management to better control and
supervise road pavement and refurbishments at airports. Kassem et al. [41] and Patacas
et al. [42] established a requirements model for implementing BIM for facilities
management, and investigated the value of BIM and the challenges affecting its
adoption in facilities management. Godinho et al. [43] described the development of
BIM and recorded key parameters to enhance the capabilities of BIM as a useful
decision support tool within the heritage management framework. Chapman et al. [44]
proposed using BIM for underground applications to solve the problem of poor
underground information acquisition. Choi et al. [45] developed a formalized schema
to accomplish BIM-based benchmarking for healthcare projects. Kwon et al. [46]
utilized BIM and augmented reality to develop a defect management system for
reinforced concrete work. Wójcik et al. [47] used an RGB-D camera to obtain surface
defect information of bridges and embedded this information into the BIM model. Xu et
al. [48] used laser scanning to obtain the defects of prefabricated construction and
integrated the quality information into the BIM model.
It can be seen that studies on integrating BIM with other types of information,
especially condition or defect information that can be useful for facility maintenance,
remain limited. There is also a lack of research on BIM-based management of
inspection results.
3. The Proposed Defect Detection and Mapping Framework
Fig. 1 shows the framework for the inspection and management of building
external wall defects, based on automatic acquisition of image information by UAV
photogrammetry and integration with BIM. The proposed framework involves: 1)
preparatory work, data acquisition and processing, 2) coordinate transformation, 3)
detection and localization of defects, and 4) mapping the identified defects to the BIM
model.
Fig. 1. The proposed defect detection and mapping framework
3.1 Preparatory work and data acquisition
3.1.1 Preparatory work
The preparatory work includes three phases. Phase 1 is preliminary preparation,
which involves: 1) registering the UAV and obtaining the flight permission from the
competent department, 2) ensuring that the inspected building is not in a no-fly zone,
3) visiting and investigating the site to assess possible risk factors, such
as potential flight obstacles, electromagnetic interference, weak satellite signal, wind
exposure conditions, etc., and preparing corresponding countermeasures. Phase 2 is
setting the flight plan, which includes planning the flight path of the UAV, the
locations at which to capture images, the take-off and landing points, and the perpendicular
distance between the UAV and the target, etc. Among these, the perpendicular distance
between the UAV and the target is key because it affects the size of the FOV and the
calculation of locations. The perpendicular distance is determined according to the
environmental factors of the site, the constraints of the UAV and the practicability of
the image data.
In addition, the selected UAV must meet a series of requirements, namely:
a size suitable for the environment, stable flight capability, obstacle proximity sensors,
positioning accuracy via an RTK system, the possibility of automatic flight, compatibility
with different types of cameras or equipment, high autonomy, high-resolution cameras and
high storage capacity [49].
Finally, the field of view (FOV) of the UAV camera, which is key to covering
the whole target wall, needs to be calculated. The perpendicular
distance between the UAV and the inspected wall has been determined, and it affects the
size of the FOV. Moreover, the basic parameters of the UAV camera also need to be
obtained, including the size and focal length of the camera sensor. An example of the
FOV is shown in Fig. 2(a), while the relationship between the parameters of the UAV
camera and the FOV is shown in Fig. 2(b) and Fig. 2(c) respectively.
Fig. 2. (a) The FOV of the UAV camera,
(b), (c) Relationship between parameters of the UAV camera and the FOV
The width and height of the FOV can be calculated using Eq. (1) and Eq. (2)
respectively.

$F_w = \dfrac{S_w \cdot D}{f}$  (1)

$F_h = \dfrac{S_h \cdot D}{f}$  (2)
where, Fw is the width of the FOV, Fh is the height of the FOV, Sw is the width of the
camera sensor, Sh is the height of the camera sensor, D is the perpendicular distance
between the UAV camera and the wall, and f is the focal length of the camera.
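As an illustrative sketch (not the authors' code), Eqs. (1) and (2) can be implemented directly; the sensor size, focal length and flight distance below are assumed values for illustration only:

```python
def fov_size(sensor_w_mm, sensor_h_mm, focal_mm, distance_m):
    """Width and height of the camera field of view (Eqs. (1)-(2)).

    By similar triangles, the wall area imaged at perpendicular
    distance D scales the sensor dimensions by D / f."""
    fov_w = sensor_w_mm * distance_m / focal_mm
    fov_h = sensor_h_mm * distance_m / focal_mm
    return fov_w, fov_h

# Example: a 13.2 mm x 8.8 mm (1-inch) sensor with an 8.8 mm focal
# length, flown 10 m from the wall, covers roughly 15 m x 10 m.
w, h = fov_size(13.2, 8.8, 8.8, 10.0)
```

Note that the millimetre units of the sensor size and focal length cancel, leaving the FOV in the same unit as the distance (metres).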
Phase 3 of the preparatory work is processing the BIM model of the inspected
building according to the FOV of the camera. In the BIM model, the inspected external
wall is segmented according to the FOV, and the center point of each segmented area is
extracted using Dynamo, a visual programming application for Revit that integrates
different functional modules to access BIM data. The Dynamo-based extraction is
shown in Fig. 3. First, “Select Faces”, a function module in Dynamo, is used to select
the external walls (“Surface”) of the BIM model, and the four corner points of the wall are
extracted by “Curve.StartPoint”. Then, a function module “Select BasePoint” created
by the authors through programming is used to select one of the corners as the base point of
the model's external wall. After that, the size of the wall is obtained through
“Element.GetParameterValueByName” along with “Geometry.Translate”. Finally, the center
point of each segmented area is obtained by “Polygon.Center”, as shown in Fig. 4.
Fig.3. The workflow of using Dynamo to extract center point of segmented wall area
Fig. 4. Process of obtaining the center point of the segmented area
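Outside Dynamo, the same segmentation logic can be sketched in plain Python; the wall and FOV dimensions below are illustrative assumptions:

```python
import math

def segment_centers(wall_w, wall_h, fov_w, fov_h):
    """Split a wall into a grid of FOV-sized areas and return the
    center point of each area, measured from the wall's base corner."""
    cols = math.ceil(wall_w / fov_w)   # areas per row
    rows = math.ceil(wall_h / fov_h)   # rows of areas
    return [(fov_w * (c + 0.5), fov_h * (r + 0.5))
            for r in range(rows) for c in range(cols)]

# A 30 m x 20 m wall covered by 15 m x 10 m FOVs yields 4 centers.
centers = segment_centers(30, 20, 15, 10)
```

When the wall size is not an exact multiple of the FOV, the last row or column of centers extends past the wall edge; in practice such overlap would be handled in the flight plan.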
3.1.2. Data acquisition
After the preparatory work, UAVs are used to capture data of building external
walls. While photographing the target wall, the flight plane of the UAV is kept
parallel to the wall, and the camera lens carried by the UAV is pointed directly at the
wall, to prevent the image deviation caused by tilted (top or bottom) views of the lens
and the resulting offset of the corresponding center point. Image collection must be
performed safely, but the camera should be kept as close to the wall as possible, since
shorter distances improve image resolution. An example of the locations of the UAV
photographing points and the center points of the corresponding wall areas in the FOV
is illustrated in Fig. 5.
Fig. 5. The location of the UAV and the center of the wall
After the UAV captures data of all the target walls at the photographing points along
the preset path, the captured images are stored in a file in JPEG format in the order of
photographing. These images all contain EXIF (Exchangeable Image File) data, which
record the attribute information and photographing data of the images, such as
resolution, GPS data, camera specifications, etc. An algorithm is then developed to
automatically extract related information by reading the images in the file and
retrieving each image's serial number and GPS data, including longitude, latitude and
altitude. The extracted information is automatically compiled into a table of coordinate
data (i.e. the GPS information) in sequence. However, the generated GPS data only
represent the photographing locations of individual points. It is therefore necessary to
calculate the coordinates of the center point of the wall in each FOV (i.e. each
segmented wall area) from the obtained UAV GPS data and the relevant positional
relationship.
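The paper's extraction algorithm is not published; as a sketch, EXIF GPS tags store latitude and longitude in degree-minute-second form, which can be converted to the decimal degrees used in later calculations as follows (the function name and argument layout are assumptions):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style degree/minute/second GPS value to
    decimal degrees; southern latitudes and western longitudes
    ('S', 'W') are returned as negative values."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# 22 deg 32' 18.0" N as a decimal latitude
lat = dms_to_decimal(22, 32, 18.0, "N")
```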
3.2 Coordinate transformation
The coordinate transformation process of this study is as follows. First, the
obtained coordinates of UAV images are transformed to the corresponding coordinates
of the center points of the segmented wall areas; this transformation is within the
WGS-84 coordinate system. Then, using the proposed method, the coordinates of the
center points of the segmented wall areas in WGS-84 are transformed to plane
coordinates, which are in turn transformed to BIM coordinates, as shown in Fig. 6.
Fig. 6. The proposed process for transforming coordinates in UAV images to
coordinates in BIM models
3.2.1. Calculation of the location of the wall
A method is proposed to obtain the location of the center point of each segmented
wall area, using the following steps.
Step 1: a UAV photographing point “A” with known GPS data is selected, and the
point “B” on the wall perpendicular to the photographing point is identified.
Afterwards, the longitude and latitude of point “B” on the wall are measured using a
GPS locator. The purpose of obtaining the locations of these two corresponding points
is to calculate their positional relationship. As shown in Fig. 7, A is the location
of the UAV and B is the corresponding point on the wall, where
line AB is perpendicular to the wall.
Fig. 7. An example of the UAV location point “A”, and segmented area wall center
point “B”
Step 2: after obtaining the coordinates of “A” and “B”, as well as the perpendicular
distance between the UAV and the wall, the azimuth between the flight plane of the
UAV and the wall is calculated, using Eq.(3), Eq. (4), and Eq. (5).
$y = \sin(\lambda_B - \lambda_A)\,\cos\varphi_B$  (3)

$x = \cos\varphi_A \sin\varphi_B - \sin\varphi_A \cos\varphi_B \cos(\lambda_B - \lambda_A)$  (4)

$\alpha = \arctan\!\left(\dfrac{y}{x}\right)$  (5)

where $\alpha$ is the azimuth, $\lambda_A$, $\varphi_A$ are the longitude and latitude of point “A”
respectively, and $\lambda_B$, $\varphi_B$ are the longitude and latitude of point “B” respectively.
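The steps above can be sketched as follows; this uses the standard forward-azimuth formula between two GPS points, which is one common reconstruction of Eqs. (3)-(5):

```python
import math

def azimuth_deg(lon_a, lat_a, lon_b, lat_b):
    """Forward azimuth from point A to point B in degrees,
    measured clockwise from north; inputs in decimal degrees."""
    la, pa = math.radians(lon_a), math.radians(lat_a)
    lb, pb = math.radians(lon_b), math.radians(lat_b)
    y = math.sin(lb - la) * math.cos(pb)
    x = (math.cos(pa) * math.sin(pb)
         - math.sin(pa) * math.cos(pb) * math.cos(lb - la))
    return math.degrees(math.atan2(y, x)) % 360.0

# A point due east of the UAV gives an azimuth close to 90 degrees.
az = azimuth_deg(114.0, 22.5, 114.001, 22.5)
```

Using `atan2` instead of a plain arctangent places the azimuth in the correct quadrant regardless of the signs of x and y.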
Step 3: as the flight plane of the UAV is always parallel to the wall, the
perpendicular distance and the angle between each photographing point of the UAV and
the wall are the same for all points; hence the distance between the UAV and the wall
remains unchanged, and the camera lens carried by the UAV always faces the wall.
Therefore, the coordinates of the center of the wall in the FOV of each photographing
point can be deduced from the known distance between the UAV and the wall as well
as the calculated azimuth. The coordinates of the segmented area wall center points are
calculated by Eqs. (6)-(10).

$d_N = L\cos\alpha$  (6)

$d_E = L\sin\alpha$  (7)

$\Delta\varphi = \dfrac{d_N}{R}\cdot\dfrac{180}{\pi}$  (8)

$\Delta\lambda = \dfrac{d_E}{R\cos\varphi_A}\cdot\dfrac{180}{\pi}$  (9)

$\varphi_B = \varphi_A + \Delta\varphi,\quad \lambda_B = \lambda_A + \Delta\lambda$  (10)

where $L$ is the distance between the UAV and the wall, $\alpha$ is the azimuth between
the UAV and the wall, $R$ is the average radius of the earth (equal to 6,371,393 m),
$\lambda_A$, $\varphi_A$ are the longitude and latitude of the photographing point of the UAV
respectively, and $\lambda_B$, $\varphi_B$ are the longitude and latitude of the center point of the
segmented wall area respectively. It should be noted that in the above calculation, all
degree-minute-second values are converted to decimal degrees.
Step 4: since the calculation of the positional relationship between the UAV and the
wall is based on longitude and latitude only, and does not involve altitude, the
comparison between the table of UAV coordinates and the table of wall coordinates
only reflects the transformation of longitude and latitude; the altitude is unchanged. In
surveying and mapping, altitude is usually obtained by a joint survey of leveling
points and trigonometrical stations in the local area, performed by the survey department.
In this research, the altitude of each photographing point is known from the BIM model.
Therefore, the altitude of each wall coordinate record is assigned in order according to
the known altitude of the corresponding photographing point. The final GPS coordinates
of each center point of the segmented wall areas then match those in practice.
To automate the above steps, an algorithm is developed to perform calculations
automatically, after which the calculated wall coordinates are generated.
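A minimal sketch of the offset calculation in Steps 2-3, assuming the small-distance flat-earth approximation (this is a reconstruction, not the authors' published code):

```python
import math

R_EARTH = 6371393.0  # average earth radius used in the paper, metres

def wall_point(lon_a, lat_a, distance_m, azimuth):
    """Offset the UAV photographing point A by `distance_m` along
    `azimuth` (degrees clockwise from north) to get the wall center
    point B, using a small-distance flat-earth approximation."""
    az = math.radians(azimuth)
    d_north = distance_m * math.cos(az)
    d_east = distance_m * math.sin(az)
    dlat = math.degrees(d_north / R_EARTH)
    dlon = math.degrees(d_east / (R_EARTH * math.cos(math.radians(lat_a))))
    return lon_a + dlon, lat_a + dlat

# Moving 10 m due north changes only the latitude.
lon_b, lat_b = wall_point(114.0, 22.5, 10.0, 0.0)
```

The `cos(lat)` factor in the longitude term accounts for meridians converging toward the poles; over the tens of metres involved in a single building, the approximation error is negligible.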
3.2.2 WGS-84 to plane coordinate
As different coordinate systems are used by the UAV and BIM models, an
approach is required for coordinate transformation. The UAV uses the WGS-84
geodetic coordinate system, which is the standard model of the earth used in GPS. The
schematic diagram of the WGS-84 geodetic coordinate system is shown in Fig. 8 [50].
The origin of the WGS-84 coordinate system is at the Earth's center of mass; the
Z-axis points to the IERS Reference Pole, which coincides with the Conventional
Terrestrial Pole (CTP) defined by BIH 1984.0; and the X-axis points to the intersection
of the IERS Reference Meridian and the CTP equator. On the other hand, a BIM model
is generally
created with a local coordinate system by defining a project survey point as the
reference point. Therefore, in order to realize the interaction between the BIM model
and the real-world building, as well as to map the wall defects into BIM model for better
management, this study proposes an approach to transform the WGS-84 coordinate
system into the BIM model coordinate system. The approach first transforms longitude
and latitude coordinates of the real-world wall to plane coordinates that are then
transformed to BIM model coordinates.
Fig. 8. The schematic diagram of WGS-84 geodetic coordinate system
In the transformation from the WGS-84 coordinate system to the BIM model
coordinate system, the first step is to transform longitude and latitude into X and Y in a
plane rectangular coordinate system. In the domain of surveying and mapping, this
process is complicated: it requires transforming the WGS-84 coordinate system to a
national or regional coordinate system, and then to a projected coordinate system. The
Gauss-Kruger projection is often used for mapping to the plane rectangular coordinate
system, but the transformation formula of the standard Gauss-Kruger projection is very
complex. Moreover, the surface of the earth is approximated as an ellipsoid that cannot
be directly unfolded into a flat surface, hence lines of longitude and latitude are curved
and not parallel to each other, which makes the transformation more difficult. Such
methods are generally used in topographic mapping and drawing large-scale maps.
Considering that the targets of this research are single buildings, which cover a relatively
small area, it is possible to use straight lines instead of the curves of longitude and
latitude. Therefore, a simplified coordinate transformation method is developed.
In the proposed simplified model, since the area of interest is small, both the
longitudes and the latitudes of adjacent points can be treated as parallel to each other. A
plane rectangular coordinate system is specified: the X-axis is a line along the direction
of the latitude, and the Y-axis is a line along the direction of the longitude. Then, a known
point on the wall, denoted $O$ ($\lambda_O$, $\varphi_O$), is chosen as the origin of the plane
rectangular coordinate system, and the X and Y coordinates of any other point can be
determined according to its distance from the origin in longitude and latitude. The
transformation of the longitude and latitude coordinates of $O$ into plane rectangular
coordinates $(X_O, Y_O)$ is realized by Eqs. (11), (12).

$X_O = 0$  (11)

$Y_O = 0$  (12)

The transformation of the longitude and latitude of a point $P$ on the same
longitude as $O$ into plane rectangular coordinates $(X_P, Y_P)$ is shown in Eqs. (13),
(14).

$X_P = 0$  (13)

$Y_P = \dfrac{\pi b}{180}\,(\varphi_P - \varphi_O)$  (14)
The transformation of the longitude and latitude of a point $Q$ on the same latitude
as $O$ into plane rectangular coordinates $(X_Q, Y_Q)$ is shown in Eqs. (15), (16) and (17).

$e^2 = \dfrac{a^2 - b^2}{a^2}$  (15)

$N = \dfrac{a}{\sqrt{1 - e^2 \sin^2 \varphi_O}}$  (16)

$X_Q = \dfrac{\pi}{180}\,N\cos\varphi_O\,(\lambda_Q - \lambda_O),\quad Y_Q = 0$  (17)

where $a$ equals 6,378,137 m, the semi-major axis of the ellipsoid model used in the
WGS-84 coordinate system, namely the equatorial radius of the earth; $b$ equals
6,356,752 m, the semi-minor axis of the ellipsoid model, namely the polar radius; and
$\varphi_O$, $\lambda_O$ are the latitude and the longitude of the origin $O$ respectively.
If the latitude or longitude of the origin and the point to be transformed are not the
same, the longitude and latitude of the point can be decomposed and projected onto the
corresponding lines respectively. As shown in Fig. 9, since the longitude and the latitude
of point “M” differ from those of the origin “O”, point “M” can be projected onto the same
longitude as origin “O”, i.e. point “P”. Similarly, point “M” can be projected onto the
same latitude as origin “O”, i.e. point “Q”, and then the above calculation is carried out.
Fig. 9. The location relation between point “M” and origin “O”
The above calculation process is automated so that the longitude and latitude of
each point on the wall can be automatically transformed into X and Y coordinates in
the plane rectangular coordinate system, and a new data table can be generated.
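A sketch of one possible simplified WGS-84-to-plane conversion, assuming the prime-vertical radius for east-west arcs and the semi-minor axis for north-south arcs; the paper's exact simplification may differ:

```python
import math

A = 6378137.0   # WGS-84 semi-major axis, metres
B = 6356752.0   # WGS-84 semi-minor axis, metres

def to_plane(lon, lat, lon0, lat0):
    """Convert (lon, lat) in decimal degrees to local plane
    coordinates (X east, Y north, in metres) relative to the
    origin (lon0, lat0), using a small-area flat-earth model."""
    e2 = (A**2 - B**2) / A**2                 # first eccentricity squared
    n = A / math.sqrt(1 - e2 * math.sin(math.radians(lat0))**2)
    x = math.radians(lon - lon0) * n * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * B
    return x, y

# The origin itself maps to (0, 0).
origin_xy = to_plane(114.0, 22.5, 114.0, 22.5)
```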
3.2.3 Plane coordinate to BIM coordinate
After transforming the longitude and latitude coordinates of the center point of each segmented wall area into plane rectangular coordinates, a method is developed to map the plane rectangular coordinate system to the BIM model coordinate system, so that the two coordinate systems can be connected. Although both the BIM model and the external wall coordinates obtained from the above transformation use Cartesian coordinate systems, the former utilizes a local project coordinate system. There are differences between the two, such as relative translation, rotation and scaling, which means the plane rectangular coordinates obtained cannot be used in the BIM model directly.
The developed transformation method works as follows.
Firstly, the plane rectangular coordinate system of the external wall is set as O-X_wY_w, and the BIM model coordinate system is set as O-X_bY_b. The coordinates of each center point of the segmented wall areas in the BIM model are extracted. Meanwhile, the plane rectangular coordinates of the wall points have been obtained in Section 3.2.2. Consequently, a series of corresponding points with known coordinates in these two coordinate systems is identified.
Then, three types of transformations between the two plane rectangular coordinate
systems are performed, namely: translation, rotation and scale transformation. The
specific transformation is realized through Eq. (18):

[x_b]   [k_x   0 ] [cos θ  -sin θ] [x_w]   [ΔX]
[y_b] = [ 0   k_y] [sin θ   cos θ] [y_w] + [ΔY]    (18)

Where (x_w, y_w) is a point in O-X_wY_w, i.e. the plane rectangular coordinate system of the external wall; (x_b, y_b) is the corresponding point in O-X_bY_b, i.e. the coordinate system of the BIM model; ΔX and ΔY are the translations in the directions of X and Y, respectively; k_x and k_y are the scale factors in the directions of X and Y, respectively; and [cos θ  -sin θ; sin θ  cos θ] is the rotation matrix.
Since Eq. (18) has a total of five parameters, i.e. five unknowns (ΔX, ΔY, k_x, k_y and θ), at least three common points with known coordinates in both systems are required. After the parameters of the transformation are solved, any point in the plane rectangular coordinate system of the external wall can be mapped to the BIM model through the equation.
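With at least three common points, the five parameters of Eq. (18) can be estimated by least squares. A sketch (illustrative, not the authors' implementation) first fits a general 2D affine transform and then reads the translations, per-axis scales and rotation angle off the fitted matrix:

```python
import numpy as np

def fit_plane_to_bim(src_pts, dst_pts):
    """Estimate translation, per-axis scale and rotation mapping wall-plane
    points (src) to BIM-model points (dst); needs >= 3 non-collinear pairs.
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    # Solve the affine model x_b = M @ x_w + t by linear least squares,
    # with parameters p = [M00, M01, M10, M11, tx, ty].
    A = np.zeros((2 * len(src), 6))
    A[0::2, 0:2] = src
    A[0::2, 4] = 1.0
    A[1::2, 2:4] = src
    A[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    M = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    # Read off M = diag(k_x, k_y) @ R(theta)
    kx = np.hypot(M[0, 0], M[0, 1])
    ky = np.hypot(M[1, 0], M[1, 1])
    theta = np.arctan2(M[1, 0], M[1, 1])
    return t, kx, ky, theta
```

Because the affine fit is linear, it solves in one step; the five-parameter structure is recovered afterwards from the fitted matrix.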
3.3 Detection and localization of defects
To map defects to the BIM model, the defects and their locations must first be detected in the collected images, after which they are mapped to the corresponding locations in the BIM model. As manual detection of defects from images is time-consuming and error-prone, automated interpretation methods using computer vision are desired [51,52]. Compared to
traditional image processing methods, deep learning approaches have obtained
outstanding performance for various computer vision tasks due to their capability of
extracting rich features automatically without manual intervention [53,54]. Therefore,
deep learning-based defect detection approaches are used in this study to detect defects
and extract their features from UAV images. Specifically, a deep learning model is
developed based on an object instance segmentation framework called Mask R-CNN
[55]. The reason for developing our model based on Mask R-CNN is that it generates both a pixel-level segmentation mask (i.e. the contour) and a bounding box for each defect, so that detailed information can be obtained, including the defect type, location, number and features (e.g. area, width and length). As shown in Fig. 10, the workflow of Mask R-CNN is similar to that of Faster R-CNN [56]: it first extracts image features through several CNN layers, after which a region proposal network (RPN) is trained to generate region proposals from which Regions of Interest (RoIs) are produced. The main difference is that Mask R-CNN adds a branch that predicts a segmentation mask for each RoI, in parallel to the existing classification and bounding box regression branches. One example of the defect detection results using Mask R-CNN is shown in Fig. 11. Each defect is identified with a bounding box indicating its type and general location, and is segmented with a colored mask that reflects its detailed features.
Fig. 10. Workflow of the Mask R-CNN [55]
Fig. 11. One example of the defects identified by Mask R-CNN
Following that, the dimensions of each image are automatically read. The width of the image is represented by W and the height by H, and the upper left corner of the image is taken as the origin, so that the coordinates of the image center are (x_c, y_c) = (W/2, H/2). Then, with the detection results from the deep learning model, the pixel coordinates of the defect contour and the features of the defect (e.g. length, width and shedding area) are extracted and exported to the database. Specifically, the exported data include the defect type, the x and y coordinates of all the pixels representing each defect, and the area of each defect.
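As an illustration of this feature-extraction step (a hypothetical helper, assuming the segmentation output is a per-defect binary mask):

```python
import numpy as np

def defect_features(mask):
    """Extract pixel-level features from one defect's binary mask
    (illustrative post-processing of an instance segmentation output).
    Returns pixel coordinates, area in pixels, and bounding extents.
    """
    ys, xs = np.nonzero(mask)            # pixel coordinates of the defect
    area_px = xs.size                    # defect area in pixels
    width_px = xs.max() - xs.min() + 1   # horizontal extent
    height_px = ys.max() - ys.min() + 1  # vertical extent
    return {"pixels": np.stack([xs, ys], axis=1),
            "area_px": area_px,
            "width_px": width_px,
            "height_px": height_px}
```

The returned dictionary corresponds to the records (type aside) exported to the database.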
With our method, the difference between the pixel coordinates of the defect contour and those of the image center point can be calculated automatically for both the x- and y-coordinates. The relationship between the pixel coordinates of the defect contour and the center point of the image is shown in Fig. 12. Δx = x_d - x_c is the difference in x-coordinate between a pixel of the defect contour and the image center, and Δy = y_d - y_c is the difference in y-coordinate between them. Since the upper left corner of the image is selected as the origin and the image y-axis points downwards, the negative of Δy is used in the subsequent calculation of the actual position of the defects. Afterwards, the real-world size and location of the defects are calculated according to the proportion between the image and the FOV, as introduced in Section 3.1. Specifically, the actual horizontal and vertical distances between each pixel of the defect contour and the center of the FOV are calculated. The calculated real-world locations of the defects are also automatically stored in the database, to be used for the subsequent mapping.
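The pixel-to-metric conversion described above can be sketched as follows (illustrative; fov_w and fov_h are the real-world FOV dimensions, 6 m × 4 m in the case study):

```python
def pixel_to_wall_offset(px, py, img_w, img_h, fov_w, fov_h):
    """Convert a defect pixel coordinate (origin at the image's upper-left
    corner) to a metric offset from the FOV center on the wall.
    """
    dx = px - img_w / 2.0                 # pixel offset from image center
    dy = py - img_h / 2.0
    # Scale by metres-per-pixel; negate dy because the image y-axis
    # points downwards while the wall y-axis points upwards.
    return dx * fov_w / img_w, -dy * fov_h / img_h
```

For a 960 × 640 image covering 6 m × 4 m, each pixel corresponds to 6.25 mm on the wall.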
Fig. 12. Relationship between the pixel of defect and the center point of image
3.4 Mapping defect data to BIM model
After calculation and coordinate transformation, the data of each defect are
generated in a database, including actual size and location information. An approach is
developed to integrate the data of the identified defects into the BIM model.
In this study, the defects of the building external wall are modelled in the BIM model using custom parameterized families, and the different defects are then represented by adjusting the family parameters. As the defects are independent of each other and have different pixel information, the "Metric Generic Model Adaptive" template is selected to create the defect families. As shown in Fig. 13, two reference points are created on the work plane to indicate the location of an identified defect pixel, and are connected to generate a piece of the defect component. Afterwards, the contour of each defect is modelled by connecting such components between each pair of points. The created family enables each defect to be modelled as a BIM object carrying defect features.
During the mapping process, the pixel information of each defect is retrieved from the
database and imported as family parameters for the defect object in BIM. In this way,
both the location and the size parameters of the defects are integrated with the BIM
model, which is convenient to visualize and manage the defects, such as to facilitate
subsequent maintenance.
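Preparing the placement data for the two-point adaptive families amounts to pairing consecutive contour points; a minimal illustrative helper (of the kind that could feed a Dynamo Python node):

```python
def contour_segments(points):
    """Group an ordered defect-contour point list into consecutive point
    pairs, one pair per two-point adaptive family instance (illustrative
    preprocessing for the family placement step).
    """
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]
```

Each returned pair supplies the two reference points of one family instance.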
Fig. 13. The created family for modelling each piece of defect
4. Case Study
This study presents a method of integrating external wall defect data with the BIM model. In order to verify the effectiveness of this method for inspecting building external walls, this study selects two external walls of the laboratory building of the College of Civil and Transportation Engineering of Shenzhen University, with dimensions of 23.2 m × 18.8 m. The picture of the building and the BIM model of the laboratory are shown
in Fig. 14 (a) and (b). Occlusion by surrounding buildings can lead to instability of the RTK signal, which greatly affects the positioning accuracy of the UAV. Therefore, this study applies the DJI Phantom 4 RTK, an industrial UAV shown in Fig. 14 (c), which can collect clear images while maintaining positioning accuracy, thus reducing the influence of environmental factors on localization at the laboratory building. The main parameters of the UAV are shown in Table 1.
According to the official information provided by DJI, when the UAV uses RTK GNSS the vertical positioning error is 1.5 cm + 1 ppm and the horizontal error is 1 cm + 1 ppm, where 1 ppm means the error grows by 1 mm for every 1 km the vehicle moves. As the detection object of this study is the external walls of a building with a small area (23.2 m × 18.8 m), the error generated by RTK itself can be ignored. To eliminate the potential impact of other environmental factors, the data are collected with the UAV on sunny days with negligible wind.
Fig. 14. (a) Real building, (b) BIM model
(c) The UAV for capturing data - DJI Phantom 4 RTK
Table 1. The main parameters of the DJI Phantom 4 RTK

Parameter | Value
Maximum takeoff weight | 1391 g
Duration of flight | About 30 minutes
Satellite positioning module | GPS + BeiDou + Galileo
Camera model | FC6310R
Sensor size | 13.2 mm × 8.8 mm
Maximum photo resolution | 5472 × 3078 (16:9); 4864 × 3648 (4:3); 5472 × 3648 (3:2)
Camera focal length | 8.8 mm
Due to the safety regulations for flying the UAV and the image quality requirements, the minimum vertical distance between the UAV and the wall shall not be less than 4 m. Therefore, the distance between the flight path of the UAV and the wall was always kept at 4 m in this study. From the parameters of the DJI Phantom 4 RTK camera, the sensor size is 13.2 mm × 8.8 mm and the focal length is 8.8 mm. Hence, the size of the FOV, i.e. the area of the wall photographed by the UAV in each image, is calculated as 6 m × 4 m from Eq. (1) and (2). Therefore, each wall is divided into several cells of 6 m × 4 m. Specifically, after calculating the size of the FOV, the contour of the external walls of the BIM model of the laboratory building is extracted, as shown in Fig. 15 (a). Then, according to the size of the FOV, the contour of the external walls is divided into 40 cells by Dynamo, as shown in Fig. 15 (b).
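The FOV values follow from the pinhole similar-triangle relation FOV = distance × sensor size / focal length, which is presumably what Eq. (1) and (2) encode; a quick check:

```python
def fov_size(distance_m, sensor_w_mm, sensor_h_mm, focal_mm):
    """Field of view on the wall from the pinhole similar-triangle
    relation: FOV = distance * sensor / focal (illustrative sketch).
    """
    return (distance_m * sensor_w_mm / focal_mm,
            distance_m * sensor_h_mm / focal_mm)
```

With a 4 m stand-off, a 13.2 mm × 8.8 mm sensor and an 8.8 mm lens, this yields the 6 m × 4 m cells used to grid the walls.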
Fig. 15. (a) The extracted contour of the external walls,
(b) The generated FOVs using Dynamo
4.1. Extraction of image data
First of all, according to the preset UAV path plan, the external walls of the laboratory building are photographed in sequence, and the images are stored in order. Afterwards, the GPS coordinates, image serial number and pixel size of each image are automatically extracted by the proposed Python-based approach. The extracted data are stored in a database, part of which is shown in Table 2.
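GPS positions in image EXIF metadata are stored as degree, minute and second rationals (readable, e.g., with Pillow's Image.getexif().get_ifd(0x8825)); a small illustrative helper converts them to the decimal degrees listed in Table 2:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style degree/minute/second GPS tag to decimal
    degrees; ref is 'N', 'S', 'E' or 'W'. Illustrative helper only.
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern latitudes and western longitudes are negative
    return -value if ref in ("S", "W") else value
```

The image serial number and pixel dimensions come from the same metadata block, so one pass over the files fills the database table.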
Table 2. The initial data extracted from the images

Number | Latitude | Longitude | Altitude (m) | Width (pixel) | Height (pixel)
1 | 22.52939756 | 113.93345568 | 18.2 | 960 | 640
2 | 22.52939756 | 113.93396197 | 18.2 | 960 | 640
… | … | … | … | … | …
31 | 22.52430740 | 113.93454614 | 10.6 | 960 | 640
32 | 22.52430740 | 113.93513032 | 10.6 | 960 | 640
The coordinates obtained above are the positions of the UAV, so the azimuth between the UAV and the wall is measured. Combined with the known distance between the two, the center point of each segmented wall area is calculated automatically in Python according to Eq. (6)-(10). Consequently, the calculated data are added to the database, as shown in Table 3. However, due to the deviation between the altitude values recorded in the captured images and the actual altitude, the altitude values need to be corrected. Since this study captured images along a preset flight path and sequence, the altitude values set during flight are assigned to the images in order, thereby correcting the altitude data. The corrected altitude values are shown in Table 4.
Table 3. The coordinates of the center point of each segmented wall area

Number | Latitude | Longitude | Altitude (m)
1 | 22.52903783 | 113.93345568 | 18.2
2 | 22.52903783 | 113.93396197 | 18.2
… | … | … | …
31 | 22.52466713 | 113.93454614 | 10.6
32 | 22.52466713 | 113.93513032 | 10.6
Table 4. The altitude values of the center points of the segmented wall areas before and after correction

Number | Initial altitude (m) | Corrected altitude (m)
1 | 18.2 | 16.8
2 | 18.2 | 16.8
… | … | …
31 | 10.6 | 8.8
32 | 10.6 | 8.8
4.2. Transformation of WGS-84 coordinates to plane coordinates
After obtaining the GPS coordinates of the external walls to be detected, they are further converted to plane coordinates. As introduced in Section 3.2.2, a simplified coordinate transformation method was proposed since the targeted building area is small. In order to verify the accuracy of the proposed coordinate transformation method, this study selected the start and end points of two straight line segments. The longitude and latitude coordinates of these four points (P1, P2, P3 and P4) and the lengths of the two straight line segments were measured in advance. Then, the longitude and latitude coordinates of the two point pairs were transformed into plane coordinates using our proposed method, and their distances after transformation were calculated again. As shown in Table 5, the actual measured distance between points P1 and P2 was 4.56 m, while the distance calculated after transformation to plane coordinates was 4.55735 m. Similarly, the measured distance between P3 and P4 was 9.96 m, while the calculated distance after transformation was 9.95885 m. The errors are thus at the millimeter level and can be neglected, indicating that the proposed simplified coordinate transformation method is effective. Therefore, the method is used to transform the center point coordinates of each segmented wall area.
Table 5. Distance comparison before and after coordinate transformation
(Latitude and longitude are in the GPS coordinate system; X, Y and the calculated distance are in the plane coordinate system.)

Number | Latitude | Longitude | Measured distance (m) | X | Y | Calculated distance (m)
P1 | 22.52889 | 113.93345 | 4.56 | 5894295.52340 | 2428612.906988 | 4.55735
P2 | 22.52889 | 113.93340 |  | 5894290.96605 | 2428612.906988 |
P3 | 22.52938 | 113.93340 | 9.96 | 5894274.91827 | 2428662.580697 | 9.95885
P4 | 22.52929 | 113.93340 |  | 5894278.73407 | 2428653.381867 |
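The accuracy check in Table 5 is a plain Euclidean-distance computation on the transformed plane coordinates:

```python
import math

def plane_distance(p, q):
    """Euclidean distance between two points in the plane rectangular
    coordinate system (used to check the transformation accuracy)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Plane coordinates of the four check points from Table 5
P1 = (5894295.52340, 2428612.906988)
P2 = (5894290.96605, 2428612.906988)
P3 = (5894274.91827, 2428662.580697)
P4 = (5894278.73407, 2428653.381867)
```

plane_distance(P1, P2) gives 4.55735 m and plane_distance(P3, P4) gives 9.95885 m, matching the tape-measured 4.56 m and 9.96 m to within millimeters.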
4.3. Transformation from plane coordinates to BIM coordinates
After the series of coordinate transformations, all the real-world locations on the external walls had corresponding points in the BIM model. In the end, the locations of the points were transformed into BIM model coordinates automatically using the method introduced in Section 3.2.3. Part of the generated data is shown in Table 6.
Table 6. Coordinates of the center point of each segmented wall area in the BIM model

Number | X | Y | Z
1 | -56867.64568 | -33837.85914 | 16800.0000
2 | -51669.14568 | -33837.85914 | 16800.0000
… | … | … | …
31 | -45669.14568 | 14563.64086 | 8800.0000
32 | -39669.14568 | 14563.64086 | 8800.0000
4.4. Integrating defect data with the BIM model
After coordinate transformation, the defects in each image captured by the UAV were detected automatically using a well-trained deep learning model. Since cracks are among the most common defects of external wall surfaces, this study trained a deep learning model based on Mask R-CNN with 17,631 crack images, 90% of which were used for training and 10% for validation. There is no overlap between the training dataset and the final testing dataset. The loss during training is shown in Fig. 16, and convergence was finally achieved. In the end, the trained model was applied to detect cracks and extract their features from the UAV images of our case study.
Fig. 16. The loss in the training process of the deep learning model
Fig. 17 illustrates part of the cracks identified in the images of the external wall of the laboratory building. Each crack is detected and segmented with a bounding box and a probability score. The features (i.e. size and location information) of the cracks are extracted from the detection results; part of the pixel coordinates of the cracks is shown in Table 7. Then, the distance between each pixel coordinate of the crack contour and the center point of the segmented wall area is calculated. Finally, based on the proportional relationship between the image size and the actual wall size, the actual sizes and locations of the cracks were calculated and stored in the database.
Fig. 17. Cracks identified using the well-trained deep learning model
Table 7. The pixel coordinates of the cracks

Number | x | y
1 | 880 | 175.5
  | 879 | 175.5
  | 878.5 | 175
  | 878 | 174.5
2 | 365 | 188.5
  | 364 | 188.5
  | 363.5 | 188
  | 363.5 | 187
3 | 879 | 61.5
  | 878 | 61.5
  | 877 | 61.5
  | 876 | 61.5
4 | 550 | 86.5
  | 549 | 86.5
  | 548 | 86.5
  | 547 | 86.5
Finally, the previously generated defect information was extracted from the database by Dynamo. According to the location information of the pixels of each crack, each crack was modelled as a family instance at the corresponding location in the BIM model, as shown in Fig. 18 (a), and the width and length parameters of the cracks were assigned when creating the family, as illustrated in Fig. 18 (b). Consequently, all the defects were mapped and generated at their corresponding locations in the BIM model with detailed information, which can help evaluate the building condition and plan maintenance activities. One example of the results in our case study is shown in Fig. 19: the cracks of the external wall detected in the UAV images are modelled as BIM objects, as shown in Fig. 19 (a), and a comparison between the cracks in the BIM model and those in the image is shown in Fig. 19 (b). In this way, facility managers can check defects of specific building components efficiently and combine them with contextual building information to support decision making in maintenance.
Fig. 18. (a) Using Dynamo to extract the coordinates of crack pixels
(b) Using Dynamo to extract the size information of cracks and assign parameters to families of cracks
Fig. 19. (a) Cracks mapped and modelled in the BIM model,
(b) The comparison between the cracks in the image and that in the BIM model
In this case, the time consumed by the UAV to collect the images of one external wall of the laboratory building was 66 seconds, showing that the method can effectively improve inspection efficiency. At the same time, compared with the traditional manual inspection method, this method reduces the risk of working at height. Compared with manually measuring the location of surface defects and estimating their size, the deep learning model and the mapping method proposed in this study can identify defects at the centimeter level, yielding more accurate results. More importantly, the defect information from each inspection is stored, so a large amount of defect data accumulated over a long period can be obtained when regular inspections are performed. Based on the defect data at different inspection times, the deterioration patterns of the defects can be analyzed and modelled, which can help predict the future development of defects and enable preventive measures to avoid structural failure.
5. Conclusions
This study proposed an intelligent method to detect defects of building external walls and map them to the BIM model for management. UAVs are utilized to capture the as-is condition of building walls, and related information is extracted from the captured images. Afterwards, a simplified coordinate transformation approach is proposed to transform the locations of defects into coordinates in the BIM model. Meanwhile, a deep learning-based detection model is developed to automatically identify and localize defects in the images. In the end, defects are automatically modelled as BIM family objects using Dynamo. The modelled defects carry specific information and are visualized at the corresponding locations in the BIM model, allowing inspectors to efficiently evaluate structural condition and plan maintenance works.
The contributions of this study are as follows: (1) a generic framework is proposed for defect detection and management of building external walls based on UAV and BIM, which is also applicable to the assessment of other structures; (2) a coordinate transformation method is developed to map real-world data to the BIM model with relatively high accuracy, so that the surface defects of the building external wall are mapped to the BIM model; the method can be applied not only to single buildings but also to other infrastructure; and (3) surface defects of structures are modelled as BIM objects with detailed features, which enables object-oriented management in BIM to be applied to systematically manage the defects, facilitating wall condition assessment and supporting decision making in maintenance planning.
Nevertheless, the presented method still has limitations: (1) the method has specific requirements for building shapes: the walls to be inspected should be flat, without arcs or concave and convex features, and only rectangular buildings with 90-degree corners are considered; (2) occlusion of the RTK signal needs to be considered when acquiring defect locations, which imposes requirements on the location and environment of the buildings; once the RTK signal cannot be received due to the building itself or the surrounding environment, the positioning accuracy will be greatly reduced. Therefore, future work will focus on handling various shapes of building external walls for defect detection and positioning, and on improving the RTK signal reception of the UAV under environmental interference.
Acknowledgement
This research is supported by Foundation for Distinguished Young Talents in
Higher Education of Guangdong, China (FDYT) (No. 2020KQNCX060) and the
Foundation for Basic and Applied Basic Research of Guangdong Province (No.
2020A1515111189). The authors also thank Kairui Zheng for helping to create the BIM
model of the laboratory building of College of Civil and Transportation Engineering,
and Silin Li for setting the UAV flight path.
References
[1] N.H. Pan, C.H. Tsai, K.Y. Chen, J. Sung, Enhancement of external wall decoration material for
the building in safety inspection method. Journal of Civil Engineering and Management, 2020.
26(3): pp. 216-226, https://doi.org/10.3846/jcem.2020.11925.
[2] S. Jung, S. Song, P. Youn, H. Myung, Multi-layer coverage path planner for autonomous
structural inspection of high-rise structures. in 2018 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), 2018, https://doi.org/10.1109/IROS.2018.8593537.
[3] T. Ikeda, S. Yasui, M. Fujihara, K. Ohara, S. Ashizawa, A. Ichikawa, A. Okino, T. Oomichi, T.
Fukuda, Wall contact by Octo-rotor UAV with one DoF manipulator for bridge inspection, in
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, A. Bicchi and A.
Okamura, Editors. 2017. pp. 5122-5127, https://doi.org/10.1109/IROS.2017.8206398.
[4] K.H. Lee, N. Byun, D. Shin, A Damage Localization Approach for Rahmen Bridge Based on
Convolutional Neural Network. KSCE Journal of Civil Engineering, 2020. 24(1): pp. 1-9,
https://doi.org/10.1007/s12205-020-0707-9.
[5] D. Denhof, B. Staar, M. Lutjen, M. Freitag, Automatic optical surface inspection of wind
turbine rotor blades using convolutional neural networks, in 52nd CIRP Conference on
Manufacturing Systems, P. Butala, E. Govekar, and R. Vrabic, Editors. 2019. pp. 1166-1170,
https://doi.org/10.1016/j.procir.2019.03.286.
[6] D. Li, W. Yang, C. Yu, F. Qiao, Y. Tian, Visual post estimation of underground UAV based on
deep neural network method. Journal of China University of Mining & Technology, 2020.
49(4): pp. 798-806, https://doi.org/10.13247/j.cnki.jcumt.001176.
[7] C.H. Tan, M. Ng, D.S.B. Shaiful, S.K.H. Win, W.J. Ang, S.K. Yeung, H.B. Lim, M.N. Do, S.
Foong, A smart unmanned aerial vehicle (UAV) based imaging system for inspection of deep
hazardous tunnels. Water Practice and Technology, 2018. 13(4): pp. 991-1000,
https://doi.org/10.2166/wpt.2018.105.
[8] A.D. Kulichenko, P.K. Shubin, Prospects for application of unmanned aerial vehicles for
solution of surveillance-and-search, search-and-rescue tasks at sea. Robotics and Technical
Cybernetics, 2017(1): pp. 45-50.
[9] G. Yue, Y.T. Pan, Intelligent inspection of marine disasters based on UAV intelligent vision.
Journal of Coastal Research, 2019: pp. 410-416, https://doi.org/10.2112/si93-054.1.
[10] H. Freimuth, M. König, Planning and executing construction inspections with unmanned aerial
vehicles. Automation in Construction, 2018. 96: pp. 540-553, https://doi.org/
10.1016/j.autcon.2018.10.016.
[11] X. Peng, X.G. Zhong, A.H. Chen, C. Zhao, C.L. Liu, Y.F. Chen, Debonding defect
quantification method of building decoration layers via UAV-thermography and deep learning.
Smart Structures and Systems, 2021. 28(1): pp. 55-67,
https://doi.org/10.12989/sss.2021.28.1.055.
[12] E. William. An overview of the U. S. national building information model standard (NBIMS).
in International Workshop on Computing in Civil Engineering, 2007,
https://doi.org/10.1061/40937(261)8.
[13] D. Liu, J. Chen, D. Hu, Z. Zhang, Dynamic BIM-augmented UAV safety inspection for water
diversion project. Computers in Industry, 2019. 108: pp. 163-177,
https://doi.org/10.1016/j.compind.2019.03.004.
[14] R.R.S.D. Melo, D.B. Costa, J.S. Álvares, J. Irizarry, Applicability of unmanned aerial system
(UAS) for safety inspection on construction sites. Safety Science, 2017. 98: pp. 174-185,
https://doi.org/10.1016/j.ssci.2017.06.008.
[15] N. Bolourian, A. Hammad, LiDAR-equipped UAV path planning considering potential
locations of defects for bridge inspection. Automation in Construction. 117 (2020) 103250,
https://doi.org/10.1016/j.autcon.2020.103250.
[16] E. Garilli, N. Bruno, F. Autelitano, R. Roncella, F. Giuliani, Automatic detection of stone
pavement's pattern based on UAV photogrammetry. Automation in Construction. 122 (2021)
103477, https://doi.org/10.1016/j.autcon.2020.103477.
[17] B.J. Perry, Y. Guo, R. Atadero, J.W. van de Lindt, Streamlined bridge inspection system
utilizing unmanned aerial vehicles (UAVs) and machine learning. Measurement. 164 (2020)
108048, https://doi.org/10.1016/j.measurement.2020.108048.
[18] K. Chen, G. Reichard, A. Akanmu, X. Xu, Geo-registering UAV-captured close-range images
to GIS-based spatial model for building façade inspections. Automation in Construction. 122
(2021) 103503, https://doi.org/ 10.1016/j.autcon.2020.103503.
[19] H. Huang, J. Long, H. Lin, L. Zhang, W. Yi, B. Lei, Unmanned aerial vehicle based remote
sensing method for monitoring a steep mountainous slope in the Three Gorges Reservoir, China.
Earth Science Informatics, 2017, https://doi.org/10.1007/s12145-017-0291-9.
[20] S. Granados-Bolaños, A. Quesada-Román, G.E. Alvarado, Low-cost UAV applications in
dynamic tropical volcanic landforms. Journal of Volcanology and Geothermal Research. 410
(2021) 107143, https://doi.org/10.1016/j.jvolgeores.2020.107143.
[21] A. Daryaei, H. Sohrabi, C. Atzberger, M. Immitzer, Fine-scale detection of vegetation in semi-
arid mountainous areas with focus on riparian landscapes using Sentinel-2 and UAV data.
Computers and Electronics in Agriculture. 177 (2020) 105686, https://doi.org/
10.1016/j.compag.2020.105686.
[22] D.I. Jones, An experimental power pick-up mechanism for an electrically driven UAV. in
Industrial Electronics, 2007. ISIE 2007. IEEE International Symposium, 2007,
https://doi.org/10.1109/ISIE.2007.4374920.
[23] Y. Tan, S. Li, H. Liu, P. Chen, Z. Zhou, Automatic inspection data collection of building surface
based on BIM and UAV. Automation in Construction. 131 (2021) 103881, https://doi.org/
10.1016/j.autcon.2021.103881.
[24] A. Banko, T. Bankovic, M. Pavasovic, A. Dapo, An All-in-One application for temporal
coordinate transformation in geodesy and geoinformatics. ISPRS International Journal of Geo-
Information, 2020. 9(5), https://doi.org/10.3390/ijgi9050323.
[25] Z.Z. Li, R. Sato, K. Shirase, S. Sakamoto, Study on the influence of geometric errors in rotary
axes on cubic-machining test considering the workpiece coordinate system. Precision
Engineering-Journal of the International Societies for Precision Engineering and
Nanotechnology, 2021. 71: pp. 36-46, https://doi.org/10.1016/j.precisioneng.2021.02.011.
[26] H.T. Zhao, B. Zhang, C.S. Wu, Z.L. Zuo, Z.C. Chen, Development of a coordinate
transformation method for direct georeferencing in map projection frames. ISPRS Journal of
Photogrammetry and Remote Sensing, 2013. 77: pp. 94-103,
https://doi.org/10.1016/j.isprsjprs.2012.12.004.
[27] P. Lin, G. Chang, J. Gao, Q. Wang, H. Bian, Helmert transformation with mixed geodetic and
Cartesian coordinates. Advances in Space Research, 2019. 63(9): pp. 2964-2971,
https://doi.org/10.1016/j.asr.2017.11.029.
[28] G.B. Chang, T.H. Xu, Q.X. Wang, Error analysis of the 3D similarity coordinate transformation.
GPS Solutions, 2017. 21(3): pp. 963-971, https://doi.org/10.1007/s10291-016-0585-2.
[29] H. Vermeille, Direct transformation from geocentric coordinates to geodetic coordinates.
Journal of Geodesy, 2002. 76(8): pp. 451-454, https://doi.org/10.1007/s00190-002-0273-6.
[30] W.E. Featherstone, S.J. Claessens, Closed-form transformation between geodetic and
ellipsoidal coordinates. Studia Geophysica et Geodaetica, 2008. 52(1): pp. 1-18,
https://doi.org/10.1007/s11200-008-0002-6.
[31] S.J. Claessens, Efficient transformation from Cartesian to geodetic coordinates. Computers &
Geosciences, 2019. 133, https://doi.org/10.1016/j.cageo.2019.104307.
[32] G.M. Diaz-Toca, L. Marin, I. Necula, Direct transformation from Cartesian into geodetic
coordinates on a triaxial ellipsoid. Computers & Geosciences, 2020. 142,
https://doi.org/10.1016/j.cageo.2020.104551.
[33] Y. Wu, J. Liu, H.Y. Ge, Comparison of total least squares and least squares for four- and seven-
parameter model coordinate transformation. Journal of Applied Geodesy, 2016. 10(4): pp. 259-
266, https://doi.org/10.1515/jag-2016-0015.
[34] W.Y. Yan, Y.G. Ke, Design of a wave shifter with the exit direction controllable based on
coordinate transformation theory, in Advances in Applied Science and Industrial Technology,
Pts 1 and 2, P. Xu, Y. Wang, Y. Su, and L. Hao, Editors. 2013. pp. 590-593,
https://doi.org/10.4028/www.scientific.net/AMR.798-799.590.
[35] D. Yang, J. Tang, F.P. Zeng, Blade imbalance fault diagnosis of doubly fed wind turbine based
on current coordinate transformation. IEEJ Transactions on Electrical and Electronic
Engineering, 2019. 14(2): pp. 185-191, https://doi.org/10.1002/tee.22796.
[36] X. Gu, C. Liu, W. Guo, X. Yang, Coordinate transformation algorithm under arbitrary rotation
parameters and its application in high-speed railway measurement. Journal of Geomatics
Science and Technology, 2018. 35(5): pp. 451-456, https://doi.org/10.3969/j.issn.1673-
6338.2018.05.003.
[37] J.P. Cortés-Pérez, A. Cortés-Pérez, P. Prieto-Muriel, BIM-integrated management of
occupational hazards in building construction and maintenance. Automation in Construction.
113 (2020) 103115, https://doi.org/10.1016/j.autcon.2020.103115.
[38] T. Anh Nguyen, P. Thanh Nguyen, S. Tien Do, P. Thanh Phan, Application of building
information modelling (BIM) in managing the volume of high-rise building walls. Materials
Today: Proceedings, 2021, https://doi.org/10.1016/j.matpr.2020.11.371.
[39] M. Al-Kasasbeh, O. Abudayyeh, H. Liu, An integrated decision support system for building
asset management based on BIM and Work Breakdown Structure. Journal of Building
Engineering. 34 (2021) 101959, https://doi.org/10.1016/j.jobe.2020.101959.
[40] A. Pacios Álvarez, J. Ordieres-Meré, Á.P. Loreiro, L. de Marcos, Opportunities in airport
pavement management: Integration of BIM, the IoT and DLT. Journal of Air Transport
Management. 90 (2021) 101941, https://doi.org/10.1016/j.jairtraman.2020.101941.
[41] M. Kassem, G. Kelly, N. Dawood, M. Serginson, S. Lockley, BIM in
facilities management applications: a case study of a large university complex. Built
Environment Project and Asset Management, 2015. 5(3), https://doi.org/10.1108/BEPAM-02-
2014-0011.
[42] J. Patacas, N. Dawood, M. Kassem, BIM for facilities management: A framework and a
common data environment using open standards. Automation in Construction. 120 (2020)
103366, https://doi.org/10.1016/j.autcon.2020.103366.
[43] M. Godinho, R. Machete, M. Ponte, A.P. Falcão, A.B. Gonçalves, R. Bento, BIM as a resource
in heritage management: An application for the National Palace of Sintra, Portugal. Journal of
Cultural Heritage, 2020. 43: pp. 153-162, https://doi.org/10.1016/j.culher.2019.11.010.
[44] D. Chapman, S. Providakis, C. Rogers, BIM for the Underground – An enabler of trenchless
construction. Underground Space, 2020. 5(4): pp. 354-361,
https://doi.org/10.1016/j.undsp.2019.08.001.
[45] J. Choi, F. Leite, D.P. de Oliveira, BIM-based benchmarking for healthcare construction
projects. Automation in Construction. 119 (2020) 103347,
https://doi.org/10.1016/j.autcon.2020.103347.
[46] O.S. Kwon, C.S. Park, C.R. Lim, A defect management system for reinforced concrete work
utilizing BIM, image-matching and augmented reality. Automation in Construction. 46 (2014):
pp. 74-81, https://doi.org/10.1016/j.autcon.2014.05.005.
[47] B. Wójcik, M. Arski, The measurements of surface defect area with an RGB-D camera for a
BIM-backed bridge inspection. Bulletin of the Polish Academy of Sciences, Technical Sciences,
2021. 69(3): pp. 1-9, https://doi.org/10.24425/bpasts.2021.137123.
[48] Z. Xu, R. Kang, R. Lu, 3D Reconstruction and measurement of surface defects in prefabricated
elements using point clouds. Journal of Computing in Civil Engineering, 2020. 34(5): pp.
04020033-1-17, https://doi.org/10.1061/(ASCE)CP.1943-5487.0000920.
[49] D. Ribeiro, R. Santos, A. Shibasaki, P. Montenegro, H. Carvalho, R. Calçada, Remote
inspection of RC structures using unmanned aerial vehicles and heuristic image processing.
Engineering Failure Analysis. 117 (2020) 104813,
https://doi.org/10.1016/j.engfailanal.2020.104813.
[50] R.F. Wong, C.M. Rollins, C.F. Minter, Recent updates to the WGS 84 reference frame, in: 25th
International Technical Meeting of the Satellite Division of the Institute of Navigation (ION
GNSS 2012), Nashville, TN, 2012.
[51] Y. Tan, R. Cai, J. Li, P. Chen, M. Wang, Automatic detection of sewer defects based on
improved you only look once algorithm. Automation in Construction. 131 (2021) 103912,
https://doi.org/10.1016/j.autcon.2021.103912.
[52] M.Z. Wang, H. Luo, J.C.P. Cheng, Towards an automated condition assessment framework of
underground sewer pipes based on closed-circuit television (CCTV) images. Tunnelling and
Underground Space Technology. 110 (2021) 103840, https://doi.org/10.1016/j.tust.2021.103840.
[53] J.C.P. Cheng, M.Z. Wang, Automated detection of sewer pipe defects in closed-circuit
television images using deep learning techniques. Automation in Construction, 2018. 95: pp.
155-171, https://doi.org/10.1016/j.autcon.2018.08.006.
[54] M.Z. Wang, J.C.P. Cheng, A unified convolutional neural network integrated with
conditional random field for pipe defect segmentation. Computer-Aided Civil and
Infrastructure Engineering, 2020. 35(2): pp. 162-177, https://doi.org/10.1111/mice.12481.
[55] K.M. He, G. Gkioxari, P. Dollar, R. Girshick, Mask R-CNN. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 2020. 42(2): pp. 386-397,
https://doi.org/10.1109/tpami.2018.2844175.
[56] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with
region proposal networks. IEEE Transactions on Pattern Analysis & Machine Intelligence,
2017. 39(6): pp. 1137-1149, https://doi.org/10.1109/TPAMI.2016.2577031.
... This is understandable since flying drones is subject to local regulations, and compliance can be mandatory around civil structures. However, works such as [5,6] and [9][10][11] propose autonomous and semi-autonomous navigation methods based on previously extracted geometric constraints, i.e. BIM models. ...
... Most of the data generated by the drones are processed using commercial software, as Table 1 shows. However, specific data processing can also be found: for instance, in [6] the authors propose the FHCE method to detect imperfections in civil structures using RGB cameras; in [8] the authors use OpenCV and GIS tools to generate georeferenced image mosaics; in [9] the authors detect cracks in façades using deep learning methods and map them into BIM models to guarantee traceability of the building information model; and in [10] an inspection concept for UAVs is integrated and automated. This work is the result of an ongoing effort to create a workflow for the structured planning, simulation and execution of inspection tasks. ...
Article
Full-text available
In contemporary times, monitoring building façades is a task that requires extensive inspections over wide work areas, at different heights and in diverse weather conditions. To address this challenge, this work proposes a software tool called DroneFaçade to manage the drone flight, generate an image mosaic of a building façade and provide the corresponding report of the inspection mission. This allows the state of the façade to be continuously monitored and registered over time. DroneFaçade was developed in four modules, namely the configuration, execution, data processing and report generation modules. The operational pipeline of DroneFaçade includes three phases: first, the configuration module is used to introduce the waypoints and other mission information, plan the trajectory and select the sensors to be used; second, the execution module connects to the drone, after which the flight mission and the data acquisition start; finally, the data processing and report modules compute the mosaic from the images previously stored in a SQL database, and users can generate a PDF report of the mission. Quantitative tests were performed to measure the image mosaic accuracy of DroneFaçade. As a result, the mosaic image computed by DroneFaçade has low structural errors, preserves phase congruency and the magnitude of the image gradients, and keeps a high correlation and an appearance similarity of up to 92.6% with respect to the real scene.
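The correlation and appearance-similarity figures reported for the mosaic can be illustrated with a minimal similarity measure. The sketch below is not DroneFaçade code; it computes a zero-mean normalized cross-correlation between a mosaic and a reference view, rescaled to [0, 1], as one plausible appearance-similarity score:

```python
import numpy as np

def appearance_similarity(mosaic: np.ndarray, reference: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equal-size
    grayscale images, rescaled to [0, 1] (1.0 = identical appearance)."""
    a = mosaic.astype(np.float64).ravel()
    b = reference.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        # Both images are constant: identical if equal, else dissimilar.
        return 1.0 if np.array_equal(a, b) else 0.0
    ncc = float(np.dot(a, b) / denom)   # in [-1, 1]
    return (ncc + 1.0) / 2.0

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(appearance_similarity(img, img))  # identical images give a score of ~1.0
```

A percentage such as the 92.6% above would correspond to a score of 0.926 under a measure of this kind; the actual metric used by the tool may differ.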
... The text refers to the differences between a BIM model and its digital twin [1]. Furthermore, the article deals with the processing of the obtained data and the subsequent modelling of the 3D model in the BIM format, similar to [2] or [3]. ...
... This paper compares two data collection methods and identifies which one is suitable for such a rugged object. In other articles [2], [3], such a specific object was not found for subsequent processing in the BIM format. Because of this, some difficulties in the orientation and subsequent processing of the model were encountered. ...
... Analyzing existing research shows that the defect inspection quality and the alignment accuracy of different coordinate systems [38,39] are vital to generating defect-extended BIMs. We also investigate these two key areas in Section 3. ...
Article
Defect inspection of existing buildings is receiving increasing attention in the digitalization of the construction industry. The development of drone technology and artificial intelligence has provided powerful tools for building defect inspection. However, integrating defect inspection information detected from UAV images into semantically rich building information modeling (BIM) is still challenging work due to low defect detection accuracy and the coordinate difference between UAV images and BIM models. In this paper, a deep learning-based method coupled with transfer learning is used to detect defects accurately, and a texture mapping-based defect parameter extraction method is proposed to achieve the mapping from the image U-V coordinate system to the BIM project coordinate system. The defects are projected onto the surface of the BIM model to enrich a surface defect-extended BIM (SDE-BIM). The proposed method was validated in a defect information modeling experiment involving the No. 36 teaching building of Nantong University. The results demonstrate that the methods are widely applicable to various building inspection tasks.
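The U-V-to-project-coordinate mapping described above can be sketched, under the simplifying assumption of a planar façade, as a homography applied in homogeneous coordinates. This is an illustration, not the authors' method: the matrix `H` and its values are hypothetical, and in practice `H` would be estimated from at least four pixel/model correspondences:

```python
import numpy as np

# Hypothetical 3x3 planar homography mapping image pixel coordinates
# (u, v) onto a wall plane expressed in the BIM project coordinate
# system (x, y in metres). Illustrative values: 100 px per metre,
# wall origin at (2 m, 5 m) in the project frame.
H = np.array([
    [0.01, 0.0,  2.0],
    [0.0,  0.01, 5.0],
    [0.0,  0.0,  1.0],
])

def pixel_to_model(u: float, v: float, H: np.ndarray) -> tuple[float, float]:
    """Apply the homography in homogeneous coordinates and de-homogenise."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

print(pixel_to_model(100.0, 250.0, H))  # -> approximately (3.0, 7.5)
```

A defect polygon detected in the image can then be transferred vertex by vertex onto the wall surface of the model.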
... Chow et al. [4] presented an automated system for detecting cracks and spalling in buildings using mobile data collection, deep learning, and scene reconstruction. Tan et al. [5] developed a method for integrating crack data from unmanned aerial vehicle images into building information models, thereby improving the inspection of high-rise building façades. In addition, several studies have been conducted to monitor various defects in buildings, such as leakage and heat loss [6,7,8]. ...
... Localization refers to the estimation of robot position and orientation, while mapping denotes the creation of a digital representation of the robot environment; both are fundamental processes for robot navigation [18]. Robots utilize sensors such as RGB-D cameras, stereo cameras and Light Detection and Ranging (LiDAR) to gather environment data for localization and mapping [7]. ...
... In addition, several articles are devoted to the use of drone photogrammetry in construction. Some studies, for example, have focused on investigating the use of oblique photogrammetry to model complex urban scenes (Toschi et al., 2017), on identifying problems arising in the construction of nuclear power plants (Kiriiak, 2021), on the reconstruction of historical monuments (Hu et al., 2021; Pepe et al., 2022), on monitoring the structural integrity of dams and the displacement of buildings (Sun et al., 2022; Zhao et al., 2021), on the inspection of building external walls (Tan et al., 2022), on the assessment of bridge infrastructure (Mandirola et al., 2022), on the detection and classification of damage to asphalt roads affected by landslides (Nappo et al., 2021), and on the smart management of construction waste from building demolition (Hu et al., 2022). ...
Article
Construction projects are highly complex systems that recurrently exhibit discrepancies between planned and executed work. Digital technologies, such as UAV (drone) photogrammetry, are promising support tools in this context. This study therefore aims to evaluate the development procedures and the quality of a 3D model generated by photogrammetry from drone-captured images, comparing it with the as-designed BIM model. To this end, a case study was carried out on a social housing unit located in Camaçari, Bahia, Brazil. Compared with the designed BIM model, the photogrammetric model showed an average dimensional deviation of -1.68%. The model also exhibited inconsistencies such as occlusions and deformations. The study shows that both this deviation and the quality of the obtained photogrammetric model can be considerably influenced by the data collection approach (e.g. a low number of photos or low photo resolution). The main contribution of the study is to demonstrate the potential of using UAVs to capture images for generating a photogrammetric model.
Article
Bridge inspections are a vital part of bridge maintenance and the main information source for Bridge Management Systems used in decision-making regarding repairs. Without a doubt, both can benefit from the implementation of the Building Information Modelling philosophy. To fully harness the BIM potential in this area, we have to develop tools that will provide accurate inspection information easily and quickly. In this paper, we present an example of how such a tool can utilise tablets coupled with latest-generation RGB-D cameras for data acquisition; how these data can be processed to extract the defect surface area and create a 3D representation; and finally how this information can be embedded into the BIM model. Additionally, a study of depth sensor accuracy is presented, along with surface area accuracy tests and an exemplary inspection of a bridge pillar column.
Article
Pavement management system (PMS) is a set of tools that assist road agencies in finding optimal strategies for maintaining pavements in a serviceable condition over a period of time. Usually, municipalities base their PMS on deterioration monitoring through a visual survey, but distress identification is complex and the operations are based on visual and instrumental inspections. As regards natural stone pavements, which are very widespread in the road heritage of cities, there are very few studies in the literature. The authors analyzed two supervised classification approaches (the Semi-Automatic Classification Plugin for QGIS and a Convolutional Neural Network (CNN)), based on Unmanned Aerial Vehicle (UAV) photogrammetry, to detect stone pavement patterns. This study showed that using a U-Net CNN on images obtained from a UAV is an excellent alternative to traditional manual inspection and can be implemented for other types of stone pavements, also with the aim of distress identification.
Article
This paper aims to explore how digitalization technologies can help to better control and supervise road pavement and refurbishments at airports. In particular, Building Information Modeling, but also the Internet of Things and Distributed Ledger Technologies, have been considered. Particular attention has been devoted to the efficiency and effectiveness of management actions involving the control and supervision of maintenance and rehabilitation of runway pavement. To conduct the analysis, a use case, related to the Madrid Barajas 18R-36L runway, has been considered. Based on this case, various improvements have been detected and an operational framework has been proposed, which allows a higher level of functionality. Finally, a discussion about the capabilities and challenges for the airport industry is undertaken.
Article
The drainage system is an important part of civil infrastructure. However, underground sewage pipes gradually suffer from defects over time, such as tree roots, deposits, infiltration and cracks, which heavily affect their performance. Therefore, it is important to inspect the condition of sewage pipes in a timely manner. Closed-circuit television (CCTV) inspection is a commonly employed underground infrastructure inspection technology requiring engineering experience that can be subjective and inefficient. Nowadays, object detection based on convolutional neural networks (CNN) can automatically detect defects, showing high potential for improving inspection efficiency. This paper proposed an improved CNN-based You Only Look Once version 3 (YOLOv3) method for automatic detection of sewage pipe defects, where the improvements mainly involve the loss function, data augmentation, bounding box prediction and network structure. Experiment results demonstrate that the improved model outperforms Faster R-CNN and YOLOv3, achieving a mean average precision (mAP) of 92%, which is higher than existing research on automatic detection of sewage pipe defects.
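Detection metrics such as the mAP reported above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal, library-free IoU sketch (illustrative, not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50, union 150 -> ~0.333
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (often 0.5); averaging precision over recall levels and classes then yields mAP.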
Article
Conventional high-rise building surface inspection is usually inefficient and requires inspectors to work at heights with high risk. Unmanned aerial vehicles (UAVs) carrying optical or thermal cameras are currently widely utilized as an effective tool for inspection. UAV-based data collection, especially for unreachable inspection areas, is the basis of unmanned inspection of building surfaces. In addition, building information modeling (BIM) with rich geometric and semantic information can also be instrumental in building surface inspection. Therefore, this paper presented an automatic inspection method for building surfaces, focusing on inspection data collection, by integrating UAV and BIM. To minimize the length of the UAV flight while collecting complete and high-quality image data under limited endurance, the coverage path planning problem is solved using a genetic algorithm (GA). The required inspection areas are obtained from the BIM model of the target building. To further enhance the automation of building surface inspection, the optimized UAV flight mission parameters are rapidly calculated based on the BIM model and the proposed algorithm. A real office building on the Shenzhen University campus is used to validate the presented method. The quality of the inspection images collected by the UAV with the optimized flight mission is evaluated. The results show that this method leads to time-efficient, accurate and high-quality inspection data collection for building surfaces.
Article
The falling-off of building decorative layers (BDLs) on exterior walls is quite common, especially in Asia, and poses great risks to human safety and property. Presently, there is no effective technique to detect the debonding of exterior finishes because debonding is a hidden defect. In this study, a debonding defect identification method for building decoration layers via UAV thermography and deep learning is proposed. Firstly, the temperature field characteristics of debonding defects are tested and analyzed, showing that it is feasible to identify the debonding of BDLs based on a UAV. Then, a debonding defect recognition and quantification method combining CenterNet (Point Network) and fuzzy clustering is proposed. Further, the actual area of the debonding defect is quantified through the optical imaging principle using the real-time measured distance. Finally, a case study of an old teaching building inspection is carried out to demonstrate the effectiveness of the proposed method, showing that the proposed model performs well with an accuracy above 90%, which is valuable to society.
Article
Evaluating the influence of geometric errors in rotary axes is a common method for improving the machining accuracy of five-axis machine tools. In conventional geometric error models, the table coordinate system is considered as the final workpiece coordinate system. In this study, an additional workpiece coordinate transformation was proposed to identify the influence of geometric error. First, a cubic machining test was conducted. Second, the necessity of workpiece coordinate transformation was analyzed, and a method for coordinate transformation was proposed. In addition, both a machining simulation and an actual machining experiment of the cubic machining test were conducted to verify the efficiency of the proposed method. The results indicate that the workpiece coordinate transformation is an essential part of the geometric error model for accurately simulating the geometric error influence. The method for identifying the geometric error influence considering the workpiece coordinate transformation is applicable in manufacturing.
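At its core, a coordinate transformation of the kind discussed above is a rigid-body transform applied in homogeneous coordinates. The sketch below is a minimal illustration with hypothetical numbers (a single rotation about z plus a translation), not the paper's full error model:

```python
import numpy as np

def homogeneous_transform(theta_z: float, t: np.ndarray) -> np.ndarray:
    """4x4 homogeneous transform: rotation by theta_z about z, then
    translation by t (units follow whatever the caller uses, e.g. mm)."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Illustrative only: express a table-frame point in a workpiece frame
# offset by a 90-degree rotation and a 10 mm shift along x.
T = homogeneous_transform(np.pi / 2, np.array([10.0, 0.0, 0.0]))
p_table = np.array([1.0, 0.0, 0.0, 1.0])
print(T @ p_table)  # -> approximately [10., 1., 0., 1.]
```

Chaining such matrices (axis frames, table frame, workpiece frame) is the standard way to propagate geometric errors through a kinematic model.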
Article
There is a growing trend of using computer vision techniques for interpreting closed-circuit television (CCTV) inspection videos of sewer pipes. Previous studies mainly focus on detecting defect types and locations in CCTV images, yet limited systematic approaches are available for automatically evaluating defect severity and sewer condition with reference to existing standards. This study proposes a framework for evaluating defect severity and sewer condition automatically from CCTV images using computer vision methods, which includes (1) definition of the information required for sewer condition assessment, (2) pipe joint detection and fitting by image processing techniques to obtain the cross-section area, (3) defect detection and segmentation to obtain defect type, location and area, and (4) evaluation of defect severity and sewer condition. In particular, three deep learning-based defect detection models are developed, among which the model based on Faster R-CNN (region-based convolutional neural network) outperforms the others with higher accuracy and is used for detecting defect type and location in the image. Meanwhile, an innovative semantic segmentation model is applied for segmenting defects to extract the defect area in the image. In the validation, our framework performs well in defect detection with an average precision, recall and F1 of 88.99%, 87.96% and 88.21% respectively. More importantly, the framework evaluates Operation and Maintenance (O&M) defects more accurately by precise calculation and generates overall condition gradings that are mostly consistent with professional inspectors, with only an average deviation of 3.06%. Our framework can assist the review of inspection videos and lays the basis for fully automated sewer assessment and maintenance planning in the future.
Without constraints on the assessment codes and computer vision methods, the framework is adaptable to evaluating sewer condition in different regions and can achieve better performance by integrating with cutting-edge vision techniques.
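Precision, recall and F1 figures like those quoted above follow directly from true/false positive and false negative counts. A minimal sketch of that bookkeeping (illustrative only, not the authors' evaluation code):

```python
def detection_scores(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall and F1 from true positive, false positive and
    false negative counts of a defect detector."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts chosen only to show the call shape.
print(detection_scores(tp=88, fp=11, fn=12))
```

F1 is the harmonic mean of precision and recall, so it rewards detectors that balance the two rather than maximising either alone.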
Article
The recent and growing development and availability of unmanned aerial vehicles/systems (UAV, UAS, or "drones") in volcanology has promoted a significant advance in the surveillance of active volcanoes and in the characterization of volcanic landforms and hazards. However, in the tropics, with heavy rainfall, deep volcanic soils and high relief, UAV surveying for volcanic geomorphology and volcanic hazards remains a relatively unexplored technique. Our aim is to present and promote innovative low-cost (<$3000) UAV applications in volcanology to reduce costs and improve high-resolution (up to 8 cm/pixel) data acquisition in highly dynamic landscapes. Our results contribute to the state of the art of UAV applications to volcanic landforms in tropical developing countries, where nearly half of the globally active volcanoes are located. Our findings prove that UAVs are a low-cost technique that can map, in a short time, large extents of geomorphological features with accessibility limitations due to geological hazards and/or private property restrictions. We surveyed four active volcanic sites in Costa Rica, Central America, to illustrate potential applications of UAV mapping and geomorphological analysis of lava flows, debris avalanches, lahar (debris flow) deposits and biogeomorphic landscape changes due to forest succession. Analysis of the digital imagery captured by the UAV allowed accurate volume calculations, surface roughness characterization, morphometric quantification and supervised surface classification, and, in combination with hydraulic modelling, hazard assessment for urban planning. We discuss the utility, limitations and future directions of low-cost UAV surveying in the geomorphological and geological analysis of tropical volcanic landforms and processes.