A FULLY AUTOMATIC METHOD FOR RAPIDLY MAPPING IMPACTED AREA BY
NATURAL DISASTER
Tao Liu(liut1@ornl.gov), Lexie Yang(yangh@ornl.gov)
Geographic Data Science Group
National Security Emerging Technologies Division
Oak Ridge National Laboratory
ABSTRACT
Deep learning based change detection methods have achieved state-of-the-art performance in several recent studies. However, such methods are usually supervised, and therefore a large number of training samples is often a requisite. Manually preparing those training samples is not only expensive but also time-consuming, which does not fit the need of rapidly mapping areas impacted by natural disasters for subsequent rescue missions and damage assessment. In this study, a fully automatic method is proposed to address this issue by automating training sample generation for mapping the area impacted by a natural disaster. We used the 2011 tornado event in Joplin, Missouri, US, as an example of its application. The generated impacted-area map was both visually and quantitatively evaluated against ground truth data collected by the US Federal Emergency Management Agency (FEMA). The results show that the map matches the FEMA ground truth data well, with 86% of the buildings identified as major-damaged or destroyed by FEMA on the ground also detected by this fully automatic framework using very high resolution (VHR) satellite images.
Index Terms—change detection, disaster assessment, deep
learning, OBIA, SIFT, RANSAC
1. INTRODUCTION
Extreme weather events have occurred on a more frequent basis in recent years in the US, according to the U.S. Climate Extremes Index (CEI), which tracks extreme weather events from 1910 through 2019 [1].
On the other hand, remote sensing images have become increasingly available in recent years, with more sensors deployed on spaceborne and airborne remote sensing platforms. This motivates us to develop remote sensing-based methods to monitor landscape changes caused by extreme weather events. The most relevant category of techniques for this purpose is change detection, which aims to map changes in the land-cover features of interest given remote sensing images collected at two time steps (i.e., pre-event and post-event images). However, achieving scalable and robust change detection is still an unsolved problem, with challenges stemming from various sources such as spectral heterogeneity in space and time, the rarity of land-cover changes, the presence of data at multiple scales and from multiple sources, misregistration between temporal images, and the paucity of training data [2]. Recently, deep learning techniques have been utilized in several studies to develop novel change detection technologies and have shown superior performance compared with existing methods [3, 4, 5]. The architecture of a deep learning model developed for change detection usually contains two separate but identical convolutional neural network (CNN) branches, with one responsible for extracting features from the pre-event image and the other from the post-event image. The features corresponding to the pre- and post-event images are then fused into one single feature vector using operations such as feature subtraction [4], concatenation [5, 6], and long short-term memory (LSTM) [3]. Finally, the fused feature is input into a classifier to derive the change type. The model parameters of the feature extraction components and the classifier can be trained together in a supervised training procedure using ground truth change samples.
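The two-branch architecture described above can be sketched as follows. This is a minimal illustration assuming PyTorch; the branch depth, channel counts, input size, and number of change classes are placeholders, not the configuration of any of the cited models.

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Two identical CNN branches with shared weights, concatenation
    fusion, and a small classifier head over the fused feature."""

    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        # One branch; applying it to both images shares the weights.
        self.branch = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> one feature vector
            nn.Flatten(),
        )
        # Classifier over the concatenated pre/post feature vectors.
        self.classifier = nn.Linear(16 * 2, n_classes)

    def forward(self, pre_img, post_img):
        f_pre = self.branch(pre_img)    # features from pre-event image
        f_post = self.branch(post_img)  # features from post-event image
        fused = torch.cat([f_pre, f_post], dim=1)  # concatenation fusion
        return self.classifier(fused)

model = SiameseChangeNet()
pre = torch.randn(4, 3, 64, 64)   # batch of pre-event patches
post = torch.randn(4, 3, 64, 64)  # batch of post-event patches
logits = model(pre, post)
```

Subtraction fusion [4] would simply replace the `torch.cat` call with `f_post - f_pre`; the rest of the model is unchanged.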
Even though these supervised change detection approaches have shown superior performance, one of the requirements for favorable performance is access to sufficient training samples. Preparing a relatively large number of training samples is laborious and time-consuming, which does not satisfy the need to rapidly map areas impacted by natural disasters. In this study, we propose a method to automate the training sample generation procedure. With this proposed method, we are able to present a novel workflow that automates all the steps of change detection, including training sample extraction, model training, and change map generation, aiming to help humanitarian organizations and government agencies such as FEMA respond quickly to natural disasters, rapidly assess the damage, and deliver assistance in time to regions where restoration is urgently needed.
978-1-7281-6374-1/20/$31.00 ©2020 IEEE | IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium | DOI: 10.1109/IGARSS39084.2020.9323634
Authorized licensed use limited to: Michigan Technological University. Downloaded on August 05,2021 at 22:13:33 UTC from IEEE Xplore. Restrictions apply.
(a) Pre-tornado image
(b) Post-tornado image
Fig. 1. WorldView satellite images for the pre- and post-tornado event
2. DATA SETS AND EXPERIMENT SETTINGS
We selected the 2011 Joplin tornado event as our case study. For this tornado event, we selected one pre-event WorldView-2 image and one post-event WorldView-2 image to map the impacted area under the change detection concept. The pre-event image was collected on August 8th, 2009, and the post-event image covering Joplin was collected on May 29th, 2011, seven days after the tornado. FEMA maintains a historical damage assessment database [7] in which a damage category is assigned to each structure based on modeled or visual assessment. There are four categories describing damage to structures; the number of assessed structures in each category for this tornado event is: Destroyed: 2538, Major: 1781, Minor: 1772, and Affected: 2349. In Figure 2, we can see that the destroyed (yellow dots) and major-damaged (blue dots) structures lie mainly on the tornado path. According to the Damage Assessment Operations Manual [8], it is challenging even for humans to observe minor damage in 60 cm resolution imagery, such as damage to the roofing material or cracks on the exterior walls. Therefore, we selected the data points categorized as Major and Destroyed in this database as our ground truth.
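Selecting the ground truth from such a database amounts to a simple categorical filter. The sketch below uses a hypothetical record schema; the field names are illustrative and are not the actual column names of the FEMA database [7].

```python
# Hypothetical damage-assessment records; the real FEMA database [7]
# uses its own schema, so these field names are illustrative only.
records = [
    {"structure_id": 101, "category": "Destroyed"},
    {"structure_id": 102, "category": "Major"},
    {"structure_id": 103, "category": "Minor"},
    {"structure_id": 104, "category": "Affected"},
]

# Keep only Major and Destroyed structures as ground truth.
GROUND_TRUTH_CATEGORIES = {"Major", "Destroyed"}
ground_truth = [r for r in records if r["category"] in GROUND_TRUTH_CATEGORIES]
```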
Fig. 2. FEMA historical damage assessment database for
2011 Joplin tornado event.
3. METHODS
Fig. 3. The proposed workflow for fully automatic mapping of the area impacted by a natural disaster. The training samples are automatically generated from a) unchanged vegetation area, b) unchanged sampling area generated by unchanged keypoints, and c) changed sampling area
Figure 3 presents the key steps of the whole procedure of our proposed method. It starts by cropping the given pre- and post-event remote sensing images to make sure they share the same extent, which is necessary for change detection. Based on the cropped pre- and post-event images, sampling areas for extracting unchanged training samples (Figure 3a and b) and changed training samples (Figure 3c) were generated. The sampling area for extracting the unchanged training samples consists of one part (Figure 3a) that was based on vegetation extraction, and another (Figure 3b) that relied on scale-invariant feature transform (SIFT) extraction and filtering. The sampling area used to obtain changed training samples (Figure 3c) was derived from a density analysis of the unchanged keypoints. With the identified sampling areas, training samples were collected and used to train a supervised change detection model. Finally, the trained model was applied to the whole cropped image scenes to create the
map of the impacted area. In the remainder of this section, details regarding the key components are provided.
3.1. Unchanged vegetation sampling area generation
In this study, we simply treat an area as unchanged if it is covered by vegetation in both the pre- and post-event images. The vegetation masks for the pre- and post-event images were generated by applying a threshold to the NDVI value. A threshold of 0.3 was adopted in this study, since the vegetation mask generated with this threshold covers most of the vegetation in the study area without introducing excessive false positives. The pre-event vegetation mask was overlaid on the post-event vegetation mask to generate the overlapping vegetation mask by identifying the pixels that are located in both the pre- and post-event vegetation masks.
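The NDVI thresholding and mask overlap described above can be sketched in NumPy as follows; the toy band arrays and the small epsilon guard against division by zero are illustrative.

```python
import numpy as np

def vegetation_mask(red, nir, threshold=0.3):
    """Boolean vegetation mask from red and near-infrared bands,
    using NDVI = (NIR - Red) / (NIR + Red)."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # avoid divide-by-zero
    return ndvi > threshold

# Toy 2x2 bands: high NIR relative to red indicates vegetation.
pre_red = np.array([[10, 80], [10, 80]])
pre_nir = np.array([[90, 85], [90, 85]])
post_red = np.array([[10, 80], [80, 80]])
post_nir = np.array([[90, 85], [85, 85]])

# Pixels vegetated in BOTH images form the overlapping (unchanged) mask.
overlap = vegetation_mask(pre_red, pre_nir) & vegetation_mask(post_red, post_nir)
```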
3.2. Unchanged keypoint generation
In this study, the common procedure of using SIFT to extract unchanged keypoints was implemented, which includes SIFT keypoint extraction, keypoint matching with Lowe's ratio test [9], and keypoint filtering by random sample consensus (RANSAC). There were 6195 unchanged keypoints identified in the study area, which are shown in Figure 4 using the post-event image as the background. Our visual inspection indicates that the majority of the unchanged keypoints were generated in the area free from the impact of the tornado, with very few of them falling into the impacted area. This small number of undesired keypoints is not a problem, since our workflow removes them later on.
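Of the three steps above, Lowe's ratio test is the filtering core: a putative match is kept only when a descriptor's nearest neighbour in the other image is markedly closer than its second-nearest. A minimal NumPy sketch, with toy 2-D arrays standing in for real 128-dimensional SIFT descriptors and the 0.8 ratio following the value suggested in [9]:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B with Lowe's ratio test:
    keep (i, j) only if A[i]'s nearest neighbour B[j] is closer than
    `ratio` times the distance to its second-nearest neighbour."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

# Toy "descriptors": the first has one clearly best match in B,
# the second is ambiguous (two near-identical candidates).
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [3.0, 3.0], [5.0, 4.9], [4.9, 5.0]])
matches = ratio_test_matches(desc_a, desc_b)  # ambiguous match is rejected
```

In the full workflow, the matches that survive the ratio test would then be filtered further with RANSAC, which keeps only the matches consistent with a single geometric transform between the two images.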
Fig. 4. Unchanged keypoints generated by SIFT and filtered with the RANSAC algorithm
3.3. Changed sampling area generation
Based on the observation that the unchanged keypoints in the impacted area are substantially sparser than those in the unaffected area, we developed a density-based algorithm to generate the sampling area that was used to extract the changed training samples. To that end, a point density map was generated by finding the total number of keypoints within a radius of 500 meters of each pixel in the study area. We used 500 meters based on the assumption that a tornado can affect land features as far as 500 meters away from the center line of its track, given that the average tornado has a radius of about 150 m [10]. The density map underlying the unchanged keypoints is shown in Figure 5. With the density map, a threshold of 15 was used to generate a binary mask as the preliminary changed sampling area, marking the pixel locations with an unchanged keypoint density of less than 15. Due to the volatility of vegetation shape resulting from growth, wind, and seasonal change, unchanged keypoints tend to be sparse in vegetated areas. In addition, the edge effect also decreases the point density for areas near the boundary of the study area. To remove the false changed sampling areas incurred by those two factors from the preliminary changed sampling area, the union of two masks was created: the overlapping vegetation mask generated in Section 3.1, and an inward 500-meter-wide buffering mask. The resulting mask was used to mask out the false changed sampling areas and finalize the changed sampling area.
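The density analysis can be sketched as a brute-force count of keypoints within a fixed radius of every pixel. The grid size, radius, and threshold below are toy values; the study used a 500 m radius and a threshold of 15 on the full scene.

```python
import numpy as np

def keypoint_density(points, shape, radius):
    """For each pixel, count the keypoints within `radius` pixels
    (brute force; fine for a sketch, too slow for full scenes)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    density = np.zeros(shape, dtype=int)
    for py, px in points:
        # Boolean disc around each keypoint; True counts as 1.
        density += (ys - py) ** 2 + (xs - px) ** 2 <= radius ** 2
    return density

# Keypoints cluster in the top-left of a toy 6x6 grid.
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
density = keypoint_density(points, shape=(6, 6), radius=2)

# Pixels with low keypoint density form the preliminary changed area.
changed = density < 2
```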
3.4. Unchanged sampling area generation
The unchanged vegetation sampling area generated in Section 3.1 was further refined by removing the pixels that fall into the changed sampling area. The keypoints derived in Section 3.2 were further narrowed by removing the points found in the changed sampling area or within a 500-meter-wide outward buffer of the changed sampling area. Then, for each of the remaining unchanged keypoints, a circle with a 50-meter radius was generated to expand the sampling area defined by the unchanged keypoints. This is based on the assumption that land features within 50 meters of an unchanged keypoint are also unaffected by the tornado. The final sampling areas for extracting the changed and unchanged training samples are shown in Figure 6.
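Both buffering operations in this section (the outward buffer around the changed area and the 50-meter circles around keypoints) are morphological dilations of a binary mask. A sketch using SciPy's Euclidean distance transform, assuming the meters-to-pixels conversion is handled elsewhere:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def buffer_mask(mask, radius):
    """Expand a boolean mask outward by `radius` pixels: a pixel is in
    the buffered mask if it lies within `radius` of any True pixel."""
    # Distance from each pixel to the nearest True pixel of `mask`.
    dist = distance_transform_edt(~mask)
    return dist <= radius

changed = np.zeros((7, 7), dtype=bool)
changed[3, 3] = True
buffered = buffer_mask(changed, radius=2)  # disc of radius 2 around (3, 3)
```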
Fig. 5. Unchanged keypoints overlaid onto point density map.
The brighter the density map, the higher the point density
Fig. 6. Sampling areas (red represents the changed sampling area, purple corresponds to the unchanged sampling area generated by unchanged keypoints, and green shows the unchanged vegetation area)
3.5. Change map generation
In this study, the change detection model proposed in [5] was adopted. To make the model robust to image misalignment and computationally efficient for change detection, the authors proposed adopting image objects as the analysis unit instead of the pixels used by most existing methods. The objects were generated by segmenting each of the 40 pre-event image patches (each of size 2000x2000 pixels) using the off-the-shelf image segmentation algorithm Quickshift.
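A minimal example of Quickshift segmentation, assuming scikit-image; the patch size and the kernel and distance parameters below are illustrative, since the paper does not report the values used.

```python
import numpy as np
from skimage.segmentation import quickshift

# A small random RGB patch stands in for a 2000x2000 pre-event tile.
rng = np.random.default_rng(0)
patch = rng.random((64, 64, 3))

# Each pixel receives an integer segment (object) label.
segments = quickshift(patch, kernel_size=3, max_dist=6, ratio=0.5)
```

The segment labels can then be used to aggregate pixel values into object-level features and to assign one change label per object rather than per pixel.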
The generated objects were overlaid onto the sampling area image (Figure 6) to extract the object-based training samples. While all the samples generated from Figure 3b and c were used in training, only 50% of randomly selected samples from Figure 3a were used, in order to mitigate imbalanced training samples as well as shorten the training time. After the training was finished, the trained model was applied to all the objects within the area to detect changes after the tornado. All the components in Figure 3 that operate on image patches of 2000x2000 pixels were run in parallel over the 40 image patches covering the whole study area. Starting from the image scenes and ending with the change map, the whole procedure ran without the need for any human intervention, and took 275 minutes in total to finish.
4. RESULTS AND CONCLUSION
Figure 7 presents the map of the impacted area automatically generated by our proposed method. The visual evaluation indicates that the map matches well with the path of the tornado through Joplin shown in the post-event image. To quantitatively evaluate the quality of the map, the ground truth data collected by FEMA were overlaid on top of the map. As mentioned in Section 2, only the major-damaged and destroyed buildings identified by FEMA were used in the quantitative evaluation. Figure 8 shows that our map aligns well with the FEMA ground truth data, with 3722 out of 4319 assessed structures declared as destroyed or major-damaged covered by the detected impacted area.
Fig. 7. Map of impacted area
Fig. 8. Impacted area overlaid with FEMA ground truth data
We have shown that the proposed framework can successfully derive the impacted area without labelling effort while delivering favorable results for the tornado case study. In the future, we will test our workflow on areas impacted by other types of natural disasters, such as hurricanes, earthquakes, fires, and flooding, to show the generalization capability of this work.
5. ACKNOWLEDGEMENTS
This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
6. REFERENCES
[1] NOAA, “U.S. Climate Extremes Index (CEI): Graph,” https://www.ncdc.noaa.gov/extremes/cei/graph/us/cei/01-12, 2020, [Online; accessed 10-January-2020].
[2] Anuj Karpatne, Zhe Jiang, Ranga Raju Vatsavai, Shashi Shekhar, and Vipin Kumar, “Monitoring land-cover changes: A machine-learning perspective,” IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 8–21, 2016.
[3] Lichao Mou, Lorenzo Bruzzone, and Xiao Xiang Zhu, “Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 2, pp. 924–935, 2018.
[4] Wuxia Zhang and Xiaoqiang Lu, “The spectral-spatial joint learning for change detection in multispectral imagery,” Remote Sensing, vol. 11, no. 3, pp. 240, 2019.
[5] Tao Liu, Lexie Yang, and Dalton D. Lunga, “Towards misregistration-tolerant change detection using deep learning techniques with object-based image analysis,” in Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2019, pp. 420–423.
[6] Bo Peng, Xinyi Liu, Zonglin Meng, and Qunying Huang, “Urban flood mapping with residual patch similarity learning,” in Proceedings of the 3rd ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery, 2019, pp. 40–47.
[7] FEMA, Historical Damage Assessment Database, 2019 (accessed January 3, 2020).
[8] FEMA, Damage Assessment Operations Manual, 2016 (accessed January 3, 2020).
[9] David G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[10] Kevin Hile, The Handy Weather Answer Book.