A FULLY AUTOMATIC METHOD FOR RAPIDLY MAPPING IMPACTED AREA BY
NATURAL DISASTER
Tao Liu(liut1@ornl.gov), Lexie Yang(yangh@ornl.gov)
Geographic Data Science Group
National Security Emerging Technologies Division
Oak Ridge National Laboratory
ABSTRACT
Deep learning based change detection methods have achieved state-of-the-art performance in several recent studies. However, such methods are usually supervised, and therefore a large number of training samples is often a requisite. Manually preparing those training samples is not only expensive but also time-consuming, which does not fit the need of rapidly mapping the area impacted by a natural disaster for subsequent rescue missions and damage assessment. In this study, a fully automatic method is proposed to address this issue by automating training sample generation for mapping the area impacted by a natural disaster. We used the 2011 tornado event in Joplin, Missouri, US, as an example application. The generated impacted-area map was evaluated both visually and quantitatively against ground truth data collected by the US Federal Emergency Management Agency (FEMA). The results show that the map matches the FEMA ground truth data well, with 86% of the buildings identified as major-damaged or destroyed by FEMA on the ground also detected by this fully automatic framework using very high resolution (VHR) satellite images.
Index Terms—change detection, disaster assessment, deep learning, OBIA, SIFT, RANSAC
1. INTRODUCTION
Extreme weather events have occurred more frequently in recent years in the US, according to the U.S. Climate Extremes Index (CEI), which tracks extreme weather events from 1910 to the latest year, 2019 [1]. At the same time, remote sensing images are becoming increasingly available as more sensors are deployed on spaceborne and airborne remote sensing platforms. This motivates us to develop a remote sensing-based method to monitor landscape change caused by extreme weather events. The most relevant category of techniques for this purpose is change detection, which aims to map the change of a land cover feature of interest given remote sensing images collected at two time steps (i.e., pre-event and post-event images). However, achieving scalable and robust change detection is still an unsolved problem, with challenges stemming from various sources such as spectral heterogeneity in space and time, the rarity of land-cover changes, the presence of data at multiple scales and from multiple sources, misregistration between temporal images, and the paucity of training data [2]. Recently, deep learning techniques have been utilized in several studies to develop novel change detection methods and have shown superior performance compared with existing methods [3, 4, 5]. The architecture of a deep learning model developed for change detection usually contains two separate but identical convolutional neural network (CNN) branches, one responsible for extracting features from the pre-event image and the other from the post-event image. The features corresponding to the pre- and post-event images are then fused into one single feature vector using operations such as feature subtraction [4], concatenation [5, 6], or Long Short-Term Memory (LSTM) [3]. Finally, the fused feature is input into a classifier to derive the change type. The model parameters of the feature extraction components and the classifier can be trained together in a supervised training procedure using ground truth change samples.
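To make this generic two-branch design concrete, below is a minimal PyTorch sketch of such an architecture with concatenation fusion; the layer sizes and depths are illustrative assumptions rather than the exact model of [5].

```python
import torch
import torch.nn as nn

class SiameseChangeNet(nn.Module):
    """Two weight-shared CNN branches whose features are concatenated."""

    def __init__(self, in_channels=4, n_classes=2):
        super().__init__()
        # Shared branch applied to both the pre- and post-event patch
        self.branch = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classifier operating on the fused feature vector
        self.classifier = nn.Linear(64 * 2, n_classes)

    def forward(self, pre_patch, post_patch):
        f_pre = self.branch(pre_patch)    # features from the pre-event patch
        f_post = self.branch(post_patch)  # features from the post-event patch
        fused = torch.cat([f_pre, f_post], dim=1)  # concatenation fusion [5, 6]
        return self.classifier(fused)     # logits over change types
```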
Even though these supervised change detection approaches have shown superior performance, one requirement for favorable performance is access to sufficient training samples. Preparing a relatively large number of training samples is laborious and time-consuming, which does not satisfy the need to rapidly map the area impacted by a natural disaster. In this study, we propose a method to automate the training sample generation procedure. With this proposed method, we are able to present a novel workflow that automates all the steps of change detection, including training sample extraction, model training, and change map generation, aiming to help humanitarian organizations and government agencies such as FEMA respond quickly to natural disasters, rapidly assess the damage, and deliver assistance in time to regions where restoration is urgently needed.
(a) Pre-tornado image
(b) Post-tornado image
Fig. 1. WorldView-2 satellite images for the pre- and post-tornado event
2. DATA SETS AND EXPERIMENT SETTINGS
We selected the 2011 Joplin tornado event as our case study. For this event, we selected one pre-event WorldView-2 image and one post-event WorldView-2 image to map the impacted area under the change detection concept. The pre-event image was collected on August 8th, 2009, and the post-event image covering Joplin was collected on May 29th, 2011, seven days after the tornado. FEMA maintains a historical damage assessment database [7] in which a damage category is assigned to each structure based on modeled or visual assessment. Four categories describe damage to structures; the corresponding numbers of assessed structures for this tornado event are: Destroyed: 2538, Major: 1781, Minor: 1772, and Affected: 2349. Figure 2 shows that the main impacted area, where the destroyed (yellow dots) and major-damaged (blue dots) structures are concentrated, lies mainly along the tornado path. According to the Damage Assessment Operations Manual [8], it is challenging even for humans to observe minor damage, such as damage to roofing material or cracks in exterior walls, in 60 cm resolution imagery. Therefore, we selected the data points categorized as Major and Destroyed in this database as our ground truth.
Fig. 2. FEMA historical damage assessment database for
2011 Joplin tornado event.
3. METHODS
Fig. 3. The proposed workflow for fully automatic mapping of the area impacted by a natural disaster. The training samples are automatically generated from (a) the unchanged vegetation area, (b) the unchanged sampling area generated by unchanged keypoints, and (c) the changed sampling area
Figure 3 presents the key steps of our proposed method. It starts by cropping the given pre- and post-event remote sensing images so that they share the same extent, which is necessary for change detection. Based on the cropped pre- and post-event images, sampling areas for extracting unchanged training samples (Figures 3a and 3b) and changed training samples (Figure 3c) were generated. The sampling area for extracting unchanged training samples consists of one part (Figure 3a) based on vegetation extraction and another (Figure 3b) that relies on scale-invariant feature transform (SIFT) extraction and filtering. The sampling area used to obtain changed training samples (Figure 3c) was derived from a density analysis of the unchanged keypoints. With the identified sampling areas, training samples were collected and used to train a supervised change detection model. Finally, the trained model was applied to the whole cropped image scenes to create the map of the impacted area. The remainder of this section provides details on each key component.
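As a sketch of the first step, the two scenes can be clipped to their common footprint with rasterio; the file paths here are hypothetical, and both scenes are assumed to share one coordinate reference system.

```python
import rasterio
from rasterio.windows import from_bounds

def crop_to_common_extent(pre_path, post_path):
    """Read both images clipped to the intersection of their footprints."""
    with rasterio.open(pre_path) as pre, rasterio.open(post_path) as post:
        left = max(pre.bounds.left, post.bounds.left)
        bottom = max(pre.bounds.bottom, post.bounds.bottom)
        right = min(pre.bounds.right, post.bounds.right)
        top = min(pre.bounds.top, post.bounds.top)
        pre_crop = pre.read(window=from_bounds(left, bottom, right, top, pre.transform))
        post_crop = post.read(window=from_bounds(left, bottom, right, top, post.transform))
    return pre_crop, post_crop  # arrays of shape (bands, rows, cols)
```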
3.1. Unchanged vegetation sampling area generation
In this study, we simply treat an area as unchanged if it is covered by vegetation in both the pre- and post-event images. The vegetation masks for the pre- and post-event images were generated by applying a threshold to the NDVI value. A threshold of 0.3 was adopted in this study, since the vegetation mask generated with this threshold covers most of the vegetation in the study area without introducing excessive false positives. The pre-event vegetation mask was overlaid on the post-event vegetation mask to generate the overlapping vegetation mask, identifying the pixels that fall within both masks.
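The following sketch shows this step with NumPy, assuming 4-band arrays in (blue, green, red, NIR) channel order; the band indices are an assumption about the input layout, not something stated in the paper.

```python
import numpy as np

def vegetation_mask(img, red=2, nir=3, threshold=0.3):
    """NDVI threshold mask; img has shape (bands, rows, cols)."""
    r = img[red].astype(np.float32)
    n = img[nir].astype(np.float32)
    ndvi = (n - r) / (n + r + 1e-6)  # epsilon avoids division by zero
    return ndvi > threshold

def overlapping_vegetation(pre_img, post_img):
    """Pixels vegetated at BOTH dates are treated as unchanged."""
    return vegetation_mask(pre_img) & vegetation_mask(post_img)
```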
3.2. Unchanged keypoint generation
In this study, the common procedure of using SIFT to extract unchanged keypoints was implemented: SIFT keypoint extraction, keypoint matching with Lowe's ratio test [9], and keypoint filtering by random sample consensus (RANSAC). A total of 6195 unchanged keypoints were identified in the study area, shown in Figure 4 with the post-event image as the background. Our visual inspection indicates that the majority of the unchanged keypoints were generated in areas free from the impact of the tornado, with very few falling into the impacted area. This small number of undesired keypoints is not a problem, since our workflow removes them later on.
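A hedged OpenCV sketch of this procedure is given below; the ratio threshold and the RANSAC reprojection tolerance are common defaults, not values reported in the paper.

```python
import cv2
import numpy as np

def unchanged_keypoints(pre_gray, post_gray, ratio=0.75):
    """SIFT matching + Lowe's ratio test [9] + RANSAC inlier filtering."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(pre_gray, None)   # pre-event, uint8 grayscale
    kp2, des2 = sift.detectAndCompute(post_gray, None)  # post-event, uint8 grayscale
    raw = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [p[0] for p in raw
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return dst[inliers.ravel() == 1]  # RANSAC inliers = unchanged keypoints
```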
Fig. 4. Unchanged keypoints generated by SIFT and filtered with the RANSAC algorithm
3.3. Changed sampling area generation
Based on the observation that the unchanged keypoints are substantially sparser in the impacted area than in the unaffected area, we developed a density-based algorithm to generate the sampling area used to extract the changed training samples. To that end, a point density map was generated by counting, for each pixel in the study area, the number of keypoints within a 500-meter radius. We used 500 meters under the assumption that a tornado can affect land features as far as 500 meters from the center line of its track, given that an average tornado has a radius of about 150 m [10]. The density map underlying the unchanged keypoints is shown in Figure 5. From the density map, a threshold of 15 was used to generate a binary mask as the preliminary changed sampling area, comprising the pixel locations with an unchanged-keypoint density of less than 15. Because the shape of vegetation varies with growth, wind, and seasonal change, unchanged keypoints tend to be sparse in vegetated areas. In addition, the edge effect decreases the point density for areas near the boundary of the study area. To remove the false changed sampling areas incurred by these two factors from the preliminary changed sampling area, a union mask was created combining the overlapping vegetation mask generated in Section 3.1 and an inward 500-meter-wide boundary buffer mask. The resulting mask was used to mask out the false changed sampling area and finalize the changed sampling area.
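A sketch of this density analysis follows; the pixel size (and hence the radii in pixels) is an assumption for illustration, and the brute-force density query would be replaced by a faster kernel in practice.

```python
import numpy as np
from scipy.spatial import cKDTree

def changed_sampling_area(keypoints_px, shape, veg_mask,
                          radius_px=1000, min_count=15, border_px=1000):
    """Pixels with fewer than min_count unchanged keypoints within radius_px,
    minus the vegetation mask and a border strip (edge effect). 1000 px
    corresponds to 500 m only under an assumed ~0.5 m pixel size."""
    tree = cKDTree(keypoints_px)  # (row, col) locations of unchanged keypoints
    rows, cols = np.indices(shape)
    grid = np.column_stack([rows.ravel(), cols.ravel()])
    # brute force for clarity; slow on full scenes
    counts = np.fromiter(
        (len(n) for n in tree.query_ball_point(grid, radius_px)),
        dtype=int, count=grid.shape[0]).reshape(shape)
    changed = counts < min_count           # preliminary changed sampling area
    border = np.ones(shape, dtype=bool)    # inward 500 m boundary buffer
    border[border_px:-border_px, border_px:-border_px] = False
    return changed & ~veg_mask & ~border   # final changed sampling area
```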
3.4. Unchanged sampling area generation
The unchanged vegetation sampling area generated in Section 3.1 was further refined by removing the pixels that fall into the changed sampling area. The keypoints derived in Section 3.2 were further narrowed by removing the points found in the changed sampling area or within a 500-meter-wide outward buffer of it. Then, for each of the remaining unchanged keypoints, a circle with a 50-meter radius was generated to expand the sampling area defined by the unchanged keypoints. This is based on the assumption that land features within 50 meters of an unchanged keypoint are likewise unaffected by the tornado. The final sampling areas for extracting the changed and unchanged training samples are shown in Figure 6.
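A raster sketch of this refinement is given below, using morphological dilation as a stand-in for the 500 m and 50 m buffers; the structuring-element radii again assume ~0.5 m pixels.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def disk(radius_px):
    """Circular structuring element of the given pixel radius."""
    y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    return x * x + y * y <= radius_px * radius_px

def unchanged_sampling_areas(keypoints_px, changed, veg_mask,
                             buffer_px=1000, circle_px=100):
    """Drop keypoints in the changed area or its 500 m outward buffer, then
    grow each survivor into a 50 m circle; also refine the vegetation area."""
    changed_buffered = binary_dilation(changed, structure=disk(buffer_px))
    seeds = np.zeros(changed.shape, dtype=bool)
    for r, c in keypoints_px:
        if not changed_buffered[int(r), int(c)]:
            seeds[int(r), int(c)] = True
    keypoint_area = binary_dilation(seeds, structure=disk(circle_px))
    return keypoint_area, veg_mask & ~changed
```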
Fig. 5. Unchanged keypoints overlaid onto the point density map. Brighter areas of the density map indicate higher point density
Fig. 6. Sampling areas (red represents the changed sampling area, purple the unchanged sampling area generated by unchanged keypoints, and green the unchanged vegetation area)
6908
Authorized licensed use limited to: Michigan Technological University. Downloaded on August 05,2021 at 22:13:33 UTC from IEEE Xplore. Restrictions apply.
3.5. Change map generation
In this study, the change detection model proposed in [5] was adopted. To make the model robust to image misalignment and computationally efficient, the authors proposed adopting image objects as the analysis unit instead of the pixels used by most existing methods. The objects were generated by segmenting each of the 40 pre-event image patches (2000 × 2000 pixels each) with the off-the-shelf image segmentation algorithm Quickshift.
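The segmentation step might look like the following scikit-image sketch; the Quickshift parameters are illustrative, not the paper's settings.

```python
from skimage.segmentation import quickshift

def segment_objects(pre_rgb):
    """Label map in which each label is one image object (analysis unit).
    pre_rgb is a (2000, 2000, 3) RGB patch from the pre-event image."""
    return quickshift(pre_rgb, kernel_size=5, max_dist=10, ratio=0.5)
```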
The generated objects were overlaid onto the sampling area image (Figure 6) to extract object-based training samples. While all the samples generated from Figures 3b and 3c were used in training, only a randomly selected 50% of the samples from Figure 3a were used, in order to mitigate training sample imbalance and shorten the training time. After training, the trained model was applied to all the objects in the study area to detect changes caused by the tornado. All the components in Figure 3 that operate on 2000 × 2000-pixel image patches were run in parallel over the 40 patches covering the whole study area. Starting from the image scenes and ending with the change map, the whole procedure ran without any human intervention and took 275 minutes in total.
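A sketch of the object-based sample extraction with the 50% vegetation subsampling is shown below; the integer codes of the sampling-area map are hypothetical labels introduced only for this illustration.

```python
import numpy as np

def object_training_samples(segments, sampling_map, keep_veg=0.5, seed=0):
    """Label each object by majority vote over the sampling-area map.
    Assumed codes: 0 = outside, 1 = changed, 2 = unchanged (keypoints),
    3 = unchanged (vegetation, subsampled to ~50%)."""
    rng = np.random.default_rng(seed)
    samples = []
    for obj_id in np.unique(segments):
        code = np.bincount(sampling_map[segments == obj_id]).argmax()
        if code == 0:
            continue  # object falls outside every sampling area
        if code == 3 and rng.random() > keep_veg:
            continue  # randomly keep ~50% of vegetation samples
        samples.append((obj_id, 1 if code == 1 else 0))  # 1 = changed
    return samples
```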
4. RESULTS AND CONCLUSION
Figure 7 presents the map of the impacted area automatically generated by our proposed method. Visual evaluation indicates that the map matches well with the path of the tornado through Joplin visible in the post-event image. To quantitatively evaluate the quality of the map, the ground truth data collected by FEMA was overlaid on top of it. As mentioned in Section 2, only the major-damaged and destroyed buildings identified by FEMA were used in the quantitative evaluation. Figure 8 shows that our map aligns well with the FEMA ground truth data, with 3722 of the 4319 assessed structures declared destroyed or major-damaged falling within the detected impacted area. We have shown that the proposed framework can successfully derive the impacted area without any labelling effort while delivering favorable results for this tornado case study. In the future, we will test our workflow on areas impacted by other types of natural disaster, such as hurricanes, earthquakes, fires, and flooding, to demonstrate the generalization capability of this work.
Fig. 7. Map of the impacted area
Fig. 8. Impacted area overlaid with FEMA ground truth data
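The detection rate above can be reproduced with a simple spatial join; the sketch below uses geopandas, and the file paths and attribute name are hypothetical.

```python
import geopandas as gpd

def detection_rate(fema_path, impact_path):
    """Fraction of Major/Destroyed FEMA points inside the detected area."""
    fema = gpd.read_file(fema_path)
    fema = fema[fema["DMG_LEVEL"].isin(["Major", "Destroyed"])]  # assumed column
    impact = gpd.read_file(impact_path).to_crs(fema.crs)
    hits = gpd.sjoin(fema, impact, predicate="within")
    return len(hits.index.unique()) / len(fema)  # 3722 / 4319 ≈ 0.862 here
```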
5. ACKNOWLEDGEMENTS
This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
6. REFERENCES
[1] NOAA, “U.S. Climate Extremes Index (CEI): Graph,” https://www.ncdc.noaa.gov/extremes/cei/graph/us/cei/01-12, 2020, [Online; accessed 10-January-2020].
[2] Anuj Karpatne, Zhe Jiang, Ranga Raju Vatsavai, Shashi Shekhar, and Vipin Kumar, “Monitoring land-cover changes: A machine-learning perspective,” IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 8–21, 2016.
[3] Lichao Mou, Lorenzo Bruzzone, and Xiao Xiang Zhu, “Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 2, pp. 924–935, 2018.
[4] Wuxia Zhang and Xiaoqiang Lu, “The spectral-spatial joint learning for change detection in multispectral imagery,” Remote Sensing, vol. 11, no. 3, pp. 240, 2019.
[5] Tao Liu, Lexie Yang, and Dalton D. Lunga, “Towards misregistration-tolerant change detection using deep learning techniques with object-based image analysis,” in Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2019, pp. 420–423.
[6] Bo Peng, Xinyi Liu, Zonglin Meng, and Qunying Huang, “Urban flood mapping with residual patch similarity learning,” in Proceedings of the 3rd ACM SIGSPATIAL International Workshop on AI for Geographic Knowledge Discovery, 2019, pp. 40–47.
[7] FEMA, Historical Damage Assessment Database, 2019 (accessed January 3, 2020).
[8] FEMA, Damage Assessment Operations Manual, 2016 (accessed January 3, 2020).
[9] David G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[10] Kevin Hile, The Handy Weather Answer Book.