Fig. 8. Closer look at the collapsed-buildings map. The left column corresponds to region (i) and the right column to region (ii); both regions are shown in Fig. 3b.


Source publication
Article
Full-text available
Remote sensing satellite imagery plays an important role in estimating collapsed buildings in the aftermath of a large-scale disaster. However, some previous methodologies are restricted to using specific radar sensors. Other methods, such as machine learning algorithms, require training data, which are extremely difficult to obtain immediately af...

Context in source publication

Context 1
... separates by damage states. Buildings with damage states 0-4 are well classified as non-collapsed by the four fragility curves. Similarly, the main portion of buildings with a damage state of 6 is classified as collapsed. The main discrepancies are observed in buildings with DS5, the majority of which are classified as non-collapsed. Fig. 8 shows a closer look at the collapsed building map from the surveyed data and that estimated by the proposed method. A good agreement between these data is ...

Citations

... It is believed that these other sources of information should be used together with remote sensing data to identify changes. In [41], the use of fragility functions, together with remote sensing data, was proposed to automatically classify buildings as either collapsed or non-collapsed. However, the proposed method was limited to only a two-dimensional feature space. ...
... The naive approach of using one fragility function is not harmful in cases in which there is a predominant building type in the study area. For instance, the fragility function of wooden buildings was used to successfully identify collapsed buildings during the 2011 Tohoku-Oki earthquake-tsunami [41,42]. For the current case study, Figure 4 shows the fragility function for low-rise concrete buildings with infill walls, together with the fragility functions for high-rise concrete buildings with infill walls, low-rise concrete buildings without infill walls, and high-rise concrete buildings without infill walls. ...
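To make the role of the fragility functions mentioned above concrete, the following is a minimal sketch of how a lognormal fragility curve could be evaluated and used to flag buildings as collapsed. The medians and log-standard deviations are illustrative placeholders, not values from Koshimura et al., Suppasri et al., or the cited case study.

```python
# Minimal sketch (not the cited papers' code): a fragility function modeled as a
# lognormal CDF of the demand parameter, here tsunami inundation depth in meters.
# The parameters below are illustrative placeholders only.
import numpy as np
from scipy.stats import lognorm


def collapse_probability(depth_m, median_m, beta):
    """P(collapse | inundation depth) for a lognormal fragility curve."""
    # scipy's lognorm uses shape = beta (log-std) and scale = median
    return lognorm.cdf(depth_m, s=beta, scale=median_m)


# Hypothetical fragility parameters for two building classes
fragility = {
    "wood":              {"median_m": 2.0, "beta": 0.6},
    "low_rise_concrete": {"median_m": 4.5, "beta": 0.7},
}

depths = np.array([0.5, 2.0, 6.0])  # demand at three buildings (m)
for name, p in fragility.items():
    probs = collapse_probability(depths, p["median_m"], p["beta"])
    labels = np.where(probs >= 0.5, "collapsed", "non-collapsed")
    print(name, np.round(probs, 2), labels)
```

Using a single predominant-type curve, as in the Tohoku-Oki case, corresponds to keeping only one entry of the `fragility` dictionary for all buildings.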
Article
Full-text available
Damage identification soon after a large-magnitude earthquake is a major problem for early disaster response activities. The faster the damaged areas are identified, the higher the survival chances of inhabitants. Current methods for damage identification are based on the application of artificial intelligence techniques using remote sensing data. Such methods require a large amount of high-quality labeled data for calibration and/or fine-tuning processes, which are expensive in the aftermath of large-scale disasters. In this paper, we propose a novel semi-supervised classification approach for identifying urban changes induced by an earthquake between images recorded at different times. We integrate information from a small set of labeled data with information from ground motion and fragility functions computed on large unlabeled data. A relevant consideration is that ground motion and fragility functions can be computed in real time. The urban changes induced by the 2023 Turkey earthquake sequence are reported as an evaluation of the proposed method. The method was applied to the interferometric coherence computed from C-band synthetic aperture radar images from Sentinel-1. We use only 39 samples labeled as changed and 9000 unlabeled samples. The results show that our method is able to identify changes between images associated with the effects of an earthquake with an accuracy of about 81%. We conclude that the proposed method can rapidly identify affected areas in the aftermath of a large-magnitude earthquake.
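One simple way to picture the combination of a small labeled set with fragility-based information on unlabeled data, as described in the abstract above, is to turn ground motion and a fragility function into pseudo-labels and train a classifier on coherence features. This is only a hedged interpretation, not the authors' formulation; the fragility parameters and feature values are synthetic.

```python
# Hedged sketch of the idea described above (assumptions, not the authors' code):
# combine a few labeled samples with pseudo-labels derived from ground motion and
# a fragility function, then train a classifier on InSAR coherence features.
import numpy as np
from scipy.stats import lognorm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: coherence-change feature and peak ground acceleration (g)
n_unlabeled, n_labeled = 9000, 39
X_unlab = rng.uniform(0.0, 1.0, size=(n_unlabeled, 1))      # coherence drop
pga_unlab = rng.uniform(0.05, 1.0, size=n_unlabeled)        # demand per building
X_lab = rng.uniform(0.5, 1.0, size=(n_labeled, 1))
y_lab = np.ones(n_labeled, dtype=int)                        # labeled as "changed"

# Hypothetical fragility curve: P(damage | PGA), lognormal with median 0.6 g
p_damage = lognorm.cdf(pga_unlab, s=0.7, scale=0.6)
pseudo_y = (p_damage >= 0.5).astype(int)                     # pseudo-labels

X = np.vstack([X_lab, X_unlab])
y = np.concatenate([y_lab, pseudo_y])
clf = LogisticRegression().fit(X, y)
print("P(changed) for coherence drops 0.2 / 0.8:",
      clf.predict_proba([[0.2], [0.8]])[:, 1].round(2))
```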
... Meanwhile, recent advances in machine learning (ML), with high-performance open-source libraries for training and evaluating models, have led to an increasing body of work that aims to use learning algorithms to predict damage following earthquakes. Broadly, these efforts can be classified into the following four categories: (i) ML models using building attributes and geophysical features alone (e.g., Mangalathu et al., 2020; Roeslin et al., 2020), (ii) ML models using optical EO data (e.g., Ji et al., 2018, 2019, 2020; Lee et al., 2020; Xu et al., 2019; Tilon et al., 2020; see Nex et al., 2019, for an extensive review), (iii) ML models using SAR data alone (e.g., Wieland et al., 2016; Bai et al., 2017; Stephenson et al., 2021), and (iv) ML models using SAR EO data in conjunction with building attributes and geophysical features (e.g., Moya et al., 2018a, b; Xie et al., 2020). Roeslin et al. (2020) evaluated the performance of various ML classification algorithms to classify building damage for the 2017 Puebla, Mexico, earthquake, based on input features including structural attributes of the buildings and seismic demand in terms of maximum spectral acceleration. ...
Article
Full-text available
This article presents a framework for semi-automated building damage assessment due to earthquakes from remote-sensing data and other supplementary datasets, while also leveraging recent advances in machine-learning algorithms. The framework integrates high-resolution building inventory data with earthquake ground shaking intensity maps and surface-level changes detected by comparing pre- and post-event InSAR (interferometric synthetic aperture radar) images. We demonstrate the use of ensemble models in a machine-learning approach to classify the damage state of buildings in the area affected by an earthquake. Both multi-class and binary damage classification are attempted for four recent earthquakes, and we compare the predicted damage labels with ground truth damage grade labels reported in field surveys. For three out of the four earthquakes studied, the model is able to identify over 50 % or nearly half of the damaged buildings successfully when using binary classification. Multi-class damage grade classification using InSAR data has rarely been attempted previously, and the case studies presented in this report represent one of the first such attempts using InSAR data.
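As an illustration of the ensemble-model step described in the abstract above, the following sketch trains a random-forest classifier on per-building features combining an InSAR-derived change measure, shaking intensity, and a building attribute. The feature names and data are placeholders, not the framework's actual inputs.

```python
# Illustrative sketch only (synthetic data; not the framework's pipeline):
# binary damage classification with an ensemble model using InSAR-derived change,
# ground shaking intensity, and a building attribute.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),    # InSAR coherence/intensity change
    rng.uniform(4, 9, n),    # shaking intensity (e.g., MMI)
    rng.integers(1, 10, n),  # number of storeys
])
# Synthetic ground-truth rule, only to make the example runnable
y = ((X[:, 0] > 0.5) & (X[:, 1] > 6.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), digits=2))
```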
... With the expansion of recording networks of earthquakes (K-NET and KiK-net in Japan) and tsunamis (e.g., S-net in Japan), real-time recorded shaking and wave information could be employed. Moreover, the recent availability of satellite imagery and semi-automated image processing, combined with machine learning techniques, could be exploited to develop multi-hazard rapid impact assessment tools (Moya et al., 2018; Naito et al., 2020). The fusion of remote sensing technology and advanced data analytics is a promising research field for post-disaster hazard monitoring and risk management (Voigt et al., 2016). ...
Article
Full-text available
Probabilistic risk models for natural hazards, or natural catastrophe models, are indispensable tools for forecasting and quantifying the impacts of cascading and compounding earthquake-tsunami hazards. Their applications facilitate improved disaster risk mitigation and management. Uncertainties associated with forecasted multi-hazard impacts can be substantial, and practitioners and policymakers need guidance on implementing disaster risk reduction actions at all levels (local, regional, national, and international). In communicating such broad ranges of possible consequences with stakeholders, disaster scenarios need to be carefully selected and presented. This article reviews the state-of-the-art of earthquake, tsunami, and earthquake-tsunami catastrophe modelling and discusses future perspectives for earthquake-tsunami risk assessments.
... Recently, the integration of in-place sensors and remote sensors to identify damage to infrastructure has been proposed. Tsunami inundation depth, fragility curves, and satellite radar images have been used to identify collapsed buildings [4,5]. Furthermore, collapsed buildings during the 2016 Kumamoto earthquake were identified using digital elevation models and strong motion parameters [6]. ...
Article
Full-text available
In recent years, the development of seismic networks in Metropolitan Lima, administered by public and private institutions, has received special attention since it makes possible the quantification of different seismic indices during the occurrence of earthquakes. Therefore, the integration of the information both from acceleration sensors and site conditions from microzoning studies allows the estimation of the possible extent of the damage in quasi-real time. In this study, the implementation of a system to evaluate seismic parameters in a uniform grid of 250 × 250 m² resolution is reported. In this regard, peak ground acceleration (PGA) values from the available time-history records are computed and reduced to the engineering bedrock level. Then, by means of the interpolation technique called Ordinary Kriging, in which each seismic station is considered as a random variable and the correlation between a pair of such random variables depends only on the distance between their coordinates, the acceleration distribution is evaluated. Amplification factors are applied in order to finally bring the PGA up to the surface level. A quantitative evaluation of the accuracy of our results is performed using two recent earthquakes with moment magnitude larger than 5: the 2019 Mw 8.0 Lagunas earthquake and the 2021 Mw 6.0 Mala earthquake. The results have reproduced to some extent the seismic response of the diverse geomorphological deposits in Metropolitan Lima and suggest the inclusion of a larger number of strong motion stations in order to reduce the estimation errors.
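The ordinary-kriging interpolation mentioned in the abstract above can be written compactly as a linear system with a Lagrange multiplier. The following is a minimal NumPy sketch under an assumed exponential semivariogram; the variogram parameters and station values are illustrative, not those of the reported system.

```python
# Minimal ordinary-kriging sketch in NumPy (assumed exponential semivariogram;
# parameters and data are illustrative placeholders).
import numpy as np


def exp_variogram(h, nugget=0.0, sill=1.0, rng_km=10.0):
    """Exponential semivariogram model."""
    return nugget + sill * (1.0 - np.exp(-h / rng_km))


def ordinary_kriging(xy_obs, z_obs, xy_new, **vparams):
    n = len(z_obs)
    # Pairwise distances between stations, and station-to-target distances
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    d_new = np.linalg.norm(xy_obs[:, None, :] - xy_new[None, :, :], axis=-1)

    # Kriging system with a Lagrange multiplier for the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d_obs, **vparams)
    A[-1, -1] = 0.0
    b = np.ones((n + 1, d_new.shape[1]))
    b[:n, :] = exp_variogram(d_new, **vparams)

    weights = np.linalg.solve(A, b)[:n, :]   # drop the multiplier row
    return weights.T @ z_obs


# Toy example: PGA (g) recorded at four stations, interpolated to two grid cells
stations = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])  # km
pga = np.array([0.10, 0.25, 0.15, 0.30])
grid = np.array([[2.5, 2.5], [1.0, 4.0]])
print(ordinary_kriging(stations, pga, grid).round(3))
```

At the cell equidistant from all four stations the weights reduce to 0.25 each, so the estimate is simply the mean PGA, which is a quick sanity check on the system.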
... Earthquakes can cause substantial damage in a short time. It is of great significance to carry out emergency rescue by capturing the distribution of a disaster in real time after an earthquake [1][2][3]. In recent years, remote sensing images have gradually become the main data for disaster detection in the process of emergency rescue due to their advantages of wide coverage, large amounts of information and short processing times. ...
Article
Damaged building detection from remote sensing imagery helps to quickly assess losses after an earthquake. In recent years, deep learning technology has become a favorable tool for remote sensing image information detection. Based on the characteristics of damaged buildings in remote sensing images, in this paper, a framework for damaged building detection that considers heterogeneity characteristics is proposed. First, a local-global context attention module is proposed to improve the feature detection ability of the network, which can extract the features of damaged buildings from different directions and effectively aggregate global and local features. In addition, the module takes the correlation between feature maps at different scales into account while extracting information. Second, a feature fusion module with self-attention is established to replace the simple connection between the encoding and decoding processes, which improves the detail feature recovery ability of the network during the upsampling process. Finally, to fully aggregate semantic and detail features at different scales, a multibranch auxiliary classifier is established by adding two separate branches in the prediction stage. The effectiveness of the proposed approach is verified based on data from the 2010 Haiti earthquake, and comparisons with 3 object-oriented methods and 16 existing state-of-the-art deep learning models are performed. An IoU increase of 0.03%-7.39% is achieved using the proposed approach compared with these deep learning models.
... We show that the fusion brings great benefits, such as overcoming the training data problem and mapping actual damage in near-real time. The next chapter summarizes our first attempt at the fusion [31]. Chapter 3 reports the subsequent upgrade of the method [32]. ...
... The results achieved overall accuracies of 81.4% and 84.9% using the fragility functions provided by Koshimura et al. [33] and Suppasri et al. [34], respectively. Further details of the proposed method can be found in [31]. ...
... Schematic illustration of the flowchart of the proposed method. Source: [31]. ...
Conference Paper
Full-text available
Disaster risk analysis involves a set of disaster events, their consequences, and their probabilities of occurrence over a defined period. The main, or traditional, role of disaster risk analysis is to serve as a guide for decisions about safety. Whether infrastructure requires retrofitting or whether having insurance is cost-effective are examples of such decisions. An important stage in estimating the disaster risk is the estimation of damage as a function of the demand (i.e., ground motion for earthquakes or inundation depth for tsunamis). These damage functions also play the central role in a recent trend. With technological progress in instrumentation, numerical modeling, and communication networks, it is possible to estimate a demand map in real or near-real time in the aftermath of a large-scale disaster. Then, with the aid of the damage functions, an estimate of the damage map can be computed. However, with damage functions, we can only compute the probability that an asset is damaged. The best that can be done with such information is to report the expected number of damaged assets within a given area. It is not possible to indicate which assets were damaged. A more realistic estimation of damage due to an arbitrary disaster can be obtained from remote sensing data, such as satellite images. With remote sensing data recorded after a disaster (images recorded before a disaster are usually available as well), the real effect of the disaster can be detected. However, the procedure is not straightforward. Usually, a set of features is computed from remote sensing data for each asset. Such features are used as input to a discriminant function that predicts whether the asset is damaged or not. The discriminant function is best calibrated using machine learning methods, but these methods require training data. Training data are very difficult to collect right after a disaster because all efforts are focused on rescue and relief activities. In this paper, we summarize the studies we have performed on the fusion of these two disciplines to implement a fully automatic damage mapping procedure.
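The distinction drawn above, that damage functions yield only per-asset probabilities and hence only an expected count of damaged assets per area, can be made explicit with a short sketch. The demand values and fragility parameters below are illustrative assumptions.

```python
# Sketch of the aggregation described above (illustrative parameters only):
# a fragility function gives P(damage | demand) per asset; the most it supports
# is the expected number of damaged assets within an area, not which assets.
import numpy as np
from scipy.stats import lognorm

# Hypothetical demand (e.g., PGA in g) at five buildings in one grid cell
demand = np.array([0.15, 0.30, 0.45, 0.60, 0.90])

# Hypothetical lognormal fragility curve: median 0.5 g, log-std 0.6
p_damage = lognorm.cdf(demand, s=0.6, scale=0.5)

expected_damaged = p_damage.sum()
print("per-building P(damage):", p_damage.round(2))
print("expected damaged buildings in this area:", round(expected_damaged, 1))
```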
... The identification of damage to urban areas in the aftermath of a large-scale disaster is an important task in emergency response and recovery activities [1][2][3][4][5][6][7][8]. Satellite remote sensing, because of its wide coverage, is probably the only way to inspect the complete area affected by a large disaster. ...
... Another solution involves avoiding the use of training data by using different constraints. In [25,26], damage functions developed for disaster risk analysis and numerical models of the disaster were used to calibrate a machine learning classifier. ...
Article
Full-text available
When flooding occurs, Synthetic Aperture Radar (SAR) imagery is often used to identify flood extent and the affected buildings for two reasons: (i) for early disaster response, such as rescue operations, and (ii) for flood risk analysis. Furthermore, the application of machine learning has been valuable for the identification of damaged buildings. However, the performance of machine learning depends on the number and quality of training data, which are scarce in the aftermath of a large-scale disaster. To address this issue, we propose the use of fragmentary but reliable news media photographs taken at the time of a disaster to detect the whole extent of the flooded buildings. As an experimental test, the flood that occurred in the town of Mabi, Japan, in 2018 is used. Five hand-engineered features were extracted from SAR images acquired before and after the disaster. The training data were collected based on news photos. The release dates of the photographs were considered to assess the potential role of news information as a source of training data. Then, a discriminant function was calibrated using the training data and the support vector machine method. We found that news information taken within 24 h of a disaster can classify flooded and non-flooded buildings with about 80% accuracy. The results were also compared with a standard unsupervised learning method and confirmed that training data generated from news media photographs improves the accuracy obtained from unsupervised classification methods. We also provide a discussion on the potential role of news media as a source of reliable information for training data and other activities associated with early disaster response.
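The calibration step described in the abstract above, an SVM trained on a handful of buildings labeled from news photographs using hand-engineered SAR features, could look roughly like the following. All feature values are synthetic stand-ins; this is not the study's pipeline.

```python
# Hedged sketch (synthetic data; not the study's pipeline): calibrate an SVM on a
# small set of buildings labeled from news photographs, using hand-engineered
# features computed from pre-/post-event SAR images.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Five SAR-derived features per building (e.g., backscatter difference,
# correlation, coherence change) -- synthetic stand-ins here.
def sar_features(n, flooded):
    base = rng.normal(0.0, 1.0, size=(n, 5))
    return base + (1.5 if flooded else 0.0)

# Small training set labeled from news photos released within ~24 h
X_train = np.vstack([sar_features(20, True), sar_features(20, False)])
y_train = np.array([1] * 20 + [0] * 20)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Apply the calibrated discriminant function to all buildings in the SAR scene
X_all = np.vstack([sar_features(500, True), sar_features(500, False)])
print("fraction classified as flooded:", clf.predict(X_all).mean())
```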
... Recently, it was investigated whether this aggregated damage information can replace training samples for the establishment of a damage map with a higher spatial resolution (building units instead of uniform spatial grids) from remote sensing data. A simple experiment from the 2011 Tohoku-Oki earthquake-tsunami is reported in [43], in which a linear discriminant function is calibrated over a two-dimensional feature space via exhaustive search. The calibration consists of finding a linear discriminant function that yields a damage scenario consistent with the aggregates computed from the demand parameter and the fragility function. ...
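An exhaustive search of the kind just described can be sketched as a grid over line orientations and offsets in the two-dimensional feature space, keeping the discriminant whose predicted number of collapsed buildings best matches the fragility-based aggregate. This is an illustrative simplification under synthetic data, not the code of [43], which matches aggregates rather than a single global count.

```python
# Illustrative sketch of the calibration idea (assumptions, not the code of [43]):
# search over linear discriminants in a 2-D feature space and keep the one whose
# predicted number of collapsed buildings best matches a fragility-based aggregate.
import numpy as np

rng = np.random.default_rng(3)

# Two remote-sensing features per building (synthetic stand-ins)
X = rng.normal(0.0, 1.0, size=(5000, 2))

# Target aggregate: expected number of collapsed buildings in the area,
# e.g. the sum of per-building collapse probabilities from a fragility curve
target_count = 900

best, best_err = None, np.inf
for theta in np.linspace(0.0, np.pi, 180, endpoint=False):    # line orientation
    w = np.array([np.cos(theta), np.sin(theta)])
    proj = X @ w
    for c in np.quantile(proj, np.linspace(0.01, 0.99, 99)):  # line offset
        count = int((proj > c).sum())
        err = abs(count - target_count)
        if err < best_err:
            best, best_err = (w, c), err

w, c = best
print("discriminant: %.2f*x1 + %.2f*x2 > %.2f" % (w[0], w[1], c))
print("predicted collapsed:", int((X @ w > c).sum()), "target:", target_count)
```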
... Surprisingly, most of the samples that were labeled as collapsed were predicted as non-changed. This discrepancy has been discussed in our previous studies [14], [43], [44]. The classification by MLIT was conducted in the context of the building's structural system. ...
Article
Full-text available
Previous applications of machine learning in remote sensing for the identification of damaged buildings in the aftermath of a large-scale disaster have been successful. However, standard methods do not consider the complexity and costs of compiling a training data set after a large-scale disaster. In this article, we study disaster events in which the intensity can be modeled via numerical simulation and/or instrumentation. For such cases, two fully automatic procedures for the detection of severely damaged buildings are introduced. The fundamental assumption is that samples that are located in areas with low disaster intensity mainly represent nondamaged buildings. Furthermore, areas with moderate to strong disaster intensities likely contain damaged and nondamaged buildings. Under this assumption, a procedure that is based on the automatic selection of training samples for learning and calibrating the standard support vector machine classifier is utilized. The second procedure is based on the use of two regularization parameters to define the support vectors. These frameworks avoid the collection of labeled building samples via field surveys and/or visual inspection of optical images, which requires a significant amount of time. The performance of the proposed method is evaluated via application to three real cases: the 2011 Tohoku-Oki earthquake-tsunami, the 2016 Kumamoto earthquake, and the 2018 Okayama floods. The resulting accuracy ranges between 0.85 and 0.89, which shows that the results can be used for the rapid identification of affected buildings.
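The fundamental assumption stated in the abstract above can be turned into automatic training labels in a straightforward way: treat buildings in low-intensity areas as non-damaged samples and high-change buildings in strongly shaken areas as damaged candidates, then calibrate a standard SVM. The thresholds and data below are illustrative assumptions, not the article's exact procedure.

```python
# Minimal sketch of the assumption described above (not the article's exact
# procedure): automatic training-sample selection from disaster intensity,
# followed by calibration of a standard SVM classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 3000
intensity = rng.uniform(3.0, 7.0, n)          # e.g., instrumental seismic intensity
change = rng.uniform(0.0, 1.0, n)             # SAR change feature per building
X = np.column_stack([change, intensity])

# Automatic training labels, no field survey required
neg = intensity < 4.0                          # low intensity -> non-damaged
pos = (intensity > 6.0) & (change > 0.7)       # strong shaking + large change
X_train = np.vstack([X[neg], X[pos]])
y_train = np.concatenate([np.zeros(neg.sum()), np.ones(pos.sum())])

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print("buildings flagged as severely damaged:", int(clf.predict(X).sum()))
```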
... A high-dimensional feature space requires, however, a more complex discriminant function, and, as stated previously, its calibration requires training data. For the case of tsunami-induced floods, near-real-time frameworks that use numerical tsunami models, statistical damage functions, and remote sensing data have been proposed in [32,33]. An additional, and more intuitive, solution to the lack of training data is the use of data collected from previous disasters. ...
Article
Full-text available
Applications of machine learning on remote sensing data appear to be endless. Its use in damage identification for early response in the aftermath of a large-scale disaster has a specific issue: the collection of training data right after a disaster is costly, time-consuming, and often impossible. This study analyzes a possible solution to this issue: the collection of training data from past disaster events to calibrate a discriminant function. The identification of affected areas in a current disaster can then be performed in near real time. This paper reports the performance of a supervised machine learning classifier that learns from training data collected from the 2018 heavy rainfall in Okayama Prefecture, Japan, and identifies floods due to Typhoon Hagibis in eastern Japan on 12 October 2019. The results show a moderate agreement with flood maps provided by local governments and public institutions, and support the assumption that previous disaster information can be used to identify a current disaster in near-real time.