The aftershock sequence of the Landers earthquake, 28 June 1992, m = 7.3. Catalog of Hauksson et al. (2012); 66,682 earthquakes with m ≥ 0.0. (a) Time-magnitude sequence. (b) Time-latitude sequence. (c) Number of events in nonoverlapping time windows of 7 days. The aftershock sequence disturbs the entire seismic field for decades, affecting its intensity and spatial distribution.


Source publication
Article
Full-text available
We introduce an algorithm for declustering earthquake catalogs based on the nearest‐neighbor analysis of seismicity. The algorithm discriminates between background and clustered events by random thinning that removes events according to a space‐varying threshold. The threshold is estimated using randomized‐reshuffled catalogs that are stationary, h...

Citations

... When the distance between two events is larger than the cutoff threshold (i.e., the interevent distance that separates the modes), the two events are assumed to be in different clusters. In practice, this algorithm only requires the user to estimate and input three parameters (d_f, w, and the cluster threshold, which is generally taken to be near zero for declustering purposes), two of which (d_f and w) do not strongly influence the output declustered catalog (Zaliapin and Ben-Zion 2020). In cases where t_ij = 0, that is, where two events occurred at the same point in time, the interevent distance is assumed to be infinite, indicating that the two events are neither parent nor offspring of each other. ...
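The thresholding step described above can be made concrete with a minimal Python sketch: each event's nearest-neighbor proximity is compared against a cutoff that separates the two modes of the log-proximity distribution. The function name, the example threshold value, and the handling of infinite proximities are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

# Minimal sketch (not the cited papers' code): classify events as
# "background" or "clustered" by comparing each event's nearest-neighbor
# proximity eta to a cutoff separating the two modes of the log10(eta)
# distribution. Threshold value below is illustrative only.

def classify_background(eta_nearest, log10_eta0=-5.0):
    """eta_nearest: nearest-neighbor proximity of each event (np.inf when no
    admissible parent exists). Returns True for background, False for clustered."""
    with np.errstate(divide="ignore"):
        log_eta = np.log10(eta_nearest)
    return log_eta > log10_eta0  # weak link to nearest parent -> background

# Example: the middle event is tightly linked to its nearest parent.
print(classify_background(np.array([1e-3, 1e-8, np.inf])))  # [ True False  True]
```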
Article
Full-text available
Declustering of earthquake catalogs, that is, determining dependent and independent events in an earthquake sequence, is a common feature of many seismological studies. While many different declustering algorithms exist, each has different performance and sensitivity characteristics. Here, we conduct a comparative analysis of the five most commonly used declustering algorithms: Gardner and Knopoff (1974), Uhrhammer (1986), Reasenberg (J Geophys Res: Solid Earth 90(B7):5479–5495, 1985), Zhuang et al. (J Am Stat Assoc 97(458):369–380, 2002), and Zaliapin et al. (Phys Rev Lett 101(1):4–7, 2008) in four different tectonic settings. Overall, we find that the Zaliapin et al. (Phys Rev Lett 101(1):4–7, 2008) algorithm effectively removes aftershock sequences, while simultaneously retaining the most information (i.e., the most events) in the output catalog and only slightly modifying statistical characteristics (i.e., the Gutenberg–Richter b-value). Both Gardner and Knopoff (1974) and Zhuang et al. (J Am Stat Assoc 97(458):369–380, 2002) also effectively remove aftershock sequences, though they remove significantly more events than the other algorithms. Uhrhammer (1986) also effectively removes aftershock sequences and removes fewer events than Gardner and Knopoff (1974) or Zhuang et al. (J Am Stat Assoc 97(458):369–380, 2002), except when large magnitude events are present. By contrast, Reasenberg (J Geophys Res: Solid Earth 90(B7):5479–5495, 1985) only effectively removed aftershocks in one of the test regions.
... Anderson and Zaliapin (2023) applied a seismicity model derived from the catalog of spatial and temporal criteria proposed by Petersen et al. (2020). The analysis consists of spatial and temporal declustering, which involves separating entities or variables grouped in space and time, as pointed out by Zhuang et al. (2002) and others (Gardner and Knopoff 1974; Savage 1972; Reasenberg 1985; Zaliapin and Ben-Zion 2020; Ogata 1998; Zhuang et al. 2002; Marsan and Lengliné 2008; Chu et al. 2011; Teng and Baker 2019; Llenos and Michael 2020; Mizrahi et al. 2021; Wang et al. 2022). It is followed by spatial smoothing, pattern highlighting, and noise reduction within the spatial data. ...
Article
Full-text available
This study uses complete earthquake catalog data and spatio-temporal analysis to construct a reliable model to forecast potential seismogenic earthquake or earthquake fault zones. It integrates models developed based on different researchers’ methods and earthquake catalogs from different periods. It constructs and compares models - Model-1, Model-2, and Model-3 - from the complete shallow earthquake catalog between 1963-1999 and 1963-2006. The δAIC is used to evaluate the reliability of the models, with Model-3 emerging as the most reliable in all tests in this study. The model is constructed based on the product of the normalized model of the combined smooth seismicity model of a relatively small to moderate complete earthquake catalog data with a relatively uniform background model and weighted by the normalized seismic moment rate derived from the surface strain rate. It is suggested that a more extended observation period and the use of a complete, albeit relatively small-to-moderate, earthquake catalog lead to a more reliable and accurate model. Implementation of the Probabilistic Seismic Hazard Function (PSHF) window using the b-value of a 5-year window length with a 1-year sliding window prior to a significant seismic event proved successful, and the methodology demonstrates the importance of the temporal "b-value" in conjunction with the reliable seismicity rate and spatial probabilistic earthquake forecasting models in earthquake forecasting. The results showed large changes in the PSHF prior to giant and large earthquakes, and a correlation between decreased b-value time window length and earthquake magnitude. The results have implications for the implementation of seismic mitigation measures.
... In order to classify events (either earthquakes or avalanches) into main events and aftershocks, the method proposed by Baiesi and Paczuski [52,53] is commonly employed. This method is based on the definition of the proximity η_ij in the space-time-magnitude domain from an event j to a previous (in time) event i [52,54,55]. Assuming that events are ordered in time, t_1 < t_2 < t_3 · · ·, the proximity is defined as ...
... In addition, we consider the measure of clustering proposed within this framework in Ref. [54]. This formalism is based on the rescaled time T_j and rescaled space R_j [54,55], defined as ...
... One mode corresponds to background events, and is compatible with a random (Poisson) distribution of times and positions of events. The other mode, on the other hand, corresponds to clustered events occurring closer in space and time [55]. ...
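To make the quantities in these excerpts concrete, the following Python sketch computes the space-time-magnitude proximity and its rescaled-time/rescaled-distance factorization. The parameter values (b = 1.0, d_f = 1.6, p = q = 0.5) and function names are illustrative assumptions following common choices in this literature, not code from the cited works.

```python
import numpy as np

# Sketch of the nearest-neighbor proximity of Baiesi & Paczuski / Zaliapin:
# eta_ij = t_ij * (r_ij)^d_f * 10^(-b * m_i) for t_ij > 0 (else infinity),
# and its factorization into rescaled time T and rescaled distance R.
# Parameter values below are common choices; adapt them to the catalog at hand.

B, DF, Q = 1.0, 1.6, 0.5  # GR b-value, fractal dimension, time weight
P = 1.0 - Q               # space weight, so that p + q = 1

def proximity(t_ij, r_ij, m_i):
    """Space-time-magnitude proximity from parent i to later event j.
    t_ij: inter-event time, r_ij: epicentral distance, m_i: magnitude of i."""
    if t_ij <= 0:
        return np.inf  # simultaneous or acausal pair: not parent/offspring
    return t_ij * r_ij**DF * 10.0**(-B * m_i)

def rescaled_components(t_ij, r_ij, m_i):
    """Rescaled time T and distance R, with log10(eta) = log10(T) + log10(R)."""
    T = t_ij * 10.0**(-Q * B * m_i)
    R = r_ij**DF * 10.0**(-P * B * m_i)
    return T, R

# Example: an event 0.01 yr and 5 km away from an m = 6 parent.
eta = proximity(0.01, 5.0, 6.0)
T, R = rescaled_components(0.01, 5.0, 6.0)
print(eta, T * R)  # the two numbers agree by construction
```

The bimodality described above then appears in the joint (log T, log R) plane: background pairs populate the large-proximity mode, clustered pairs the small-proximity mode.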
Preprint
Full-text available
Groups of animals are observed to transmit information across them with propagating waves or avalanches of behaviour. These behavioural cascades often display scale-free signatures in their duration and size, ranging from activating a single individual to the whole group, signatures that are commonly related to critical phenomena from statistical physics. A particular example is given by turning avalanches, where large turns in the direction of motion of individuals are propagated. Employing experimental data of schooling fish, we examine characteristics of spontaneous turning avalanches and their dependence on schools of different numbers of individuals. We report self-similar properties in the avalanche duration, size and inter-event time distributions, as well as in the avalanche shape. We argue that turning avalanches are a result of collective decision-making processes to select a new direction to move. They start with the group having low speed and decreasing coordination, but once a direction is chosen, speed increases and coordination is restored. We report relevant boundary effects given by wall interactions and by individuals at the border of the group. We conclude by investigating spatial and temporal correlations using the concept of aftershocks from seismology. Contrary to earthquakes, turning avalanches display statistically significant clustered events only below a given time scale and follow an Omori law for aftershocks with a faster decay rate exponent than that observed in real earthquakes.
... The techniques to identify aftershocks have included space-time window-based approaches [7][8][9], stochastic declustering of earthquakes modeled as a point process [10], evolving random graphs [11], machine learning techniques such as diffusion maps [12] and neural networks [13], and a nearest-neighbor (NN) rescaled space-time-magnitude metric [14,15]. This last method has seen wider adoption in seismology [4,[16][17][18][19][20]. The NN metric combines the phenomenological Gutenberg-Richter (GR) law with the fact that earthquake epicenters have a fractal distribution and is an estimate of the number of expected events within a certain radius, time, and magnitude range [14]. ...
Article
Full-text available
Earthquakes are complex physical processes driven by stick-slip motion on a sliding fault. After the main event, a series of aftershocks is usually observed. The latter are loosely defined as earthquakes that follow a parent event and occur within a prescribed space-time window. In seismology, it is currently not possible to establish an unambiguous causal relation between events, and the nearest-neighbor metric is commonly used to distinguish aftershocks from independent events. Here, we employ a soft-glass model as a proxy for earthquake dynamics, previously shown to be able to correctly reproduce the phenomenology of earthquakes, together with a technique that allows us to clearly separate independent and triggered events. We show that aftershocks in our plastic event catalog follow Omori's law with slopes depending on the triggering mode, an observation possibly useful for seismology. Finally, we confirm that the nearest-neighbor metric is indeed effective in separating independent events from aftershocks.
... method and it was first proposed by Baiesi and Paczuski [46] and considerably expanded by Zaliapin et al. [47], Zaliapin and Ben-Zion [48,49]. It was also adapted to perform declustering of earthquake catalogs [50]. The NND method was also applied to analyze the seismicity in the Western Canada Sedimentary Basin due to a significant increase in induced seismicity observed in the last decade [51][52][53]. ...
... To study the clustering aspects of natural or induced seismicity one can define a rescaled distance between pairs of events in a hyperspace, which is spanned by the spatial, temporal, and magnitude domains. In this respect, the problem of earthquake classification and clustering is addressed in the works by Baiesi and Paczuski [46], Zaliapin et al. [47], Zaliapin and Ben-Zion [48][49][50] and is based on minimizing the rescaled relative distance between events and is known as the nearest-neighbor distance (NND) method. ...
Article
Full-text available
Microseismicity is expected in potash mining due to the associated rock-mass response. This phenomenon is known, but not fully understood. To assess the safety and efficiency of mining operations, producers must quantitatively discern between normal and abnormal seismic activity. In this work, statistical aspects and clustering of microseismicity from a Saskatchewan, Canada, potash mine are analyzed and quantified. Specifically, the frequency-magnitude statistics display a rich behavior that deviates from the standard Gutenberg-Richter scaling for small magnitudes. To model the magnitude distribution, we consider two additional models, i.e., the tapered Pareto distribution and a mixture of the tapered Pareto and Pareto distributions to fit the bi-modal catalog data. To study the clustering aspects of the observed microseismicity, the nearest-neighbor distance (NND) method is applied. This allowed the identification of potential cluster characteristics in time, space, and magnitude domains. The implemented modeling approaches and obtained results will be used to further advance strategies and protocols for the safe and efficient operation of potash mines.
... Davis and Cliff [13] presented a unified relationship between earthquakes and the parameters of the space-time window, while Chen et al. [8] determined aftershocks by considering the length of the fault and the time window specified by Console et al. [11]. In recent years, some other methods have also been proposed, such as stochastic declustering based on point processes [14], nearest-neighbor distances of events in the space-time-energy domain [15][16][17], narrow spatial aftershock zones [18], and nonparametric network-tree aftershock identification [19]. Most of these methods are based on temporal and spatial relations and can quickly remove aftershocks from the entire earthquake catalog. ...
Article
Full-text available
The existence of aftershocks in an earthquake sequence can impact the analysis of the mainshock. In this study, we present a method for deleting an aftershock sequence based on the spatial relationship between earthquakes and faults. This method improves the performance of space window selection in the classical K-K method by eliminating aftershocks with an ideal fault buffer zone. The determination of fault buffer zones is based on a trial-and-error analysis of 69,714 earthquake records from the China Seismic Network Center (CENC) collected between 1980 and 2020. We selected 20 typical big earthquakes (ML 7.0-8.0 or ∼Ms 6.6-8.0; for earthquakes above magnitude Ms 7 or ML 7.2, ML is approximately equal to Ms) as the mainshocks to establish the fault buffer zones. We also propose an empirical formula to determine the distance of the fault buffer zone by counting the aftershock deletion effect at different buffer distances. Compared with the classical K-K method, our method considers the correlation between the spatial distribution of aftershocks and faults, eliminates earthquake groups that are not related to the mainshock, greatly reduces the spatial range of aftershocks, improves the performance of deleting aftershocks of different magnitudes, and provides a new rule and reference for aftershock deletion.
... where ∆r_ij is the epicentral distance between the two earthquakes and δ = 1.6 is the typical fractal dimension of earthquake epicenters. Eq. (2) originates from reasonable assumptions about the underlying organization of ancestors in space, time and magnitude [27] and is the basis of one of the most efficient techniques for the discrimination of ancestors from offspring [28][29][30]. We remark that the Heaviside Θ function in Eq. (2) comes from causality, according to which an earthquake i cannot be triggered by a future earthquake j. ...
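The cited Eq. (2) is not reproduced in this excerpt. Purely as a hedged reconstruction consistent with the quantities named (∆r_ij, δ, the b-value, the magnitude m_i of the earlier event, and the causality requirement), proximities in this family of methods usually take the Baiesi-Paczuski form below; the symbol n_ij, the assignment of m_i to the earlier event, and the handling of acausal pairs are assumptions, and the cited equation may differ in notation or prefactors.

```latex
% Hedged reconstruction of the usual Baiesi--Paczuski-type proximity.
% The cited work enforces the same causality restriction (t_j > t_i) with a
% Heaviside factor \Theta(t_j - t_i); here acausal pairs are simply assigned
% an infinite proximity (never parent/offspring).
\[
  n_{ij} \;=\;
  \begin{cases}
    (t_j - t_i)\,(\Delta r_{ij})^{\delta}\,10^{-b m_i}, & t_j > t_i,\\[2pt]
    \infty, & \text{otherwise},
  \end{cases}
  \qquad \delta \simeq 1.6 .
\]
```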
Article
Clustering represents a fundamental procedure to provide users with meaningful insights from an original data set. The quality of the resulting clusters is largely dependent on the correct estimation of their number, K∗, which must be provided as an input parameter in many clustering algorithms. Only very few techniques provide an automatic detection of K∗, and these are usually based on cluster validity indexes which are expensive with regard to computation time. Here, we present a new algorithm which allows one to obtain an accurate estimate of K∗ without partitioning the data into the different clusters. This makes the algorithm particularly efficient in handling large-scale data sets from the perspective of both time and space complexity. The algorithm, indeed, highlights the block structure which is implicitly present in the similarity matrix, and associates K∗ with the number of blocks in the matrix. We test the algorithm on synthetic data sets with or without a hierarchical organization of elements. We explore a wide range of K∗ and show the effectiveness of the proposed algorithm to identify K∗, even more accurately than existing methods based on standard internal validity indexes, with a huge advantage in terms of computation time and memory storage. We also discuss the application of the novel algorithm to the declustering of instrumental earthquake catalogs, a procedure aimed at identifying the level of background seismic activity useful for seismic hazard assessment.
... The analysis of the strong spatiotemporal clustering of earthquakes is one of the most studied subjects in statistical seismology (e.g., [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] and several other references). ...
... This method is based on the NN distance between pairs of earthquakes in a specific space-time-magnitude domain [10,28,40,44,45], using the asymmetric distance η_ij between earthquakes i and j defined as: ...
... where log η_ij = log T_ij + log R_ij and q + p = 1. Each event j is associated with its unique parent i by the following NN proximity [10]: ...
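The excerpt breaks off before the definitions of T_ij and R_ij. For completeness, the standard rescaled-time and rescaled-distance expressions used in the nearest-neighbor literature (e.g., Zaliapin and Ben-Zion) are written out below; b denotes the Gutenberg-Richter slope, d_f the fractal dimension of epicenters, and m_i the magnitude of the earlier event. The truncated passage presumably uses this form, though its exact parameter values are not given here.

```latex
% Standard nearest-neighbor proximity and its factorization (assumed form).
\[
  \eta_{ij} =
  \begin{cases}
    t_{ij}\,(r_{ij})^{d_f}\,10^{-b m_i}, & t_{ij} > 0,\\
    \infty, & t_{ij} \le 0,
  \end{cases}
  \qquad
  T_{ij} = t_{ij}\,10^{-q b m_i}, \quad
  R_{ij} = (r_{ij})^{d_f}\,10^{-p b m_i},
\]
% so that eta_ij = T_ij * R_ij, log eta_ij = log T_ij + log R_ij, p + q = 1.
% Each event j is then linked to the earlier event i that minimizes eta_ij:
\[
  \eta_j \;=\; \min_{i < j} \eta_{ij}.
\]
```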
Article
Full-text available
The main purpose of this paper was to, for the first time, analyse the spatiotemporal features of the background seismicity of Northern Algeria and its vicinity, as identified by different declustering methods (specifically: the Gardner and Knopoff, Gruenthal, Uhrhammer, Reasenberg, Nearest Neighbour, and Stochastic Declustering methods). Each declustering method identifies a different declustered catalogue, namely a different subset of the earthquake catalogue that represents the background seismicity, which is usually expected to be a realisation of a homogeneous Poisson process over time, though not necessarily in space. In this study, a statistical analysis was performed to assess whether the background seismicity identified by each declustering method has the spatiotemporal properties typical of such a Poisson process. The main statistical tools of the analysis were the coefficient of variation, the Allan factor, the Markov-modulated Poisson process (also named the switched Poisson process with multiple states), the Morisita index, and the L-function. The results obtained for Northern Algeria showed that, in all cases, temporal correlation and spatial clustering were reduced, but not totally eliminated, in the declustered catalogues, especially at long time scales. We found that the Stochastic Declustering and Gruenthal methods were the most successful in reducing time correlation. For each declustered catalogue, the switched Poisson process with multiple states outperformed the uniform Poisson model, and it was selected as the best model to describe the background seismicity in time. Moreover, for all declustered catalogues, the spatially inhomogeneous Poisson process did not properly fit the spatial distribution of earthquake epicentres. Hence, the assumption of a stationary and homogeneous Poisson process, widely used in seismic hazard assessment, was not met by the investigated catalogue, independently of the adopted declustering method. Accounting for the spatiotemporal features of the background seismicity identified in this study is, therefore, a key element towards effective seismic hazard assessment and earthquake forecasting in Algeria and the surrounding area.
... Finally, we draw conclusions and present future research directions in section "Conclusion". Zaliapin et al. (2008; 2013a, b; 2016; 2020) have shown the effectiveness of using nearest neighbor (single-parent) earthquake selection and the correlation metric described below. On the other hand, Yamagishi et al. (2020; 2021a, b) used single-parent earthquake selection and the mean shift algorithm to experimentally demonstrate the limitations of this approach. ...
... According to Zaliapin et al. (2020), for the sake of discussing our obtained results, we define the attractive domain of a given target earthquake i as L(i) = {j | i = arg min{n(i′, j) | i ≤ i′ < j}}, i.e., L(i) is the set of child nodes for earthquake i when excluding any earthquake or event that occurred before it. Then we compare the numbers of first child nodes and elements in the attractive domain, i.e., |J_1(i)| and |L(i)|, where |J_1(i)| ≤ |L(i)| follows from these definitions. ...
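A short Python sketch may help make the comparison between |J_1(i)| and |L(i)| concrete: J_1(i) collects later events whose nearest parent among all earlier events is i, while L(i) restricts the candidate parents to events from i onward. The proximity matrix, function names, and toy numbers below are hypothetical; in practice the proximity would be the space-time-magnitude metric discussed above.

```python
import numpy as np

# Sketch comparing single-parent nearest-neighbor selection (J_1) with the
# "attractive domain" L(i) described above. eta[i, j] is a placeholder
# proximity from earlier event i to later event j (np.inf where undefined).

def nearest_parent(eta, j, start=0):
    """Index of the event in [start, j) with minimal proximity to event j."""
    return start + int(np.argmin(eta[start:j, j]))

def first_children(eta, i):
    """J_1(i): later events whose nearest parent among *all* earlier events is i."""
    n = eta.shape[0]
    return [j for j in range(i + 1, n) if nearest_parent(eta, j) == i]

def attractive_domain(eta, i):
    """L(i): later events whose nearest parent among events >= i is i."""
    n = eta.shape[0]
    return [j for j in range(i + 1, n) if nearest_parent(eta, j, start=i) == i]

# Toy proximity matrix for 4 events (only the upper triangle is meaningful).
eta = np.full((4, 4), np.inf)
eta[0, 1], eta[0, 2], eta[0, 3] = 0.1, 5.0, 0.1
eta[1, 2], eta[1, 3] = 0.2, 0.5
eta[2, 3] = 3.0
print(first_children(eta, 1), attractive_domain(eta, 1))  # [2] [2, 3]: |J_1| <= |L|
```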
Article
Full-text available
In this paper, we address the problem of earthquake declustering, and propose a k-nearest neighbors approach based on the selection of multiple-parent nodes with respect to each of the given earthquakes, which can be regarded as a natural extension of the conventional correlation-metric method based on the selection of a single-parent node. Based on this approach, we develop a centrality measure that exploits link weights assigned by a logarithmic-distance scheme and a technique for individually visualizing each set of child nodes with respect to given target earthquakes. For experimental evaluation, we used an earthquake catalog covering Japan and selected 24 earthquakes that caused considerable damage or casualties. We first show that our proposed centrality measure using a logarithmic-distance scheme can rank these 24 major earthquakes higher than four link-weighting schemes (i.e., uniform, magnitude, inverse-distance, and normalized-inverse-distance weighting) and conventional single-parent selection. We then show that, unlike the conventional approach of simultaneously visualizing all the events in the catalog, our proposed technique can produce a naturally interpretable classification result for these 24 major earthquakes by individually visualizing each set of the first to k-th child nodes with different colored markers plotted in directly interpretable spatial and temporal metrics. As a consequence, we confirm that our approach based on multiple-parent selection is vital and promising.
... These fore- and aftershocks must be removed from the collection using a declustering technique to ensure a Poissonian distribution of earthquake occurrences. This study employs gridded seismicity zone analysis, as performed by Leptokaropoulos et al. [30] and Zaliapin and Ben-Zion [31], and density-based clustering from Cesca [32]. There were 172 events in the collection, declustered from 520 occurrences, with 5 events of MW 6.0, 28 events of MW 5.0, and 138 events of less than MW 4.0. ...
Article
Full-text available
This paper presents the significance of a seismic hazard curve plot as a dynamic parameter in estimating earthquake-resistant structures. Various cases of structural damage in Malaysia are due to underestimated earthquake loadings, since most buildings were designed without seismic loads. Sabah is classified as having low to moderate seismic activity due to a few active fault lines. Background point, area, and line sources are the three tectonic features that have impacted Sabah. Data on earthquakes from 1900 to 2021 have been collected by a number of earthquake data centers. The seismicity is based on a list of historical seismicity in the area, which stretches from latitudes 4°S to 8°N and longitudes 115°E to 120°E. The goal of this research is to develop a seismic hazard curve based on a conventional probabilistic seismic hazard analysis, examined for the maximum peak ground acceleration at 10% probability of exceedance as published in MSEN1998-1:2015. This study extends the analysis to 5% and 2% probability of exceedance combined with the seismic hazard curve, using Ranau as a case study. To calculate the expected ground motion recurrence, such as peak ground acceleration at the site, earthquake recurrence models were combined with selected ground motion models. A logic tree structure was used to combine simple quantities such as maximum magnitudes and the chosen ground motion models to describe epistemic uncertainty. The result demonstrates that peak ground acceleration values at the bedrock were estimated to be 0.16, 0.21, and 0.28 g from the total seismic hazard curve at 10%, 5%, and 2% PE in a 50-year return period, respectively. The seismic hazard at the Ranau site depends on the seismicity of the region and the consequences of past failures. Thus, the results can be used as a basis for benchmarking design or evaluation decisions and for designing remedial measures for Sabah constructions to minimize structural failure.