Article

Self-organized Critical Phenomena

... Two years after their initial publication, Bak and Tang wrote a follow-up paper entitled 'Earthquakes as a Self-Organized Critical Phenomenon' (Bak and Tang, 1989). But what does their somewhat abstract model have to do with earthquakes? ...
... This is because we are in effect introducing a spatial correlation to the system causing it to produce a periodic or sinusoidal signature in its energy fluctuations that would not occur under local driving (defined below) conditions (e.g. Bak and Tang, 1989). ...
... In their seminal paper, they demonstrated through the use of a 'sand pile' numerical model (the BTW model from here on) how a system with 'extended degrees of spatial freedom' can naturally evolve to a critical state with power-law statistics. This model was then proposed as 'a general mechanism leading to the power law distribution of earthquakes' (Bak and Tang, 1989). The BTW model was considered by Bak and Tang (1989) to have the essence of the Burridge and Knopoff 'spring block' model (Burridge and Knopoff, 1967) but greatly simplified in its dynamics. ...
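The sandpile rules summarized in the snippet above are simple enough to sketch directly. The following minimal Python simulation (lattice size, number of grains, and random seed are arbitrary choices for illustration, not values from the papers cited) drives a 2D lattice one grain at a time with open boundaries and records avalanche sizes, which near the critical state follow power-law statistics:

```python
import random

random.seed(1)
N = 30               # lattice size (arbitrary choice)
Z_CRIT = 4           # BTW toppling threshold

grid = [[0] * N for _ in range(N)]

def relax(i, j):
    """Topple until stable; grains crossing the open boundary are lost (dissipation)."""
    size = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        while grid[x][y] >= Z_CRIT:
            grid[x][y] -= 4           # one toppling: pass a grain to each neighbour
            size += 1
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < N and 0 <= ny < N:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= Z_CRIT:
                        stack.append((nx, ny))
    return size

sizes = []
for _ in range(20000):                # slow, local driving: one grain at a time
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    if grid[i][j] >= Z_CRIT:
        sizes.append(relax(i, j))

# small avalanches vastly outnumber large ones, the hallmark of SOC
print(len(sizes), max(sizes))
```

Counting topplings per avalanche and binning the resulting sizes reproduces the power-law statistics that motivated the earthquake analogy.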
Thesis
Energy released from the Earth’s crust in the form of earthquakes commonly follows a power-law gamma-type probability distribution. This spontaneous organisation is in apparent contradiction to the second law of thermodynamics, which states that a system should naturally evolve to a state of maximum disorder or entropy. However, developments in the field of modern thermodynamics suggest that some systems can undergo organisation locally, at the expense of increasing disorganisation (or entropy) globally through a process of entropy production. The primary aim of this thesis is to investigate self-organisation in the Earth’s seismogenic lithosphere as a driven, far-from-equilibrium, self-organising ‘dissipative structure’ in a very near critical steady-state, and the underlying general mechanisms involved. The secondary aim is to test in more detail the applicability of the Bak, Tang and Wiesenfeld (BTW) model of Self-Organised Criticality (SOC) in describing Earth’s seismicity. This is done by: 1. mathematical derivation of analytical solutions for system energy and entropy using the tools of equilibrium statistical mechanics; 2. the study of conservative and non-conservative versions of the BTW numerical model; and 3. analysis of temporal and spatial properties of earthquake data from the Harvard Centroid Moment Tensor catalogue and the Global Heat Flow Database. The modified gamma distribution predicts analytically that entropy S is related to the energy probability distribution scaling exponent B and the expectation of the logarithm of seismic energy ⟨ln E⟩ in the form of the gamma entropy equation S ∼ B⟨ln E⟩. This solution is confirmed for both numerical model results and real earthquake data. Phase diagrams of B vs. ⟨ln E⟩ suggest that the universality in B need not be maintained for a system to remain critical provided there is a corresponding change in ⟨ln E⟩ and S.
The power-law systems examined are different from equilibrium systems since the critical points do not occur at global maximum entropy. For the dissipative BTW model at a steady-state, the externally radiated energy follows out-of-equilibrium power-law gamma-type statistics, but the internal energy has two characteristics that are indicative of equilibrium systems: a Gaussian-type energy probability distribution and a Brownian noise power-spectrum for the internal energy fluctuations. This suggests an observer dependency in assessing criticality. The internal and external entropies calculated for the model are negatively correlated, suggesting that driven systems self-organise at the expense of increasing entropy globally through a process of dissipation. A power-law dependency of mean radiated energy ⟨E⟩ on dissipation 1−α is confirmed for a locally driven dissipative system in the form ⟨E⟩ ∼ (1−α)^(−0.975). The BTW model shows spatial heterogeneity whilst maintaining universality, in contradiction to previous assumptions. The quantitative analysis of real data reveals that earthquakes are more predictable spatially than temporally. Regionalisation using the Flinn-Engdahl classification shows that mid-ocean ridges are more organised (lower entropy) than subduction zones. A regional study of three different scaling exponents suggests that universality in earthquake scaling is violated, in contradiction to the original model of SOC. A model of self-organised sub-criticality (SOSC) is proposed as an alternative model for Earth seismicity. Overall, the results suggest that the tools of equilibrium thermodynamics can be applied to a steady-state far-from-equilibrium system such as the Earth’s seismogenic lithosphere, and that the resulting self-organisation occurs at the expense of maximising dissipation and hence entropy production.
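The gamma entropy relation S ∼ B⟨ln E⟩ quoted in the abstract above can be made concrete with a quick Monte Carlo check. For a power-law gamma density p(E) ∝ E^(−B) e^(−E/θ), the differential entropy decomposes exactly as S = B⟨ln E⟩ + const, where the constant collects the mean-energy and normalisation terms. The sketch below (B = 0.4 and θ = 10 are hypothetical values chosen for illustration, not results from the thesis) verifies the decomposition numerically:

```python
import math
import random

random.seed(0)
B, theta = 0.4, 10.0          # hypothetical scaling exponent and scale parameter
k = 1.0 - B                   # gamma shape implied by p(E) ~ E^(-B) exp(-E/theta)

E = [random.gammavariate(k, theta) for _ in range(100_000)]

def ln_p(e):
    """Log-density of the gamma distribution with shape k and scale theta."""
    return (k - 1.0) * math.log(e) - e / theta - math.lgamma(k) - k * math.log(theta)

S_mc = -sum(ln_p(e) for e in E) / len(E)              # Monte Carlo entropy -<ln p>
mean_lnE = sum(math.log(e) for e in E) / len(E)
const = (sum(E) / len(E)) / theta + math.lgamma(k) + k * math.log(theta)

# entropy splits into B<lnE> plus an additive constant, term by term
print(S_mc, B * mean_lnE + const)
```

The agreement is exact up to floating-point rounding because the two expressions are algebraic rearrangements of the same sample averages; the point of the sketch is that the B⟨ln E⟩ term carries the scaling-exponent dependence of the entropy.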
... They proposed that CGMN-like states in nature are generated by self-tuning or self-organization. The terms "self-organized criticality," and "the border of chaos," are analogous [Bak and Tang, 1989;Bak and Chen, 1990]. The rubric "autopoiesis," from living systems theories, has been proposed as a metaphor for all such phenomena [Jantsch, 1980]. ...
... We have limited the analysis to steady state models, and these models appear to correspond well to concepts of critical self-organization [cf. Bak and Tang, 1989;Bak and Chen, 1990]. Cumulative tremor duration below about 5 km depth appears to be a good monitor of magma distributions. ...
... Models of critical self-organization recently applied by Bak and others to geophysical problems [e.g., Bak and Tang, 1989;Bak and Chen, 1990] resemble the model described here. There is no inconsistency between concepts of criticality and attractors of low dimensions. ...
Article
Full-text available
A hierarchical model of magma transport in Hawaii is developed from the seismic records of deep (30-60 km) and intermediate-depth (5-15 km) harmonic tremor between January 1, 1962, and December 31, 1983. A tremor model of magma transport is developed from mass balances of percolation that are proportional to tremor durations. It gives reasonable magma fractions and residence times for a vertical drift velocity of 4 km yr⁻¹ and yields patterns of intermittency that are in accord with singularity analyses of the 22-year time series record. It is suggested that spatiotemporal universality extends from small to large scales in Hawaiian and other magmatic systems. The apparent universal scaling of frequencies may be more than 15 decades in time (1 s to about 60 m.y.) and 10 decades in length (0.1 mm to 10³ km). -from Authors
... Copyright 2007 by the American Physical Society. [Smalley et al., 1985; Kagan, 1989; Sornette and Sornette, 1989; Bak and Tang, 1989; Sornette and Sornette, 1999a; Kagan, 1992; Olami et al., 1992; Kagan, 1994; Sornette and Sammis, 1995; Saleur et al., 1996a; Bak, 1996; Jensen, 1998; Nature Debates, 1999; Hergarten, 2002; Sornette, 2002, 2004; Kagan, 2006]. The current debate on recurrence statistics is the latest tack in the evolving string of arguments. ...
... The abundance of power laws in earthquake seismology ignited a keen interest amongst statistical physicists to explain the spatio-temporal organization of earthquakes by the theory of critical phenomena and, in particular, by SOC. Sornette and Sornette [1989] and Bak and Tang [1989] first suggested that SOC may be relevant for earthquakes. The appeal of placing the study of earthquakes in the framework of critical phenomena may perhaps be summarized as follows. ...
Article
Full-text available
Usually, earthquakes develop after a strong main event. In the literature these are defined as aftershocks and play a crucial role in the development of a seismic sequence: as a result, they should not be neglected. In this paper we analyzed several aftershock sequences triggered after a major earthquake, with the aim of identifying, classifying and predicting the most energetic aftershocks. We developed some simple graphic and numeric methods that allowed us to analyze the development of the most energetic aftershock sequences and estimate their magnitude value. In particular, using a hierarchisation process applied to the aftershock sequence, we identified primary aftershocks of various orders triggered by the mainshock and secondary aftershocks of various orders triggered by the previous shock. In addition, by a graphic method, it was possible to estimate their magnitude. Through the study of the delay time and distance between the most energetic aftershocks and the mainshock, we found that the aftershocks occur within twenty-four hours after the mainshock and their distance remains within a range of hundreds of kilometers. To define the decay rate of the aftershock sequence, we developed a sequence strength indicator (ISF), which uses the magnitude value and the daily number of seismic events. Moreover, in order to obtain additional information on the developmental state of the aftershock sequence and on the magnitude values that may occur in the future, we used Fibonacci levels. The analyses conducted on different aftershock sequences, resulting from strong earthquakes that occurred in various areas of the world over the last forty years, confirm the validity of our approach, which can be useful for a short- to medium-term evaluation of the aftershock sequence as well as for a proper assessment of their magnitude values.
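The decay of aftershock rates discussed in the abstract above is conventionally described by the modified Omori law, n(t) = K/(t + c)^p. A brief sketch (the values of K, c and p below are illustrative assumptions, not parameters estimated in the paper) shows how heavily the expected counts concentrate in the first day after the mainshock:

```python
# Modified Omori law n(t) = K / (t + c)^p for the aftershock rate.
# K, c, p are hypothetical values chosen only to illustrate the decay.
def omori_count(t1, t2, K=100.0, c=0.05, p=1.1):
    """Expected number of aftershocks between times t1 and t2 (days), p != 1."""
    q = 1.0 - p
    # closed-form integral of K/(t+c)^p from t1 to t2
    return K / q * ((t2 + c) ** q - (t1 + c) ** q)

first_day = omori_count(0, 1)
first_month = omori_count(0, 30)
# over half of the month's expected aftershocks fall in the first day
print(round(first_day / first_month, 2))
```

With these parameters the first day alone accounts for more than half of the expected aftershocks of the first month, consistent with the observation that the most energetic aftershocks tend to occur within twenty-four hours of the mainshock.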
... Our system has extended degrees of freedom and we assume that it may evolve towards a self-organized critical state. It has been shown (Bak, Tang and Wiesenfeld, 1988) that such systems are characterized by spatial and temporal power-law scaling behaviour. The frequency spectrum is 1/f noise with a power-law spectrum S(f) ∝ f^(−φ). ...
... 6.4 Bak et al. (1988) had established that for self-organized criticality, the power-law behaviour applies to the spatiotemporal phenomenon. Thus, for the spatial case, because the distribution function of slide sizes D(s) is related to the slide sizes (s) by: ...
Article
Self-organized criticality (SOC) is hereby proposed as a possible physical basis for ex-plaining observations in the temperature-dependence of the rates of biological membrane-associated events. The biomembrane undergoes a reversible, cooperative, thermotropic gel-to-liquid crystalline phase transition which is broad, and involves lateral phase sep-aration. The lateral phase separated (rather than the totally gel-, or the totally liquid crystalline-) membrane state has been observed to be the state in which vital membrane functions are facilitated. The membrane in this unique state is viewed, for our purposes here, as a dynamical, extended dissipative system with spatial and temporal degrees of freedom, exhibiting power law behaviour, typical of the self-organized critical state. Ex-periments are suggested for verifying this hypothesis.
... Instabilities take place as soon as a local threshold is reached, and a cooperative phenomenon develops, spreading over all scales. According to [1] and [2], the crust of the earth has been set up in a highly complex self-organized critical state in which the criticality manifests itself in many different geological phenomena with power-law fractal distribution and dynamics. As a specific example, seismicity patterns appear to be complex and chaotic, yet there is order in the complexity. ...
... Abundant evidence exists that the clustering of regional seismicity appears to have a fractal-like structure [3, 4]. Multifractal analysis has provided us with a deep insight into the chaotic nature of distributions and geometry associated with earthquake clustering phenomena [2, 5–16]. The well known Gutenberg-Richter formula [17] also implies a power law relation between the energy release and the frequency of occurrence of earthquakes. ...
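The Gutenberg-Richter formula mentioned in the snippet above, log10 N(≥M) = a − bM, implies that magnitudes above a completeness threshold M_c follow an exponential law, which yields the classical Aki maximum-likelihood estimate of the b-value. A short sketch on synthetic magnitudes (b = 1 and M_c = 2 are assumed, illustrative values):

```python
import math
import random

random.seed(2)
b_true, M_c = 1.0, 2.0            # assumed b-value and completeness magnitude
beta = b_true * math.log(10)      # rate of the equivalent exponential law

# magnitudes above M_c follow an exponential law, the continuous form of G-R
mags = [M_c + random.expovariate(beta) for _ in range(100_000)]

# Aki maximum-likelihood estimate: b = log10(e) / (<M> - M_c)
b_est = math.log10(math.e) / (sum(mags) / len(mags) - M_c)
print(round(b_est, 2))
```

The estimator recovers the assumed b-value to within sampling error; on real catalogues the same one-line formula is applied to magnitudes above the completeness threshold.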
Article
Full-text available
This paper shows how multifractal analysis can be used to characterize the spatial distribution of epicenters in the Zagros and Alborz-Kopeh Dagh regions of Iran. The main multifractal characteristics D_q, f(α_q), τ_q and α_q, and a set of multifractal parameters defined from the shape of the f(α_q)-spectrum, including the width of the f(α_q)-spectrum, non-uniformity factor, combined parameter, coefficients of steepness and asymmetry, and the vertex of the τ(q)-spectrum, have been determined. The results show that, in comparison with the Alborz-Kopeh Dagh region, the epicentral distribution in the Zagros has a weak multifractal (i.e. less heterogeneous) structure. Although the f(α_q) spectra of both regions were skewed, the directions of skewing were different. The two distinct multifractal distribution patterns in these regions reflect different underlying seismotectonic processes related to earthquake activity. The diffused seismicity with fewer larger earthquakes relative to smaller ones in the Zagros and the relatively low level of discontinuous seismicity with sporadic occurrence of strong destructive events in the Alborz-Kopeh Dagh confirm our findings. The results further suggest that multifractal analysis provides us with a deep insight into the complex nature of the distribution and geometry associated with earthquake-related phenomena that could not be discovered by any other means.
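The generalized dimensions D_q behind analyses like the one above can be estimated with a simple box-counting partition sum: cover the epicentre map with boxes of size ε, form Σ p_i^q over the box probabilities, and take the log-log slope across scales. The sketch below uses an entirely synthetic point set (a uniform background plus one tight Gaussian cluster, an assumption for illustration) and recovers D_0 > D_2, the signature of a heterogeneous, multifractal distribution:

```python
import math
import random

random.seed(3)

# synthetic "epicentres": uniform background plus a tight cluster (hypothetical data)
pts = [(random.random(), random.random()) for _ in range(2000)]
pts += [(0.5 + 0.02 * random.gauss(0, 1), 0.5 + 0.02 * random.gauss(0, 1))
        for _ in range(2000)]

def partition_sum(eps, q):
    """Sum of box probabilities p_i^q for boxes of side eps."""
    boxes = {}
    for x, y in pts:
        key = (int(x / eps), int(y / eps))
        boxes[key] = boxes.get(key, 0) + 1
    n = len(pts)
    return sum((c / n) ** q for c in boxes.values())

def D(q, scales=(1 / 8, 1 / 16, 1 / 32, 1 / 64)):
    """Generalized dimension: slope of log(partition sum) vs log(eps), over q-1."""
    xs = [math.log(e) for e in scales]
    ys = [math.log(partition_sum(e, q)) for e in scales]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope / (q - 1)

print(D(0), D(2))   # capacity dimension vs correlation dimension
```

For a homogeneous (monofractal) set the two values coincide; the spread between D_0 and D_2, like the width of the f(α_q) spectrum, measures the heterogeneity of the clustering.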
Article
Full-text available
The randomness of the occurrences of earthquakes, together with our limited ability to detect and measure earthquakes, combines to present challenges for the testing of scientific hypotheses about earthquakes. This dissertation examines implications of these challenges and presents methods for addressing them. In contrast to physical systems characterized by a dominating length scale, the relevant scales of earthquakes span many orders of magnitude. Our limited observations of the smallest of these scales, in the form of small, undetected earthquakes, severely impact our ability to faithfully model observable seismicity because, as we show, small earthquakes contribute significantly to observed seismicity. Using the Epidemic-Type Aftershock Sequence model, a time-dependent model of triggered seismicity, we introduce a formalism that distinguishes between the detection threshold and a smaller size above which earthquakes may trigger others, and place constraints on its size. We derive equations that relate observed clustering parameters obtained from different thresholds. We show that parameters are biased and discuss the failure of the maximum likelihood estimator. As an example of the power of simulation-based null hypothesis testing, we investigate a recent claim of a scaling law in the distribution of the spatial distances between successive earthquakes. Motivated by the debate on the relevance of critical phenomena to earthquakes and by the suggested contradiction of aftershock zone scaling, we analyze other regions and generate synthetic data using a realistic model that explicitly includes mainshock rupture length scales. We show that the proposed law does not hold. Earthquake catalogs contain a wide variety of uncertainties. We quantify magnitude uncertainties and find they are more broadly distributed than a Gaussian distribution.
We show their severe impact on short term forecasts by proving that the deviations of a noisy forecast from an exact forecast are power-law distributed in the tail. We further demonstrate that currently proposed consistency tests to evaluate forecasts reject noisy forecasts more often than expected at a given confidence limit. This is due to the assumed Poisson likelihood, which should be replaced by a model-specified distribution. Finally, we propose the framework of data assimilation as a vehicle for systematically accounting for uncertainties. We review the concept of sequential Bayesian data assimilation, the purpose of which is to estimate as best as possible a desired quantity using both the noisy observations and a short-term model forecast. Sequential Monte Carlo methods are identified as a set of flexible simulation-based techniques for estimating posterior distributions. We implement a particle filter for a lognormal renewal process with noisy occurrence times and present a Bayesian solution for estimating noisy marks in a general temporal point process.
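The Epidemic-Type Aftershock Sequence model used in the dissertation above is defined through its conditional intensity: a constant background rate plus an Omori-law contribution from every past event, scaled exponentially by magnitude. A minimal sketch of that intensity function (all parameter values and the two past events are hypothetical, chosen only to illustrate the structure):

```python
def etas_intensity(t, events, mu=0.1, K=0.02, alpha=1.0, c=0.01, p=1.2, M_c=3.0):
    """Conditional rate at time t, given past (time, magnitude) events."""
    rate = mu                                  # background seismicity
    for t_i, M_i in events:
        if t_i < t:
            # each past shock adds an Omori-law term, scaled by its magnitude
            rate += K * 10 ** (alpha * (M_i - M_c)) / (t - t_i + c) ** p
    return rate

# hypothetical mainshock (M 6.0 at t=0) and one aftershock (M 4.0 at t=0.5)
past = [(0.0, 6.0), (0.5, 4.0)]
print(etas_intensity(1.0, past))
```

The intensity decays towards the background rate μ as elapsed time grows, which is why the choice of detection threshold M_c and of the smallest triggering magnitude, the focus of the dissertation's formalism, directly changes the fitted clustering parameters.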
... These avalanches can appear when either the effects of single, small-scale events accumulate or when many interactions sum in a complex fashion. For a self-organized system, therefore, the distribution of avalanches spans all sizes and both small and large effects will follow a power-law scaling (Bak et al., 1988a): ...
... Various size effects, boundary conditions, grain shape and delivery methods have received critical discussion (Bak and Chen, 1991; Nagel, 1992). The principal findings centre on the influence of finite size effects of actual experiments (Kadanoff et al., 1989; Held et al., 1990; Evesque, 1991) when compared with the periodic or infinite boundary conditions used in numerical simulations (Bak et al., 1988a). Using Biosphere 2 records for atmospheric carbon dioxide, an attractive scientific goal therefore is to test self-organized criticality as a theory governing closed ecosystems. ...
Article
Full-text available
A little understood question in climate and ecological modelling is when a system appropriately can be considered in statistical equilibrium or quasi-steady state. The answer bears on a host of central issues, including the ability of small perturbations to cause large catastrophes, the constant drift of unsettled systems, and the maximum amount of environmental control theoretically possible. Using Biosphere 2 records, the behaviour of carbon dioxide fluctuations was tested for correspondence with theories now known collectively as self-organized criticality. The signature of agreement with other large, composite systems, including forest fires, stock markets, and earthquakes, is a common frequency spectrum or power-law correlations. In this case, the large- and small-scale ends of the spectrum share a common driving force and consequently no single cut-off exists for excluding or ignoring small environmental changes. From the Biosphere 2 carbon dioxide data, the fluctuations in internal atmospheres vary in both small and large steps. The time fluctuations were examined as they varied over 2 years and over three orders of magnitude in fluctuation size, then binned into characteristic size classes. The statistics show a power-law scaling exponent of −1·3, compared with −1 for classical flicker noise (1/f spectrum) and −2·5 for analogous sand-pile experiments developed to test the predictions of a self-organized, critical system. For comparison with open ecosystems, the Byrd climatic record of global CO2 over the last 50 ka has a similar power-law relation but with −2·3 as the scaling exponent. For generalizing self-organized criticality, the design suggests that otherwise unrelated biological and physical models may share a common correlation between the frequency of small and large length-scales or equivalently exhibit temporal similarity laws. 
The results potentially have wide implications for environmental control in otherwise chaotic or difficult to predict ecological behaviour.
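The binning-and-fitting procedure described in the abstract above can be sketched as follows: draw synthetic fluctuation sizes from a power law with the reported Biosphere 2 exponent of −1.3 (the data here are simulated, not the actual record), bin them into factor-of-two size classes, and recover the scaling exponent from a least-squares fit of log density against log size:

```python
import math
import random

random.seed(4)
a = 1.3                      # target scaling exponent (value reported for Biosphere 2)
smin = 1.0

# inverse-transform sampling from p(s) ~ s^(-a) for s >= smin (synthetic data)
sizes = [smin * (1.0 - random.random()) ** (-1.0 / (a - 1.0)) for _ in range(100_000)]

# bin fluctuation sizes into octaves (factor-of-two characteristic size classes)
nbins = 12
edges = [smin * 2 ** k for k in range(nbins + 1)]
counts = [0] * nbins
for s in sizes:
    k = int(math.log2(s / smin))
    if 0 <= k < nbins:
        counts[k] += 1

# least-squares slope of log(density) vs log(size) recovers the exponent
xs, ys = [], []
for k in range(nbins):
    if counts[k]:
        width = edges[k + 1] - edges[k]
        centre = (edges[k] * edges[k + 1]) ** 0.5      # geometric bin centre
        xs.append(math.log(centre))
        ys.append(math.log(counts[k] / width))         # density, not raw count

mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))       # close to -1.3
```

Normalising counts by bin width before fitting is essential with logarithmic bins; fitting raw counts per octave would bias the slope by exactly one.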
... Essentially, small avalanches occur often, moderate avalanches less frequently, and large avalanches rarely. Following this finding, several studies have adopted the sand pile template to model non-steady phenomena including forest fires [Albano, 1995; Drossel and Schwabl, 1992], earthquakes [Bak and Tang, 1989; Ito and Matsuzaki, 1990], mass extinction [Paczuski et al., 1996], air pollution [Shi and Liu, 2009], climate change [Liu et al., 2013] and intraflagellar transport dynamics [Ludington et al., 2013]. The simplicity and dynamics of the sand pile system motivated its adoption and modification as a model of CH 4 ebullition (Figure 5). ...
Thesis
Full-text available
Methane (CH4) is a greenhouse gas with a global warming potential much greater than that of carbon dioxide, and one of the major sources of naturally occurring CH4 is peatlands. Large amounts of CH4 can be transported from peat to the atmosphere through bubbles (ebullition). The exact controls of ebullition remain uncertain, but evidence suggests that physical processes related to gas transport and storage within the peat structure are important. Although these processes are key to replicating ebullition, no computer models of ebullition contain a detailed spatial representation of the peat structure. This thesis investigated the role of peat structure on CH4 ebullition using computer and physical models. A computer model of ebullition was developed and tested against physical models of ebullition. The computer model contained a spatial representation of peat and rules to transport gas through the peat structure. Physical models consisted of i) air injected into a simple porous medium, and ii) a separate physical model with peat. Following a pattern-oriented modelling approach, gas storage, bubble size, and bubble release (ebullition) were the three patterns used to test the computer model. Overall, the computer model was able to replicate the patterns generated by the physical models, and this demonstrated that the computer model was useful for modelling CH4 ebullition from peat. The results generated with the computer model confirmed our hypothesis: peat structure and subsequent gas storage were found to be important controls on ebullition timing, location, and quantity. From these results, we were able to make recommendations on sampling CH4 ebullition from peat in the field.
... It is a long-established assumption in geophysics that earthquakes are inherently unpredictable (Bak and Tang, 1989; Geller et al., 1997). As a result of such assumptions, using SWS to monitor stress accumulation through changes in micro-crack geometry in order to "stress-forecast" earthquakes, although apparently successful, is controversial and difficult to get accepted. ...
Article
Full-text available
Since the late 1960s - early 1970s, seismologists started studying the elastic properties of the Earth's crust, looking for signals from the Earth's interior indicating that a large earthquake is coming. To be useful for prediction a signal needs to: 1) occur before most large earthquakes and 2) occur only before large earthquakes. Up to now, no one has ever found such a signal, but since the beginning of the search, seismologists have developed theories that included variations of the elastic properties of the Earth's crust prior to the occurrence of a large earthquake. The most popular is the theory of dilatancy: when a rock is subject to stress, the rock grains are shifted, generating micro-cracks, and thus the rock itself increases its volume. Inside the fractured rock, fluid saturation and pore pressure play an important role in earthquake nucleation, by modulating the effective stress. Thus, measuring the variations of wave speed and of anisotropic parameters in time can be highly informative on how the stress leading to a major fault failure builds up. In the 1980s and 1990s this kind of research on earthquake precursors slowed down and priority was given to seismic hazard and ground motion studies, which are very important since these are the basis for the building codes in many countries. Today, we have dense and sophisticated seismic networks to measure wave-field characteristics: we archive continuous waveform data recorded with three-component broad-band seismometers, and we almost routinely obtain high-resolution earthquake locations. Therefore, we are ready to start to systematically look at seismic-wave propagation properties to possibly reveal short-term variations in the elastic properties of the Earth's crust.
One seismological quantity which, since the beginning, has been recognized to be diagnostic of the level of fracturing and/or of the pore pressure in the rock, hence of its state of stress, is the ratio between the compressional (P-wave) and the shear (S-wave) seismic velocities: Vp/Vs. Variations of this ratio have recently been observed and measured during the preparatory phase of a major earthquake. In active fault areas and volcanoes, tectonic stress variation influences fracture field orientation and fluid migration processes, whose evolution with time can be monitored through the measurement of the anisotropic parameters. Through the study of S-wave anisotropy it is therefore potentially possible to measure the presence, migration and state of the fluid in the rock traveled by seismic waves, thus providing a valuable route to understanding seismogenic phenomena and their precursors. On the other hand, only in very recent times, with the availability of continuous seismic records, have many authors shown how it is possible to estimate relative variations in wave speed through the analysis of the cross-correlation of the ambient seismic noise. In this paper we first analyze in detail these two seismological methods, shear wave splitting and seismic noise cross-correlation, presenting a short historical review, their theoretical bases, problems, lessons learned, limitations and perspectives. We then compare the main results in terms of temporal trends of the observables retrieved by applying both methods to the Pollino area (southern Apennines, Italy) case study.
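A standard way to obtain the Vp/Vs ratio discussed above, without knowing the hypocentral distances or the two velocities separately, is the Wadati diagram: the S−P delay at each station grows linearly with P travel time, with slope Vp/Vs − 1. A minimal sketch (the station times below are synthetic, generated under an assumed ratio of 1.73; real applications fit picked arrival times):

```python
# Wadati-diagram estimate of Vp/Vs: the S-P delay at each station grows
# linearly with P travel time, with slope (Vp/Vs - 1).
def vp_vs_ratio(tp, ts):
    """Least-squares slope of (ts - tp) vs tp, plus 1."""
    d = [s - p for p, s in zip(tp, ts)]
    mp, md = sum(tp) / len(tp), sum(d) / len(d)
    slope = sum((p - mp) * (x - md) for p, x in zip(tp, d)) \
        / sum((p - mp) ** 2 for p in tp)
    return 1.0 + slope

# hypothetical P and S arrival times (seconds after origin) at five stations,
# generated with an assumed Vp/Vs of 1.73
tp = [2.0, 3.5, 5.0, 6.5, 8.0]
ts = [p * 1.73 for p in tp]
print(round(vp_vs_ratio(tp, ts), 2))   # 1.73
```

Because the method uses only relative arrival times, temporal variations of the ratio can be tracked through an earthquake sequence without relocating events, which is what makes Vp/Vs monitoring attractive as a stress proxy.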
... However, the importance of characteristic earthquakes is generally accepted as will be discussed in Section 4.24.2.5. An alternative approach has been suggested that the observed scale-invariant distribution of earthquake magnitudes is due to the self-organized dynamics of seismogenic zones (Bak and Tang, 1989; Turcotte, 1999b). In this approach, the internal nonlinear dynamics of the upper part of the crust is responsible for the emergence of the scale-invariant behavior. ...
Chapter
Full-text available
Earthquakes are clearly a complex phenomenon. Yet within this complexity, there are several universally valid scaling laws. These include the Gutenberg–Richter frequency–magnitude scaling, the Omori law for the decay of aftershock activity, and Båth's law relating the magnitude of the largest aftershock to the magnitude of the mainshock. Other possible universal scaling laws include power-law accelerated moment release prior to large earthquakes, a Weibull distribution of recurrence times between characteristic earthquakes, and a nonhomogeneous Poisson distribution of interoccurrence times between aftershocks. The validity of these scaling laws is evidence that earthquakes (seismicity) exhibit self-organized complexity. The relationships of such concepts as fractality, deterministic chaos, and self-organized criticality to earthquakes can be used to illustrate and quantify the complex behavior of earthquakes. A variety of models that exhibit self-organized complexity have been used to describe the observed patterns of seismicity. Simple cellular automaton models such as the slider-block model reproduce some important statistical aspects of seismicity and capture their basic physics. Damage mechanics models can also capture important features of seismicity. Simulation-based approaches to distributed seismicity are a promising path toward the formulation of physically plausible numerical models, which can reproduce realistic spatially and temporarily distributed synthetic earthquake catalogs. The objective of this chapter is to summarize the most important aspects of the occurrence of earthquakes and discuss them from the complexity theory point of view.
... Such a representation is of practical importance and predictive value since it directly provides the expected frequency of occurrence of impulses at the critical level (I_i/I_cr) or multiples of it, which are of interest for particle entrainment. Power law models are attractive since they have the ability to describe a wide range of scale invariant phenomena in Earth sciences [Schroeder, 1991; Turcotte, 1997], from the occurrence of rare natural hazards such as earthquakes [Bak and Tang, 1989; Rundle et al., 1996] to the self-similarity of channel networks and the corresponding energy and mass distribution [Rodriguez-Iturbe et al., 1992; Rodriguez-Iturbe and Rinaldo, 1997]. Their superiority compared to more sophisticated models has also been demonstrated for the prediction of bedload transport rates [Barry et al., 2004]. ...
... The isolines of higher energy flux density of seismotectonic deformations extend along the rift features from southwest to northeast in the Baikal region, and this allows one to treat the BRZ lithosphere as an extended zone of increased, inhomogeneous energy release for endogenous geotectonic and tectonophysical processes. The observed distribution of the energy flux density of seismotectonic deformations is consistent with the self-organized criticality model [Bak et al., 1988; Bak and Tang, 1989]; according to this model, almost any spatially extended dynamic system arrives at a stationary state as it develops. The main properties of the dynamic system for seismicity are formed by the geodynamic regime of the BRZ lithosphere, by the well-developed fault system, and by the topology of stress and strain in the earth; this is in part reflected also in the maxima of the energy flux density of seismotectonic deformations within the RAS areas, which drive and control the evolutionary processes in the lithosphere of the Baikal region [Klyuchevskii, 2010, 2011a, 2011b]. ...
Article
Full-text available
Abstract—Estimation and comparison of the energy of seismotectonic deformations in the lithosphere of the Baikal Rift Zone (BRZ) based on observations of large (M ≥ 6) earthquakes for the period of instrumental recording (1950–2002), for a historical period lasting 210 years (1740–1949), and inferred from palaeoseismological materials for the past 2000 years, all indicate that the hypothesis of a stationary seismic process is appropriate for the region. The locations of maxima of the density of seismotectonic strain energy released during the time intervals under investigation show that most of the failures in the lithosphere occurred approximately in the same areas, which may be interpreted as stress concentrators. The isolines of increased density for the energy of seismotectonic deformations align themselves along the rift features from southwest to northeast in the Baikal region, and this allows one to treat the BRZ lithosphere as an extended zone of enhanced, inhomogeneous energy release of endogenous geotectonic processes. We assessed the power of the seismotectonic processes that reflect the release of endogenous energy through earthquakes. Identification of areas with deficits in the energy of seismotectonic deformations ("energy gaps") is an important step toward long-term solution of seismic-safety problems for the Baikal region. DOI: 10.1134/S0742046313030032
... It is a long-established assumption in geophysics that earthquakes are inherently unpredictable (Bak & Tang 1989; Geller et al. 1997). As a result of such assumptions, using shear-wave splitting to monitor stress-accumulation by changes in micro-crack geometry in order to "stress-forecast" earthquakes, although apparently successful (Crampin et al. 1999, 2008), is controversial and difficult to get accepted. ...
Article
Full-text available
Since the late '60s-early '70s, seismologists have developed theories that include variations of the elastic properties of the Earth's crust and of the state of stress and its evolution prior to the occurrence of a large earthquake. Among these is the theory of dilatancy (Scholz et al., 1973): when a rock is subjected to stress, the rock grains are shifted, generating micro-cracks, and the rock itself thus increases its volume. Inside the fractured rock, fluid saturation and pore pressure play an important role in earthquake nucleation by modulating the effective stress. Thus, measuring variations of wave speed and of anisotropic parameters over time can be highly informative about how the stress leading to a major fault failure builds up. In the '80s and '90s this kind of research on earthquake precursors slowed down and priority was given to seismic hazard and ground motion studies, which are very important since they form the basis for the building codes in many countries. Today we have dense and sophisticated seismic networks to measure wave-field characteristics: we archive continuous waveform data recorded at three-component broad-band seismometers, and we almost routinely obtain high-resolution earthquake locations. Therefore we are ready to start systematically examining seismic-wave propagation properties to possibly reveal short-term variations in the elastic properties of the Earth's crust. One seismological quantity which, since the '70s, has been recognized as diagnostic of the level of fracturing and/or of the pore pressure in the rock, and hence of its state of stress, is the ratio between the compressional (P-wave) and shear (S-wave) seismic velocities, Vp/Vs (Nur, 1972; Kisslinger and Engdahl, 1973). Variations of this ratio have recently been observed and measured during the preparatory phase of a major earthquake (Lucente et al. 2010).
In active fault areas and volcanoes, tectonic stress variation influences fracture field orientation and fluid migration processes, whose evolution with time can be monitored through the measurement of anisotropic parameters (Miller and Savage, 2001; Piccinini et al., 2006). Through the study of S-wave anisotropy it is therefore potentially possible to measure the presence, migration and state of the fluid in the rock travelled by seismic waves, thus providing a valuable route to understanding seismogenic phenomena and their precursors (Crampin & Gao, 2010). In terms of determining the elastic properties of the Earth's crust, recent studies (Brenguier et al., 2008; Chen et al., 2010; Zaccarelli et al., 2011) have shown how it is possible to estimate relative variations in wave speed through the analysis of the cross-correlation of ambient seismic noise. In this paper we analyze in detail two seismological methods, shear-wave splitting and seismic noise cross-correlation: a short historical review, their theoretical bases, and their problems, lessons, limitations and perspectives. Moreover, we discuss the results of these methods as already applied to the data recorded in the L'Aquila region before and after the destructive earthquake of April 6th 2009, which in themselves represent an interesting case study.
... These results have strengthened the idea that the earth's crust is a SOC system. This fact has produced a pessimistic atmosphere around the prospects of earthquake prediction (Geller et al. 1997, Bak and Tang 1989), because a SOC system is intrinsically unpredictable in the sense that it is not possible to calculate the size and occurrence time of the next event (Bak 1996). In the present paper, we show several interesting properties of a spring-block SOC model which are concomitant with properties of real seismicity. ...
Article
Full-text available
The spring-block model proposed by Olami, Feder and Christensen (OFC) has several properties that are similar to those observed in real seismicity. In this paper we propose a modification of the original model in order to take into account that in a real fault there are several regions with different properties (non-homogeneity). We define regions in the network that is reminiscent of the real seismic fault, with different sizes and elastic parameter values. We obtain the Gutenberg-Richter law for the synthetic earthquake distributions of magnitude and the stair-shaped plots for the cumulative seismicity. Again, as in the OFC-homogeneous case, we obtain the stability for the cumulative seismicity stair-shaped graphs in the long-term situation; this means that the straight line slopes that are superior bounds of the staircases have a behavior akin to the homogeneous case. We show that with this non-homogeneous OFC model it is possible to include the asperity concept to describe high-stress zones in the fault.
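The OFC update rule that this abstract builds on can be stated in a few lines. The following is a minimal sketch, not the authors' code: the grid size, the conservation parameter `alpha` (non-conservative for `alpha < 0.25` on a square lattice), the failure threshold and the uniform driving scheme are all illustrative choices, and the non-homogeneous regions the paper introduces are not modelled here.

```python
import numpy as np

def ofc_avalanche(stress, alpha, fc=1.0):
    """Relax every site at or above the failure threshold fc: a failing
    site drops to zero and passes a fraction alpha of its stress to each
    of its four neighbours. Returns the avalanche size (site failures)."""
    size = 0
    while True:
        failing = np.argwhere(stress >= fc)
        if len(failing) == 0:
            return size
        for i, j in failing:
            s = stress[i, j]
            stress[i, j] = 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                    stress[ni, nj] += alpha * s

def run_ofc(n=32, alpha=0.2, steps=2000, seed=0):
    """Drive an n-by-n lattice uniformly to threshold, relax, repeat."""
    rng = np.random.default_rng(seed)
    stress = rng.uniform(0.0, 1.0, (n, n))
    sizes = []
    for _ in range(steps):
        # raise all sites until the most stressed one reaches threshold
        # (tiny epsilon guards against floating-point shortfall)
        stress += 1.0 - stress.max() + 1e-12
        sizes.append(ofc_avalanche(stress, alpha))
    return sizes
```

Collecting `sizes` over many steps and histogramming them is how the Gutenberg-Richter-like power law of the model is usually exhibited.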
... The data come from the CNSS Earthquake Catalog (http://quake.geo.berkeley.edu/cnss/maps/cnss-map.html): (a) the western hemisphere, (b) the US western coast, (c) California and Nevada. An evolutional and self-similar system is characteristic of many critical phenomena [3,9,16,74]. The worldwide fault network has a fractal (or multifractal) structure [22,27,79]. ...
... Region I is a steep slope for the smallest cracks, Region II is a power law region with slope τ = 1 for intermediate cracks, and Region III is an exponential tail that can be characterized by a finite correlation length. If the local hardening rule is absent, then models show only Regions II and III (e.g., Bak and Tang, 1989). The transition between Regions I and II has been seen in ceramic materials (Henderson et al., 1994). Increasing p implies greater local hardening. ...
Article
This chapter discusses the hydromechanical behavior of fractured rocks. The hydromechanical behavior of rock fractures can be studied on the scale of a single fracture and also on the scale of a fractured rock mass that contains many fractures. The behavior of single fractures must be thoroughly understood before the behavior of fractured rock masses can be understood. If a fracture is located in a rock mass with a given ambient state of stress, the traction acting across the fracture plane can be resolved into a normal component and a shear component. The normal traction gives rise to a normal closure of the fracture. The shear component of the traction causes the two rock faces to undergo a relative deformation parallel to the nominal fracture plane, referred to as “shear deformation.” A tangential traction also typically causes the mean aperture to increase, in which case the fracture dilates. Dilation arises because the asperities of one fracture surface must by necessity ride up to move past those of the other surfaces; hence, shear deformation of a fracture is inherently a coupled process in which both normal and shear displacement occur.
... Recent ideas of self-organized criticality (e.g., Bak and Tang, 1989;Cowie et al., 1993) help to explain this result. If the Earth behaves in the way these authors suggest, all parts of the brittle crust are at the point of failure and, as a result of long-range elastic correlations, an earthquake can be followed by a nearby or a distant event. ...
Article
Full-text available
To understand whether the 1992 M=7.4 Landers earthquake changed the proximity to failure on the San Andreas fault system, we examine the general problem of how one earthquake might trigger another. The tendency of rocks to fail in a brittle manner is thought to be a function of both shear and confining stresses, commonly formulated as the Coulomb failure criterion. Here we explore how changes in Coulomb conditions associated with one or more earthquakes may trigger subsequent events. We first consider a Coulomb criterion appropriate for the production of aftershocks, where faults most likely to slip are those optimally orientated for failure as a result of the prevailing regional stress field and the stress change caused by the main shock. We find that the distribution of aftershocks for the Landers earthquake, as well as for several other moderate events in its vicinity, can be explained by the Coulomb criterion: aftershocks are abundant where the Coulomb stress on optimally orientated faults rose by more than one-half bar, and aftershocks are sparse where the Coulomb stress dropped by a similar amount. Further, we find that several moderate shocks raised the stress at the future Landers epicenter and along much of the Landers rupture zone by about a bar, advancing the Landers shock by 1-3 centuries. The Landers rupture, in turn, raised the stress at the site of the future M=6.5 Big Bear aftershock by 3 bars. The Coulomb stress change on a specified fault is independent of regional stress but depends on the fault geometry, sense of slip, and the coefficient of friction. We use this method to resolve stress changes on the San Andreas and San Jacinto faults imposed by the Landers sequence. Together the Landers and Big Bear earthquakes raised the stress along the San Bernardino segment of the southern San Andreas fault by 2-6 bars, hastening the next great earthquake there by about a decade.
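Once the stress changes are resolved onto a receiver fault, the Coulomb criterion in this abstract reduces to a one-line calculation. A minimal sketch; the sign convention (positive normal-stress change = unclamping) and the effective friction value are assumptions here, and the paper itself works with full stress tensors resolved on specified fault geometries:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress (dCFS) on a receiver fault.

    d_shear  : shear-stress change resolved in the fault's slip
               direction (bars); positive pushes the fault toward failure.
    d_normal : normal-stress change (bars); positive = unclamping in
               this sign convention.
    mu_eff   : effective friction coefficient (friction reduced by
               pore-pressure effects) -- an illustrative value.
    """
    return d_shear + mu_eff * d_normal
```

A positive dCFS of even a fraction of a bar is the level at which the abstract reports enhanced aftershock activity; for example, a 2-bar shear increase partly offset by 1 bar of clamping still yields `coulomb_stress_change(2.0, -1.0) = 1.6` bars toward failure.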
... A second group of simplifying assumptions are based on the idea that a complex system may be modelled by assuming simple rules about how, for example, stress is dissipated when a small section of a fault plane fails (e.g. Bak, Tang & Wiesenfeld 1987; Bak & Tang 1989; Sornette & Sornette 1989; Brown, Scholz & Rundle 1991). They employ cellular automata (Wolfram 1985) with as many as 8000 independent blocks (Chen, Bak & Obukhov 1991) and have reproduced realistic sequences of seismic events which obey the Gutenberg-Richter scaling relation (Gutenberg & Richter 1956). ...
Article
Full-text available
Models for earthquake generation employing high-dimensional cellular automata reproduce realistic event sequences which conform to the Gutenberg-Richter relation. These models allow individual degrees of freedom to many small areal increments of a fault face. Simple, low-dimensional spring-block models with as few as two blocks have been shown, however, to explain neatly the coupling between adjacent segments of transform faults which has been observed in the stratigraphic record. The comparative usefulness of these dynamically distinct models may be assessed by elucidating the number of degrees of freedom which are required to reproduce the complexity of a real earthquake catalogue. In this study the dimensionality of earthquake generating mechanisms is assessed by non-linear predictability analysis (from the literature of non-linear dynamical systems theory) on phase portraits constructed from time and magnitude data from a standard earthquake catalogue, a high-dimensional cellular automaton and a low-dimensional double-block model. The results show that complete earthquake populations are best described by complex, high-dimensional dynamics. Considerations of the structure of the frequency-magnitude distributions of real earthquake populations and the evidence for coupling between active fault sections across seismic gaps indicate, however, that such dynamics may not explain many important features of natural seismicity. A hierarchical model is therefore proposed. In this model there is an explicit change in the generating dynamics with magnitude; a high-dimensional cellular automaton being responsible for the production of low-energy events and a low-dimensional, double-block mechanism representing the dynamics of events in which an entire fault face slips. The magnitude dependence of the dimensionality and b value, which is predicted from this model, is observed in a real earthquake catalogue. 
Coupling across a seismic gap is also a fundamental consequence of the model. These dynamics may be appropriate for modelling seismicity on a section of a transform fault which includes more than one active segment.
... (1) provides a magnitude-frequency relationship, which can be further considered as playing the same role as the Gutenberg-Richter law in earthquakes (Turcotte, 1997), which has been recognized as SOC (Bak and Tang, 1989; Bhattacharya and Manna, 2007). ...
Article
Debris flows (especially those of high density or high viscosity) usually appear in the form of surge sequences. Each debris flow event consists of tens or hundreds of surges. A debris flow is thus equivalent to a surge series, which is expected to exhibit its intrinsic properties. Statistics of observation data indicate that different events have similar distributions of parameters such as velocity, flow depth, discharge, runoff, and time interval, which can be considered signs of an underlying dynamics. But there are currently difficulties in extracting the dynamics from the surge series. This paper is concerned with the temporal variations of surge series based on observation data from the last forty years in Jiangjia Gully (JJG). The temporal process of a surge series is characterized by the accumulative curve of the interval time. A surge series is found to be dominated by the peak discharge, in that the peak discharge determines the exponent of the distribution. Moreover, the average discharge evolves with surge progress and finally decays in a power-law form. It follows that a surge series behaves as a whole, which requires a unified dynamical framework to encompass all the appearances.
... It is a long-established assumption in geophysics that earthquakes are inherently unpredictable (Bak & Tang, 1989; Geller et al. 1997). As a result of such assumptions, using shear-wave splitting (seismic birefringence) to monitor stressaccumulation by changes in microcrack geometry in order to 'stress-forecast' earthquakes, although apparently successful (Crampin et al., 1999, 2008), is controversial and difficult to get accepted. ...
Article
Full-text available
In 1997, Geller et al. wrote Earthquakes Cannot Be Predicted because scale invariance is ubiquitous in self-organized critical systems, and the Earth is in a state of self-organized criticality where small earthquakes have some probability of cascading into a large event. Physically however, large earthquakes can only occur if there is sufficient stress-energy available for release by the specific earthquake magnitude. This stress dependence can be exploited for stress-forecasting by using shear-wave splitting to monitor stress-accumulation in the rock mass surrounding impending earthquakes. The technique is arguably successful but, because of the assumed unpredictability, requires explicit justification before it can be generally accepted. Avalanches are also phenomena with self-organised criticality. Recent experimental observations of avalanches in two-dimensional piles of spherical beads show that natural physical phenomena with self-organised criticality, such as avalanches, and earthquakes, can be predicted. The key to predicting both earthquakes and avalanches is monitoring the matrix material, not monitoring impending source zones.
... 4 The two classic references are Bak, Tang, & Wiesenfeld (1987, 1988a). Introductions may be found in Bak (1997); Bak, Tang, & Wiesenfeld (1988b); Buchanan (2000); and Paczuski & Bak (1999). More advanced presentations of the subject matter are in Dickman et al. (2000); Creutz (1997); and Blanchard et al. (2000), to mention just a few. ... a local motion whereby the sand rearranges itself slightly. ...
Article
The last decade and a half has seen an ardent development of self-organised criticality (SOC), a new approach to complex systems, which has become important in many domains of natural as well as social science, such as geology, biology, astronomy, and economics, to mention just a few. This has led many to adopt a generalist stance towards SOC, which is now repeatedly claimed to be a universal theory of complex behaviour. The aim of this paper is twofold. First, I provide a brief and non-technical introduction to SOC. Second, I critically discuss the various bold claims that have been made in connection with it. Throughout, I will adopt a rather sober attitude and argue that some people have been too readily carried away by fancy contentions. My overall conclusion will be that none of these bold claims can be maintained. Nevertheless, stripped of exaggerated expectations and daring assertions, many SOC models are interesting vehicles for promising scientific research.
... An interesting application of this problem is the sand piles problem, which has been investigated in many works in physics and combinatorics [1,3,5,8,12]. The core of this problem is to study a model of the sand piles, corresponding to the partitions of a certain integer, and the possible moves to transform one sand pile to another. ...
Article
In this paper, we study the dynamics of sand grains falling in sand piles. Usually sand piles are characterized by a decreasing integer partition and grain moves are described in terms of transitions between such partitions. We study here four main transition rules. The most classical one, introduced by Brylawski (Discrete Math. 6 (1973) 201), induces a lattice structure LB(n) (called dominance ordering) between decreasing partitions of a given integer n. We prove that a more restrictive transition rule, called the SPM rule, induces a natural partition of LB(n) into suborders, each one associated to a fixed point of the SPM rule. In the second part, we extend the SPM rule in a natural way and obtain a model called the Linear Chip Firing Game (Theoret. Comput. Sci. 115 (1993) 321). We prove that this new model has interesting properties: the induced order is a lattice, a natural greedoid can be associated to the model, and it also defines a strongly convergent game. In the last section, we generalize the SPM rule in another way and obtain other lattice structures parametrized by some θ, denoted by L(n,θ), which form a decreasing sequence of lattices when θ varies in [−n+2,n]. For each θ, we characterize the fixed point of L(n,θ) and give the length of its maximal chain.
... Region I is a steep slope for the smallest cracks, region II is a power law region with slope τ = 1 for intermediate cracks, and region III is an exponential tail that can be characterized by a finite correlation length. If the local hardening rule is absent, then models show only regions II and III (e.g., Bak and Tang, 1989). The transition between regions I and II has been seen in ceramic materials (Henderson et al., 1994). ...
Article
Full-text available
Physical processes of earthquake generation may be divided into three different stages: tectonic loading, quasi-static nucleation, and subsequent dynamic rupture propagation. The basic equations governing the earthquake generation process are: the equation of motion in elastodynamics, which relates slip motion on a fault surface with deformation of the surrounding elastic media; the fault constitutive law, which prescribes the relation between shear stress and fault slip (and/or slip velocity) on the fault surface during earthquake rupture; and the loading function, which gives the rate of increase of external shear stress induced by relative plate motion. Recent developments in the physics of earthquake generation enable us to simulate the entire process of earthquake generation by solving these nonlinear coupled equations. For long-term prediction of earthquake occurrence, the detection of crustal movements associated with tectonic loading is important. For short-term prediction, the detection of precursory phenomena associated with rupture nucleation is important. In either case it is necessary to establish an interactive forecast system based on theoretical simulation and continuous monitoring of earthquake generation processes.
Article
Full-text available
In order to reach Europe's 2020 and 2050 targets in terms of greenhouse gas emissions, geothermal resources will have to contribute substantially to meeting carbon-free energy needs. However, public opinion may prevent future large-scale application of deep geothermal power plants, because induced seismicity is often perceived as an unsolicited and uncontrollable side effect of geothermal development. In the last decade, significant advances were made in the development of models to forecast induced seismicity, which are either based on catalogues of induced seismicity, on the underlying physical processes, or on a hybrid philosophy. In this paper, we provide a comprehensive overview of the existing approaches applied to geothermal contexts. This overview will outline the advantages and drawbacks of the different approaches, identify the gaps in our understanding, and describe the needs for geothermal observations. Most of the forecasting approaches focus on the stimulation phase of enhanced geothermal systems which are most prone to generate seismic events. Besides the statistical models suited for real-time applications during reservoir stimulation, the physics-based models have the advantage of considering sub-surface characteristics and estimating the impact of fluid circulation on the reservoir. Hence, to mitigate induced seismicity during major hydraulic stimulations, application of hybrid methods in a decision support system seems the best available solution. So far, however, little attention has been paid to geochemical effects on the failure process and to production periods. Quantitative modelling of induced seismicity still is a challenging and complex matter. Appropriate resources remain to be invested for the scientific community to continue its research and development efforts to successfully forecast induced seismicity in geothermal fields. This is a prerequisite for making this renewable energy resource sustainable and accessible worldwide.
Conference Paper
Full-text available
Constraining the recurrence of mega-thrust earthquakes is genuinely important for hazard assessment and mitigation. The prevailing approach to model such events relies on subduction zone segmentation and quasi-periodic recurrence due to constant tectonic loading. Here we analyze earthquakes recorded along a 1,000-km-long section of the subducting Pacific Plate beneath Japan since 1998. We find that the relative frequency of small to large events varies spatially, closely mirroring the large-scale tectonic regimes, and suggesting a laterally unsegmented mega-thrust interface. Starting some years before it broke, the Tohoku source region is imaged as a region of high stress concentration. Following the 2011 M9 earthquake, the size distribution changes significantly and most dramatically in the areas of highest slip. However, we discover that it returns within just a few years to its longer-term characteristics as observed prior to the mega-thrust event. This indicates a rapid recovery of stress and implies that such large earthquakes may not have a characteristic location, size or recurrence interval, and might therefore occur more randomly distributed in time.
Chapter
Recent experimental results have evidenced that neuronal systems in vitro and in vivo exhibit a novel form of spontaneous activity, neuronal avalanches. These are bursts of firing neurons without a characteristic scale, whose distributions follow a robust scaling behavior. These observations have stimulated a number of theoretical studies questioning if and to what extent the brain can be considered a system acting near a critical point. We review results obtained by a novel modeling approach inspired in statistical mechanics and taking into account the main physiological features of neurons, which confirms that the system exhibits self-organized critical behavior, in agreement with experimental observations on spontaneous activity. Moreover, this approach provides interesting insights in the complex temporal organization of neuronal response. In particular, correlations play a crucial role in the system's activity, which self-regulates, oscillating between phases of high and low activity, in order to realize a balance between excitation and inhibition. This novel modeling approach not only reproduces spontaneous activity but also “evoked behavior”, i.e., the network is able to learn very complex rules, which usually require special adaptations in traditional models. The learning dynamics exhibits universal scaling and indicates that systems with slower synaptic adaptations give better performances, as observed in fMRI measurements on human brains.
Chapter
Full-text available
The seismicity recorded at El Chichón Volcano shows significant differences before and after the 1982 eruptive episodes. The analysis of the seismicity was performed using two well-known laws in seismology: the Gutenberg-Richter law, which describes the frequency-magnitude distribution of seismicity and the Omori law, which applies to the temporal decay of the number of earthquakes . Results of the analysis suggest that large quantities of fluids (hydrothermal fluids, water, and/or magma) were involved in the physical processes that generated the precursory seismicity one month before the onset of the eruption; only shallow (2–8 km depth) hybrid earthquakes were located and long-period and volcanic tremors were recorded. The volcanic signature of the seismicity is better described by two slopes instead of only one as is usually the case for tectonic events. Furthermore, the seismicity did not follow the Omori law. In contrast, for one month after the last eruptive phases of the 1982 eruptive events, the seismicity showed a more ‘tectonic’ signature, as evidenced by the occurrence of five large earthquakes (magnitudes ~3.8) on 4 April 1982. These followed both the Gutenberg-Richter law (with a single slope) and the Omori law. The tectonic signature is confirmed further by the single-slope observed in the distribution of the seismicity recorded during the period 1985–1990. Such behavior suggests the absence of abundant fluid at depth after the last plinian event of 4 April 1982, with a slow return to a regular tectonic response of the volcanic system. Seismic analysis of the 1982 eruptive sequence illustrates clear volcano-tectonic feedback interactions.
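Gutenberg-Richter analyses of the kind described in this chapter typically start from a maximum-likelihood estimate of the b-value (the slope of the frequency-magnitude distribution). A minimal sketch using Aki's (1965) estimator; the assumption of continuous magnitudes above a single completeness threshold `m_c` is ours, and the two-slope fits reported for the precursory seismicity would correspond to repeating this over different magnitude ranges:

```python
import math

def gutenberg_richter_b(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for a catalogue of
    magnitudes, assuming continuous magnitudes at or above the
    completeness threshold m_c:  b = log10(e) / (mean(M) - m_c)."""
    m = [x for x in mags if x >= m_c]
    return math.log10(math.e) / (sum(m) / len(m) - m_c)
```

Restricting the fit to successive magnitude windows and comparing the resulting b estimates is one simple way to detect the volcanic two-slope signature versus the single tectonic slope.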
Article
Results from several seismic methods allow us to sketch the deep structure of Etna and its Ionian margin. Under Etna a volume of high velocity material is found in a structurally high position; the emplacement of this suggests spreading of the surrounding medium. Just offshore, down-to-the-east normal faults penetrate through the upper crust. The deeper crustal structure beneath appears upwarped from the basin towards Etna. Juxtaposed with the crust of Sicily, a thinner crust reaches from the Ionian Basin under Etna, and the ...
Article
A large 3-dimensional cellular automaton model with 200 × 200 × 10 meshes has been designed, and 94,304 "earthquakes" have been generated with it. The results show that the ranges of scale invariance for small and large "earthquakes" are different; when close in time and space, large "earthquakes" often respond to each other.
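Cellular-automaton earthquake models of this kind descend from the BTW sandpile. A minimal two-dimensional sketch of the toppling rule (the paper's model is three-dimensional and much larger; the grid size and threshold here are illustrative, not the paper's parameters):

```python
import numpy as np

def btw_avalanche(grid, i, j, zc=4):
    """Drop one grain at (i, j) and topple until every site is below the
    critical height zc. Each toppling site loses zc grains, giving one
    to each of its four neighbours; grains pushed off the edge are lost.
    Returns the avalanche size (number of topplings)."""
    grid[i, j] += 1
    size = 0
    while True:
        over = np.argwhere(grid >= zc)
        if len(over) == 0:
            return size
        for a, b in over:
            grid[a, b] -= zc
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < grid.shape[0] and 0 <= nb < grid.shape[1]:
                    grid[na, nb] += 1
```

Driving the lattice with random single-grain additions and recording the avalanche sizes yields the power-law size statistics that motivate comparing such automata to earthquake catalogues.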
Article
The fractal distribution is the best statistical model for the size-frequency distributions that result from some lithic reduction processes. Fractals are a large class of complex, self-similar sets that can be described using power-law relations. Fractal statistical distributions are characterized by an exponent, D, called the fractal dimension. I show how to determine whether the size-frequency distribution of a sample of debitage is fractal by plotting the power-law relation on a log-log graph. I also show how to estimate the fractal dimension for any particular distribution. Using debitage size data from experimental replications of lithic tools, I demonstrate a fundamental relationship between the fractal dimension and stage of reduction. I also present archaeological case studies that illustrate the simplicity and utility of the method.
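The log-log procedure described above amounts to regressing the logarithm of the cumulative count against the logarithm of size. A minimal sketch using a rank-size (cumulative) formulation rather than the binned plots of the paper; the regression over the full sample is an illustrative choice:

```python
import math

def fractal_dimension(sizes):
    """Estimate the fractal dimension D from a size-frequency sample by
    least-squares regression of log N(>s) against log s, where
    N(>s) ~ s^-D. The rank of the r-th largest piece is exactly the
    cumulative count at that size."""
    sizes = sorted(sizes, reverse=True)
    xs = [math.log(s) for s in sizes]
    ys = [math.log(r) for r, s in enumerate(sizes, start=1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

If the points fall on a straight line in the log-log plane, the distribution is consistent with a fractal model and -slope estimates D; strong curvature indicates the power-law relation does not hold for that reduction assemblage.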
Article
This paper employs insights from Complex Systems literature to develop a computational model of endogenous strategic network formation. Artificial Adaptive Agents (AAAs), implemented as finite state automata, play a modified two-player Iterated Prisoner's Dilemma game with an option to further develop the interaction space as part of their strategy. Several insights result from this relatively minor modification: first, I find that network formation is a necessary condition for cooperation to be sustainable, and that the frequency of interaction and the degree to which edge formation impacts agent mixing are both necessary conditions for cooperative networks. Second, within the FSA-modified IPD framework, a rich ecology of agents and network topologies is observed, with consequent payoff symmetry and network 'purity' seen to be further contributors to robust cooperative networks. Third, the dynamics of the strategic system under network formation show that initially simple dynamics with small interaction length between agents give way to complex, a-periodic dynamics when interaction lengths are increased by a single step. Subsequent analysis of the developing network topology shows scaling behaviour in both time and space, indicating the attainment of a self-organised critical state, and thus apparently driving the complex dynamics of the overall strategic system.
Article
Nonlinear Dynamics in Heart Rate. Introduction: The term chaos is used to describe erratic or apparently random time-dependent behavior in deterministic systems. It has been suggested that the variability observed in the normal heart rate may be due to chaos, but this question has not been settled. Methods and Results: Heart rate variability was assessed by recordings of consecutive RR intervals in ten healthy subjects using ambulatory ECG. All recordings were performed with the subjects at rest in the supine position. To test for the presence of nonlinearities and/or chaotic dynamics, ten surrogate time series were constructed from each experimental dataset. The surrogate data were tailored to have the same linear dynamics and the same amplitude distribution as the original data. Experimental and surrogate data were then compared using various nonlinear measures. Power spectral analysis of the RR intervals showed a 1/f pattern. The correlation dimension differed only slightly between the experimental and the surrogate data, indicating that linear correlations, and not a “strange” attractor, are the major determinants of the calculated correlation dimension. A test for nonlinear predictability showed coherent nonlinear dynamic structure in the experimental data, but the prediction error as a function of the prediction length increased at a slower rate than is characteristic of a low-dimensional chaotic system. Conclusion: There is no evidence for low-dimensional chaos in the time series of RR intervals from healthy human subjects. However, nonlinear determinism is present in the data, and various mechanisms that could generate such determinism are discussed.
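Surrogates "tailored to have the same linear dynamics and the same amplitude distribution as the original data" are conventionally built with the amplitude-adjusted Fourier transform (AAFT). A minimal sketch of that construction; whether the study used exactly this algorithm is an assumption on our part:

```python
import numpy as np

def aaft_surrogate(x, rng):
    """Amplitude-adjusted Fourier-transform (AAFT) surrogate: shares the
    original series' amplitude distribution exactly and its power
    spectrum approximately, while destroying nonlinear structure."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # 1. Gaussianise: a Gaussian series with the same rank ordering as x
    ranks = np.argsort(np.argsort(x))
    gauss = np.sort(rng.standard_normal(n))[ranks]
    # 2. randomise the Fourier phases of the Gaussianised series
    spec = np.fft.rfft(gauss)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0  # keep the mean
    surr = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)
    # 3. map back onto the original amplitudes, rank for rank
    return np.sort(x)[np.argsort(np.argsort(surr))]
```

Comparing a nonlinear statistic (correlation dimension, nonlinear prediction error) between the data and an ensemble of such surrogates is the hypothesis test the abstract describes: a significant difference argues for nonlinearity beyond linear correlations.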
Article
A model has been developed to simulate the statistical and mechanical nature of rupture on a heterogeneous strike-slip fault. The model is based on the progressive failure of circular asperities of varying sizes and strengths along a fault plane subjected to a constant far-field shear displacement rate. The basis of the model is a deformation and stress intensity factor solution for a single circular asperity under a unidirectional shear stress. The individual asperities are unified through the fault stiffness and the far-field stress and displacement. During fault deformation asperities can fail and reheal, resulting in changes in the local stresses in the asperities, stress drops, and changes in the stiffness of the fault. Depending on how the stress is redistributed following asperity failure and on the strengths of the neighboring asperities, an earthquake event can be the failure of one or more asperities. Following an earthquake event, seismic source parameters such as the stress drop, energy change, and moment magnitude are calculated. Results from the model show a very realistic pattern of earthquake rupture, with reasonable source parameters, the proper magnitude-frequency behavior, and the development of characteristic earthquakes. Also, the progression of b-values in the model gives some insight into the phenomenon of self-organized criticality.
Article
Full-text available
Rank-ordering analysis is applied to the intertimes between seismic events recorded in the Apennine belt between 40–42°N and 14–16°E from the 15th century onwards. It shows a power law capable of governing the intertimes between 1529 and 368 months and another power law, which approximates a random simulation, for the intertimes shorter than 368 months. Only the first power law allows the computation of the return period of major events. Earthquakes of the same energy that are aligned according to different power laws imply the presence of two different populations, indicating, in turn, that the physics of seismic phenomena in the region examined is not straightforward, that the stress is probably not unidirectional and that it acts on a non-isotropic medium. The most probable estimated intertime value for the next event is found to be equal to 6020 years.
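Rank-ordering analysis of the kind applied above can be sketched as follows. The Pareto sample standing in for the intertimes, its exponent, and the fitting range are purely illustrative assumptions, not values from the cited study:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "intertimes" drawn from a Pareto (power-law) distribution, tail index 1.5
intertimes = (rng.pareto(1.5, 500) + 1.0) * 10.0   # illustrative units of months

# rank-ordering: sort descending, so the r-th value is the r-th largest
ordered = np.sort(intertimes)[::-1]
ranks = np.arange(1, len(ordered) + 1)

# for a power-law tail, log(value) is linear in log(rank); the slope
# estimates roughly -1/alpha for a Pareto tail of index alpha
slope, intercept = np.polyfit(np.log(ranks), np.log(ordered), 1)
```

A break in the rank-order plot, as reported above for intertimes around 368 months, would show up here as two straight segments with different slopes.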
Article
Full-text available
Aftershock statistics provide a wealth of data that can be used to better understand earthquake physics. Aftershocks satisfy scale-invariant Gutenberg–Richter (GR) frequency–magnitude statistics. They also satisfy Omori’s law for power-law seismicity rate decay and Båth’s law for maximum-magnitude scaling. The branching aftershock sequence (BASS) model, which is the scale-invariant limit of the epidemic-type aftershock sequence (ETAS) model, uses these scaling laws to generate synthetic aftershock sequences. One objective of this paper is to show that the branching process in these models satisfies Tokunaga branching statistics. Tokunaga branching statistics were originally developed for drainage networks and have subsequently been shown to be valid in many other applications associated with complex phenomena. Specifically, these are characteristic of a universality class in statistical physics associated with diffusion-limited aggregation. We first present a deterministic version of the BASS model and show that it satisfies the Tokunaga side-branching statistics. We then show that a fully stochastic BASS simulation gives similar results. We also study foreshock statistics using our BASS simulations. We show that the frequency–magnitude statistics in BASS simulations scale as the exponential of the magnitude difference between the mainshock and the foreshock, inverse GR scaling. We also show that the rate of foreshock occurrence in BASS simulations decays inversely with the time difference between foreshock and mainshock, an inverse Omori scaling. Both inverse scaling laws have been previously introduced empirically to explain observed foreshock statistics. Observations have demonstrated both of these scaling relations to be valid, consistent with our simulations. ETAS simulations, in general, do not generate Båth’s law and do not generate inverse GR scaling. Keywords: Aftershocks, foreshocks, ETAS model, BASS model, Tokunaga networks.
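The three scaling laws named above (GR magnitudes, Omori-law rate decay, a Båth-law productivity cap) can be combined into a minimal synthetic-aftershock sketch. This is not the full BASS branching construction, only a single-generation illustration; all parameter values (b, p, c, the Båth offset) are conventional illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def synthetic_aftershocks(m_main, b=1.0, d_bath=1.2, m_min=2.0,
                          c=0.1, p=1.2, rng=rng):
    """Toy aftershock sequence: GR-distributed magnitudes, Omori-law
    occurrence times, and a total count set by the Bath-law maximum
    magnitude m_main - d_bath. Single generation, no branching."""
    # number of events: GR count between m_min and the Bath-law maximum
    n = int(10 ** (b * (m_main - d_bath - m_min)))
    # GR magnitudes via inverse-transform sampling
    mags = m_min - np.log10(rng.uniform(size=n)) / b
    # Omori-law times: rate ~ (t + c)^(-p) with p > 1, via inverse CDF
    u = rng.uniform(size=n)
    times = c * ((1.0 - u) ** (-1.0 / (p - 1.0)) - 1.0)
    order = np.argsort(times)
    return times[order], mags[order]

times, mags = synthetic_aftershocks(7.0)
# Aki's maximum-likelihood b-value estimate recovers the input b = 1.0
b_hat = np.log10(np.e) / (mags.mean() - 2.0)
```

A full BASS simulation would apply the same recipe recursively, letting every aftershock spawn its own sequence.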
Article
Observations of seismicity and ground control problems in the Sudbury mining camp have shown that late-stage (young) sub-vertical strike-slip faults are sensitive to small mining-induced stress changes. The strength-limited nature of stress measurements made in the region indicates that these structures are in a state of marginal stability. Numerical continuum models are developed to analyze the behavior of such structures. In the models, shear strain localizations (faults) evolve such that there is close interaction between the fault system, stresses, and boundary deformation. Fault slip activity in these systems is naturally sporadic and reproduces the commonly observed Gutenberg-Richter magnitude frequency relation. It is shown that a relatively minor disturbance to such a system can trigger significant seismicity remote from the source of the disturbance, a behavior which cannot be explained by conventional numerical stress analysis methodologies. The initially uniform orientation of the stress field in these systems evolves with increasing disorder, which explains much of the scatter commonly observed in data sets of stress measurements. Based on these results, implications for stress measurement programs and numerical stability analysis of faults in mines are discussed.
Article
Recent advances in the theory of fracture and fragmentation are reviewed. Empirical laws in seismology are interpreted from a fractal perspective, and earthquakes are viewed as a self-organized critical phenomenon (SOC). Earthquakes occur as an energy dissipation process in the Earth's crust, to which tectonic energy is continuously input. The crust self-organizes into the critical state, and the temporal and spatial fractal structure emerges naturally. Power-law relations known in seismology are the expression of the critical state of the crust. An SOC model for earthquakes, which explains the Gutenberg-Richter relation, Omori's formula for aftershocks and the fractal distribution of hypocenters, is presented. A new view of earthquake phenomena shares a common standpoint with other disciplines that study natural complex phenomena with a unified theory.
Article
Full-text available
The concept of self-organized criticality evolved from studies of three simple cellular-automata models: the sand-pile, slider-block, and forest-fire models. In each case, there is a steady input and the loss is associated with a fractal (power-law) distribution of avalanches. Each of the three models can be associated with an important natural hazard: the sand-pile model with landslides, the slider-block model with earthquakes, and the forest-fire model with forest fires. We show that each of the three natural hazards has frequency-size statistics that are well approximated by power-law distributions. The model behavior suggests that the recurrence interval for a severe event can be estimated by extrapolating the observed frequency-size distribution of small and medium events. For example, the recurrence interval for a magnitude seven earthquake can be obtained directly from the observed frequency of occurrence of magnitude four earthquakes. This concept leads to the definition of a seismic intensity factor. Both global and regional maps of this seismic intensity factor are given. In addition, the behavior of the models suggests that the risk of occurrence of large events can be substantially reduced if small events are encouraged. For example, if small forest fires are allowed to burn, the risk of a large forest fire is substantially reduced.
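The sand-pile model referred to above can be sketched as a minimal BTW cellular automaton. Lattice size, grain count, and the toppling threshold z_c = 4 are the conventional illustrative choices, not tuned to reproduce any published result:

```python
import numpy as np

rng = np.random.default_rng(3)

def btw_avalanche_sizes(L=24, n_grains=8000, z_c=4, rng=rng):
    """Minimal BTW sandpile: drop grains at random sites; a site with
    z >= z_c topples, sending one grain to each of its 4 neighbours
    (grains fall off at the open boundary). Returns the avalanche size
    (number of topplings) triggered by each dropped grain."""
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(n_grains):
        i, j = int(rng.integers(L)), int(rng.integers(L))
        z[i, j] += 1
        size = 0
        unstable = [(i, j)] if z[i, j] >= z_c else []
        while unstable:
            a, b = unstable.pop()
            if z[a, b] < z_c:       # may have been listed twice
                continue
            z[a, b] -= z_c
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:
                    z[na, nb] += 1
                    if z[na, nb] >= z_c:
                        unstable.append((na, nb))
        sizes.append(size)
    return np.array(sizes)

sizes = btw_avalanche_sizes()
```

After an initial loading transient, the nonzero avalanche sizes span several orders of magnitude, which is the power-law behavior the text describes.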
Article
Multifractal analysis of the daily river flow data from 19 river basins, with watershed areas ranging from 5 to 1.8 × 10⁶ km², from the continental USA was performed. This showed that the daily river flow series were multifractal over a range of scales spanning at least 2³ to 2¹⁶ days. Although no outer limit to the scaling was found (and for one series this was as long as 74 years duration), for most of the rivers there is a break in the scaling regime at a period of about one week, which is comparable to the atmosphere's synoptic maximum, the typical lifetime of planetary-scale atmospheric structures. For scales longer than 8 days, the universal multifractal parameters characterizing the infinite hierarchy of scaling exponents were estimated. The parameter values were found to be close to those of (small basin) French rivers studied by Tessier et al. (1996). The multifractal parameters showed no systematic basin-to-basin variability; our results are compatible with random variations. The three basic universal multifractal parameters are not only robust over wide ranges of time scales, but also over wide ranges in basin size, presumably reflecting the space-time multiscaling of both the rainfall and runoff processes.
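The moment-scaling estimate underlying such a multifractal analysis can be sketched in simplified form. The binary multiplicative cascade standing in for a river-flow series, its weights, and the scale grid are all illustrative assumptions; this is not the universal-multifractal fitting procedure of the cited study:

```python
import numpy as np

rng = np.random.default_rng(5)

def cascade(levels, rng):
    """Toy binary multiplicative cascade: a minimal multifractal series."""
    x = np.ones(1)
    for _ in range(levels):
        w = rng.choice([0.4, 1.6], size=2 * len(x))   # mean-1 weights
        x = np.repeat(x, 2) * w
    return x

def moment_scaling(x, qs, scales):
    """Empirical moment-scaling exponents K(q): for a multifractal, the
    q-th moment of the series averaged over a time scale s behaves as
    <x_s^q> ~ s^(-K(q)); K(q) is the slope of log-moment vs. log-scale."""
    x = np.asarray(x, dtype=float)
    K = []
    for q in qs:
        logm, logs = [], []
        for s in scales:
            n = (len(x) // s) * s
            agg = x[:n].reshape(-1, s).mean(axis=1)   # average over scale s
            logm.append(np.log(np.mean(np.abs(agg) ** q)))
            logs.append(np.log(s))
        slope, _ = np.polyfit(logs, logm, 1)
        K.append(-slope)
    return np.array(K)

x = cascade(14, rng)                     # 2**14 = 16384 "daily" values
K = moment_scaling(x, qs=[1.0, 2.0], scales=[1, 2, 4, 8, 16, 32, 64, 128, 256])
```

For a conserved cascade K(1) = 0 and K(q) is convex; a nonzero, growing K(2) is the multiscaling signature that distinguishes a multifractal from simple (mono)scaling.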
Article
A large series of wildfire records of the Regional Forest Service of Liguria (northern Italy) from 1986 to 1993 was examined for agreement with power-law behavior between frequency of occurrence and size of the burned area. The statistical analysis shows that the idea of self-organized criticality (SOC) applies well to explain wildfire occurrence on a regional basis.
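Agreement with the frequency-size power law described above is typically checked with a continuous maximum-likelihood (Hill-type) exponent estimate. The synthetic "burned areas", the true exponent, and the threshold x_min below are illustrative assumptions, not data from the cited records:

```python
import numpy as np

rng = np.random.default_rng(4)

def powerlaw_mle(sizes, x_min):
    """Continuous maximum-likelihood estimate of the exponent alpha in
    p(x) ~ x^(-alpha) for x >= x_min (Hill-type estimator)."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# synthetic burned areas with a known exponent alpha = 1.8,
# sampled by inverting the Pareto CDF
alpha_true = 1.8
u = rng.uniform(size=5000)
areas = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = powerlaw_mle(areas, x_min=1.0)
```

The MLE avoids the bias of least-squares fits on log-log histograms, which is why it is the standard check for SOC-style frequency-size statistics.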
Article
Seismic activity may be treated as a process of slip movements along a complex system of interacting faults. Numerical simulations of quasi-static faulting process were carried out by using two-dimensional model of four parallel faults. We may observe a variety of possible fault zone behaviour, which results from the interactive and dynamical nature of the system. Some preliminary results of three-dimensional modelling are also presented.
Article
Aftershock frequency decay with time is characterized by a power law (the modified Omori formula) with an exponent p, which differs between aftershock sequences. A theoretical study suggested that p, which is a rate constant of aftershock decay, is related to the fractal dimension of a pre-existing fault system. This has, however, never been checked. The aftershock size distribution is also characterized by an exponential distribution in magnitude (the Gutenberg–Richter relation) with a slope b. Although p is expected to be related to b, which is related to the partitioning rate of earthquake energy, the relationship has never been established. Here the relation between the p-values and the fractal dimensions of the pre-existing fault systems, and that between the p-values and the b-values, are explored using natural seismicity data and data on the observable fault systems. The p- and b-values were estimated for fifteen aftershock sequences which occurred in Japan. In this paper aftershocks were identified on the basis of a phenomenological definition in the seismicity data. The fractal capacity dimensions D0 are estimated for the pre-existing active fault systems observed on the surface in the aftershock regions. In the present paper the standard box-counting method was adopted to obtain D0. Negative correlations between (1) p and D0, and (2) p and b were observed with some scatter. Observation (1) shows that the rate of aftershock decay p decreases systematically with increasing occupancy rate of the pre-existing active fault system D0 and suggests that aftershock decay dynamics is constrained by the pre-existing fracture field. Observation (2) shows that p is indeed related to b.
Moreover, we offer a possible interpretation of these negative correlations and of the scatter in both observations: the scatter is attributed to the difference between the fractal dimension of the 3-D fracture structure in the crust and that of its 2-D cross-section (the observed active fault system). Supported by further tests, this paper strongly suggests that the scaling of a natural fracture system is self-affine (with different fractal scalings in different directions) rather than self-similar, which would be a manifestation of regional anisotropy of the fracture system, and that the seismic parameters p and b depend on the 3-D structure of the fracture system in the crust.
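The standard box-counting method used above to obtain D0 can be sketched as follows; the box-size grid and the straight-line sanity check are our own illustrative choices:

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Capacity dimension D0 of a 2-D point set by box counting:
    count occupied boxes N(eps) for each box size eps, then fit the
    slope of log N(eps) against log(1/eps)."""
    points = np.asarray(points, dtype=float)
    counts = []
    for eps in eps_list:
        # integer box index of every point; unique rows = occupied boxes
        boxes = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)),
                          np.log(counts), 1)
    return slope

# sanity check: a straight "fault trace" should give D0 close to 1
t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, 0.5 * t])
d0 = box_counting_dimension(line, eps_list=[1/8, 1/16, 1/32, 1/64, 1/128])
```

Applied to a digitized fault-trace map, the same routine yields the D0 values correlated with p in the study above; a branching fault network would give a D0 between 1 and 2.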
Article
Full-text available
A scientific model need not be a passive and static descriptor of its subject. If the subject is affected by the model, the model must be updated to explain its affected subject. In this study, two models regarding the dynamics of model-aware systems are presented. The first explores the behavior of "prediction seeking" (PSP) and "prediction avoiding" (PAP) populations under the influence of a model that describes them. The second explores the publishing behavior of a group of experimentalists coupled to a model by means of confirmation bias. It is found that model-aware systems can exhibit convergent, random, or oscillatory behavior and display universal 1/f noise. A numerical simulation of the physical experimentalists is compared with actual publications of neutron lifetime and mass measurements and is in good quantitative agreement.
Article
Full-text available
Many factors complicate earthquake sequences, including the heterogeneity and self-similarity of the geological medium, the hierarchical structure of faults and stresses, and small-scale variations in the stresses from different sources. A seismic process is a type of nonlinear dissipative system demonstrating opposing trends towards order and chaos. Transitions from equilibrium to unstable equilibrium and local dynamic instability appear when there is an inflow of energy; reverse transitions appear when energy is dissipating. Several metastable areas of different scales exist in the seismically active region before an earthquake. Some earthquakes are preceded by precursory phenomena of different scales in space and time. These include long-term activation, seismic quiescence, foreshocks in the broad and narrow sense, hidden periodic vibrations, effects of the synchronization of seismic activity, and others. Such phenomena indicate that the dynamic system of the lithosphere is moving to a new state – catastrophe. A number of examples of medium-term and short-term precursors are shown in this paper. However, no precursors identified to date are clear and unambiguous: the percentage of missed targets and false alarms is high. Weak fluctuations from external and internal sources play a major role on the eve of an earthquake, and the occurrence time of the future event depends on the collective behavior of triggers. The main task is to improve the methods of metastable zone detection and probabilistic forecasting.
Article
Dynamics of exploratory behaviour of rats and home base establishment is investigated. A time series of the instantaneous speed of rats was computed from their position during exploration. The probability distribution function (PDF) of the speed obeys a power law with exponents ranging from 2.1 to 2.32. The PDF of the recurrence time of large speed also exhibits a power law, P(τ) ~ τ^(−β), with β from 1.56 to 2.30. The power spectrum of the speed is in general agreement with the 1/f spectrum reported earlier. These observations indicate that the acquisition of spatial information during exploration is self-organized with power-law temporal correlations. This provides a possible explanation for the home base behaviour of rats during exploration. The exploratory behaviour of rats resembles other systems exhibiting self-organized criticality, e.g., earthquakes, solar flares, etc.
Article
Full-text available
We propose a new avalanching model which is characterized by (a) a local threshold for the transition from passive to active states, (b) a finite lifetime of active sites, and (c) dissipation. This model seems to be more appropriate for the description of a continuous system where localized reconnection plays a crucial role. The model allows for an analytical treatment. We establish the shape of the distribution of cluster sizes and the relation of the observables to the model parameters. The results are illustrated with numerical simulations which support the analytical results.
Article
Full-text available
Nosologically, Alzheimer's disease (AD) is not a single disorder. Missense gene mutations involved in the increased formation of the amyloid-beta protein precursor derivatives amyloid-beta (Abeta(1-40) and Abeta(1-42/43)) lead to autosomal dominant familial AD, found in the minority of AD cases. However, millions of subjects suffer from sporadic AD (sAD) of late onset, for which no convincing evidence suggests Abeta as the primary disease-generating compound. Environmental factors operating during pregnancy and postnatally may affect susceptibility genes and stress factors (e.g., cortisol), consequently affecting brain development both structurally and functionally, causing diseases that only become manifest late in life. With aging, a desynchronization of biological systems may result, further increasing brain entropy and declining criticality. In sAD, this desynchronization may involve stress components, cortisol and noradrenaline, reactive oxygen species, and membrane damage as major candidates causing an insulin-resistant brain state with decreased glucose/energy metabolism. This further leads to a derangement of ATP-dependent cellular and molecular work and of cell function in general, as well as derangements in the endoplasmic reticulum/Golgi apparatus, axons, synapses, and membranes in particular. A self-propagating process is thus generated, in which the increased formation of hyperphosphorylated tau-protein and Abeta are abnormal terminal events in sAD rather than causes of the disorder, as elaborated in the review.