Sensitivity of Probabilistic Tsunami Hazard Assessment to Far-Field Earthquake Slip Complexity and Rigidity Depth-Dependence: Case Study of Australia

GARETH DAVIES¹ and JONATHAN GRIFFIN¹,²
Abstract—Probabilistic Tsunami Hazard Assessment (PTHA) often proceeds by constructing a suite of hypothetical earthquake scenarios, and modelling their tsunamis and occurrence-rates. Both tsunami and occurrence-rate models are affected by the representation of earthquake slip and rigidity, but the overall importance of these factors for far-field PTHA is unclear. We study the sensitivity of an Australia-wide PTHA to six different far-field earthquake scenario representations, including two rigidity models (constant and depth-varying) combined with three slip models: fixed-area-uniform-slip (with rupture area deterministically related to magnitude); variable-area-uniform-slip; and spatially heterogeneous-slip. Earthquake-tsunami scenarios are tested by comparison with DART-buoy tsunami observations, demonstrating biases in some slip models. Scenario occurrence-rates are modelled using Bayesian techniques to account for uncertainties in seismic coupling, maximum-magnitudes and Gutenberg-Richter b-values. The approach maintains reasonable consistency with the historical earthquake record and spatially variable plate convergence rates for all slip/rigidity model combinations, and facilitates partial correction of model-specific biases (identified via DART-buoy testing). The modelled magnitude exceedance-rates are tested by comparison with rates derived from long-term historical and paleoseismic data and alternative moment-conservation techniques, demonstrating the robustness of our approach. The tsunami hazard offshore of Australia is found to be insensitive to the choice of rigidity model, but significantly affected by the choice of slip model. The fixed-area-uniform-slip model produces lower hazard than the other slip models. Bias adjustment of the variable-area-uniform-slip model produces a strong preference for ‘compact’ scenarios, which compensates for a lack of slip heterogeneity. Thus, both heterogeneous-slip and variable-area-uniform-slip models induce similar far-field tsunami hazard.
Key words: Probabilistic tsunami hazard assessment, sensitivity analysis.
1. Introduction
Destructive tsunamis are most often generated by large subduction zone earthquakes (Grezio et al. 2017). Although the highest runup usually occurs near to the source, earthquake-generated tsunamis show strong directivity and can remain hazardous at trans-oceanic distances (Ben-Menahem and Rosenman 1972). This was illustrated by the far-field impacts of the 2004 Sumatra-Andaman tsunami (300 deaths in Somalia), the 1960 Chile tsunami (203 deaths in Hawaii and Japan), and the 1946 Aleutian tsunami (162 deaths in California, the Marquesas and Hawaii) (Okal et al. 2002; Fritz and Borrero 2006; Okal 2011). The latter sites range between 4000 and 17,000 km from the tsunami source. Probabilistic Tsunami Hazard Assessments (PTHAs) suggest far-field subduction earthquakes can contribute at first-order to the hazard even for sites exposed to near-field subduction sources, such as Crescent City (near Cascadia) and Napier (near the Hikurangi trench) (Gonzalez et al. 2009; Geist and Parsons 2016; Power et al. 2017).
A key challenge for earthquake-generated tsunami hazard assessments concerns representing earthquakes and their occurrence-rates given substantial uncertainties in the underlying science (Selva et al. 2016; Davies et al. 2017; Power et al. 2017). Most studies follow a computational approach which requires specifying each earthquake scenario's location, moment magnitude Mw, focal mechanism, rupture extent, rigidity and spatial distribution of slip, and subsequently modelling the resulting tsunami (Grezio et al. 2017). The plausible variation of earthquake parameters and occurrence-rates is often very large; e.g., on the Kermadec-Tonga trench
¹ Positioning and Community Safety Division, Geoscience Australia, Canberra, Australia. E-mail: gareth.davies@ga.gov.au
² Present Address: Department of Geology, University of Otago, PO Box 56, Dunedin 9054, New Zealand.

Pure Appl. Geophys. 177 (2020), 1521–1548
© 2019 The Author(s)
https://doi.org/10.1007/s00024-019-02299-w Pure and Applied Geophysics
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Berryman et al. (2015) suggested the maximum earthquake magnitude Mw,max could be anywhere between 8.1 and 9.6, implying large uncertainty in potential far-field tsunami impacts. The representation of rupture area, spatial slip variability and rigidity is also not standardized, with different approaches potentially leading to first-order differences in the modelled tsunami size when Mw is fixed (Geist and Bilek 2001; Gica et al. 2007; Davies et al. 2015; Mueller et al. 2015; Li et al. 2016; Butler et al. 2017; Mori et al. 2017). Compared with scenario-based hazard assessments, Probabilistic Tsunami Hazard Assessment (PTHA) methodologies have the great advantage that such uncertainties can be explicitly integrated into the analysis, but the hazard calculations remain sensitive to the choice of models and representation of uncertainties (Grezio et al. 2017).
Competing models/parameters must be weighted appropriately to ensure limited weight is placed on unlikely models (which is often nontrivial to ensure in practice, as for Probabilistic Seismic Hazard Assessment, Bommer and Scherbaum 2008). Bayesian methods offer a useful approach to this problem, as initial weights can be updated based on the model's consistency with data (Parsons and Geist 2009; Grezio et al. 2010, 2017; Selva et al. 2016; Davies et al. 2017). However, testing and sensitivity analyses remain critical for informing PTHA modelling decisions, by focussing attention on the most significant model weaknesses and the most influential sources of uncertainty (Li et al. 2016; Sepúlveda et al. 2019; Volpe et al. 2019).
Currently there is no consensus regarding how earthquake rupture complexity (i.e. variability of fault dimensions and spatial slip distribution) should be represented for PTHA. Often rupture complexity is considered most important for near-field tsunamis and of limited significance at far-field sites (e.g. Geist 2002; Okal and Synolakis 2008), whereas other studies suggest it is important also for far-field tsunamis (i.e. more than 1000 km from the source; Gica et al. 2007; Li et al. 2016; Butler et al. 2017). In the near-field case, it is clear that if Mw and the rupture area are fixed then slip heterogeneity substantially influences modelled tsunami wave heights and inundation (Geist 2002; Mueller et al. 2015; Ruiz et al. 2015). However, this does not necessarily imply that near-field tsunami hazard assessments must use heterogeneous-slip. An et al. (2018) found that if the rupture area and location were calibrated, then optimal uniform-slip scenarios matched near-field tsunami observations nearly as well as optimal heterogeneous-slip scenarios.

Fewer studies have focussed on the far-field case. Okal and Synolakis (2008) compared the far-field tsunami radiation pattern of a uniform-slip scenario with a weakly heterogeneous scenario (slip within 80–120% of the mean). The far-field tsunami radiation pattern was similar in both cases; however, finite-fault inversions suggest historical earthquakes have much more than 20% slip variation (e.g. Poisson et al. 2011; Lay 2018), which may induce greater far-field effects. Li et al. (2016) compared PTHA calculations derived from uniform and heterogeneous-slip earthquakes on the Manila trench, finding slip heterogeneity substantially increased the peak nearshore tsunami amplitude at both near-field and far-field sites (e.g. 35% at the 500 year average return interval (ARI)). Butler et al. (2017) modelled tsunamis in Hawaii due to a range of uniform-slip Mw 8.6 Aleutian Island earthquakes with varying length and width, finding a factor 2 variation in the modelled far-field tsunami size. Butler et al. (2017) also found Mw 9.25 Aleutian earthquakes with higher near-trench slip (and reduced deep slip) produced more inundation in Hawaii than similar uniform-slip scenarios, with some locations being more sensitive to shallow slip than others. Gica et al. (2007) studied the sensitivity of tsunami wave heights in Hawaii to variations in the dimensions of uniform-slip earthquakes in Chile, Japan, and the Aleutians. Their modelled wave heights varied by a factor 2 due to doubling/halving the rupture length, width and slip (while preserving Mw). They also report that, along the main beam of tsunami energy, the sensitivity of the wave to the rupture dimensions did not decrease with increasing distance from the source, suggesting that rupture complexity influences even the far-field tsunami behaviour.
In addition to rupture complexity, PTHAs may be affected by the representation of the fault rigidity μ. The rigidity mediates the relationship between Mw and slip (and thus the tsunami size); all else being equal, lower μ implies higher slip for fixed Mw and produces a larger offshore tsunami. In practice the rigidity of subduction zones is not very well constrained (Bilek and Lay 1999; Geist and Bilek 2001), but tsunami hazard studies often assume a constant μ of around 3–6 × 10^10 Pa (e.g. Butler et al. 2017; Fukutani et al. 2018; Kalligeris et al. 2017). Despite this common practice, the use of depth-varying μ (with low near-trench values) appears necessary to simulate the large tsunamis observed historically following shallow ‘tsunami-earthquake’ events (e.g. Geist and Bilek 2001; Newman et al. 2011b; Hébert et al. 2012), albeit not in every such case (Newman et al. 2011a). For example, low rigidity was used to simulate near-field runup of up to 10 m resulting from the 2006 Mw 7.7 Java tsunami-earthquake (Hébert et al. 2012), an event which also produced the highest historically observed tsunami runup in Australia [7.9 m at Steep Point, 1800 km from the source; Prendergast and Brown (2012)].
Even though the rigidity model is highly significant for interpreting such events, it remains unclear whether the overall tsunami hazard is sensitive to the choice of constant or depth-varying rigidity, assuming the modelled earthquake occurrence-rates are constrained with a combination of earthquake catalogue data and moment conservation methodologies (e.g. as in Kagan and Jackson 2013; Rong et al. 2014; Davies et al. 2017). The combination of these methodologies is desirable because earthquake catalogue data alone has limited power to constrain high Mw exceedance-rates (Zöller 2013, 2017). Importantly, moment conservation arguments imply the time-integrated earthquake slip-rate should balance the seismically coupled fraction of the tectonic convergence-rate (e.g. Bird and Kagan 2004; Bird and Liu 2007), suggesting that for fixed Mw, low-rigidity high-slip earthquakes should occur less often than those with higher rigidity and lower slip. This would reduce the effect of low-rigidity tsunami-earthquake type events on the overall hazard. Thus to understand how the hazard is affected by the rigidity model, it is necessary to apply moment-conservation approaches in conjunction with both constant and depth-varying rigidity models. Compared with the constant-rigidity case, moment-conservation approaches are more complex to apply with depth-varying rigidity because there is no longer a one-to-one relation between the scenario Mw and its spatially integrated slip.
This study considers the sensitivity of a PTHA to the representation of far-field earthquake scenarios and their occurrence-rates. The work was conducted as part of the 2018 Australian PTHA and the broader study is described in two detailed technical reports (Davies and Griffin 2018; Griffin and Davies 2018). In addition, Davies (2019) tests the modelled tsunami scenarios against DART-buoy observations without consideration of scenario frequencies or hazard. The current study focusses on the hazard calculation for offshore sites, and its sensitivity to key modelling assumptions. Inundation hazard assessments can be developed on a site-by-site basis by combining an offshore PTHA with high-resolution inundation models (e.g. Lane et al. 2012), but for simplicity that step is not undertaken herein. Because the importance of earthquake rupture complexity for far-field tsunami hazard is unclear, the tests feature three alternative slip models with varying degrees of complexity. For all slip models, the sensitivity of the hazard to the rigidity representation is also examined by re-interpreting the scenario Mw on the basis of either constant or depth-varying rigidity. To enable this, the current study develops scenario occurrence-rate models which are applicable to multiple slip and rigidity representations. The approach generalises the global-scale methodology of Davies et al. (2017), which constrained scenario rates using earthquake catalogues and plate convergence rates, but was limited to scenarios with constant rigidity, uniform slip and a deterministic Mw-vs-area relation. As well as generalising to a range of slip and rigidity models, the new occurrence-rate methodology makes more efficient use of earthquake catalogue data, and leads to a better match between the modelled long-term earthquake-slip rates and spatial variations of tectonic convergence.
2. Earthquake-Tsunami Scenario Database
Below we present key features of the 2018 Australian PTHA earthquake-tsunami scenario database used in this study. Further details are provided in Davies and Griffin (2018).
2.1. Source-Zone Discretization
The database includes major Pacific and Indian Ocean earthquake source-zones, and minor sources close to Australia (Fig. 1a). Fault geometries were defined using SLAB 2.0 or SLAB 1.0 where possible (Hayes et al. 2012, 2018), and elsewhere were schematized using linear or parabolic profiles (Griffin and Davies 2018). The fault geometries extend between the trench and a maximum seismogenic depth estimated from Berryman et al. (2015) and Griffin and Davies (2018). Most source-zones were modelled with only thrust earthquake scenarios, which are by far the largest contributor to the computed hazard in Australia, and for brevity the treatment of non-thrust sources is not presented herein.

Two alternative rigidity models are tested, featuring constant rigidity (μ = 30 GPa) and depth-varying rigidity (Fig. 1b). The latter was fit to rigidity estimates for subduction earthquakes by Bilek and Lay (1999) (Fig. 1b). It exhibits low rigidities for shallow earthquakes (but always ≥ 10 GPa), and transitions to the preliminary reference earth model (PREM) at greater depths (Dziewonski and Anderson 1981; Geist and Bilek 2001).
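The qualitative shape of the depth-varying rigidity model can be sketched as a simple function of depth. The linear ramp and the deep value below are illustrative assumptions only (the actual model is fit to the Bilek and Lay (1999) estimates and transitions to PREM); the 10 GPa floor matches the text.

```python
def rigidity_gpa(depth_km, mu_floor=10.0, mu_deep=67.0, transition_km=40.0):
    """Illustrative depth-varying rigidity (GPa).

    mu_floor:      minimum rigidity for shallow ruptures (10 GPa, as stated).
    mu_deep,
    transition_km: assumed PREM-like deep value and ramp depth (placeholders).
    """
    if depth_km >= transition_km:
        return mu_deep
    # Linear ramp toward the deep value, floored near the trench.
    return max(mu_floor, mu_deep * depth_km / transition_km)
```

Any monotone profile with these end-member behaviours would serve equally well for the sensitivity arguments that follow.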
Each source-zone was discretized into unit-sources with dimensions of 50 × 50 km², although this varies to match the source-geometry (Fig. 1c–h). To better represent non-planar fault geometries the unit-sources were subdivided into a large number of rectangular ‘sub-unit-sources’ (Davies et al. 2017). The vertical co-seismic seabed deformation associated with 1 m of slip on each unit-source was computed using the homogeneous elastic half-space model (Okada 1985) integrated over sub-unit-sources. The ocean surface deformation was derived from this using a Kajiura filter (Kajiura 1963; Glimsdal et al. 2013). The resulting unit-source tsunami was modelled for 36 h with the linear shallow water equations on a 1-arc-min grid with elevation based on GEBCO14 and GA250 (Whiteway 2009; Weatherall et al. 2015). The tsunami model domain includes global longitudes [40, 320] and latitudes [−72, 65], with boundary conditions being periodic (east–west) and reflective (north–south) (Davies and Griffin 2018).

Figure 1
Overview of the earthquake-tsunami scenario database: (a) earthquake source-zones used in this study. Numbers show DART buoy locations. Black points along large source-zones denote segment boundary locations based on Berryman et al. (2015), which are given 50% weight in the scenario rate calculation (with 50% on an unsegmented model, Sect. 3.1); (b) rigidity-vs-depth on subduction zones using data from Bilek and Lay (1999), with the depth-dependent and constant-rigidity (30 GPa) models used here for thrust scenarios. The preliminary reference earth model (PREM) is also shown (Dziewonski and Anderson 1981); (c, d) example fixed-area-uniform-slip (FAUS) scenarios; (e, f) example variable-area-uniform-slip (VAUS) scenarios; (g, h) example heterogeneous-slip (HS) scenarios. All examples (c–h) are on the Kurils–Japan source-zone.
2.2. Earthquake-Tsunami Scenarios

Earthquake scenarios consist of linear combinations of unit-sources (Fig. 1c–h), with the associated tsunami being a linear combination of the unit-source tsunamis (Thio et al. 2007; Burbidge et al. 2008). On each source-zone the constant-rigidity earthquake scenarios were initially generated as detailed below, with magnitudes 7.2, 7.3, ..., 9.7, 9.8 for computational convenience. Magnitudes above a source-zone-specific Mw,max < 9.8 will later be assigned a rate of zero (Sect. 3.1), while in practice Mw 7.2 earthquakes will only generate small waves near Australia.
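Because the propagation model is linear, a scenario's tsunami at a gauge is simply the slip-weighted sum of the precomputed unit-source waveforms. A minimal sketch (array names are illustrative):

```python
import numpy as np

def scenario_waveform(slip_m, unit_waveforms):
    """Tsunami time-series at a gauge for one scenario.

    slip_m:         slip on each unit-source (m), shape (n_units,)
    unit_waveforms: gauge response to 1 m of slip on each unit-source,
                    shape (n_units, n_times)
    """
    # Linear superposition: each row is scaled by its slip and summed.
    return np.asarray(slip_m) @ np.asarray(unit_waveforms)
```

The same superposition applies to the seabed deformation fields, which is what makes precomputing unit-source responses worthwhile.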
To understand how the hazard is affected by the earthquake representation, three different kinds of constant-rigidity scenarios were created (Fig. 1c–h):

Fixed-area-uniform-slip (FAUS) scenarios have uniform slip, with magnitude-dependent length and width based on the scaling relations of Strasser et al. (2010), ignoring the predictive uncertainty terms (Fig. 1c, d). For each magnitude the FAUS scenarios include a fixed number of unit-sources along-strike and down-dip (e.g. 16 × 4 for the examples in Fig. 1c, d, determined following Davies et al. (2017)) and the set of unit-sources is moved through all possible source-zone locations. By iterating over all magnitudes, the full set of FAUS scenarios is produced.
Variable-area-uniform-slip (VAUS) scenarios also have uniform slip, but account for ±2σ predictive uncertainties in the rupture length and width using the scaling relations of Strasser et al. (2010) (Fig. 1e, f). At least 15 VAUS scenarios were generated for each FAUS scenario, all having the same magnitude and a random length and width (with independent residuals). The VAUS scenario location is also random, but at least partially overlaps the ‘parent’ FAUS scenario. The number of ‘child’ VAUS scenarios per ‘parent’ FAUS scenario was increased to more than 15 if necessary, to ensure at least 200 VAUS scenarios on the source-zone at each magnitude. These numbers were found sufficient to obtain convergent hazard results at a range of sites around Australia, which was tested by splitting the scenario set into two equal groups and graphically comparing the tsunami maximum-stage vs return-period curves (Davies and Griffin 2018).
Heterogeneous-slip (HS) scenarios are generated using the VAUS scenario's rupture dimensions but have spatially non-uniform slip (Fig. 1g, h). The slip is randomly generated using a k⁻²-type model, specifically the SNCF model of Davies et al. (2015). The SNCF model was chosen because it had the best performance among eight k⁻²-type models tested by comparison with 66 finite-fault inversions (Davies et al. 2015). One HS scenario is generated per VAUS scenario. This implies a similar parent-child relationship exists between the FAUS (parent) scenarios and HS (child) scenarios, with 15 or more ‘children’ for each parent. Multi-site convergence tests indicate there are enough scenarios to support our hazard calculations (Davies and Griffin 2018).
Further implementation details are provided elsewhere (Davies and Griffin 2018; Davies 2019). A key concept is the parent-child relation between FAUS scenarios and the other scenario types, which will be exploited when assigning occurrence-rates to scenarios (Sect. 3.2) and for model testing (Sect. 2.3).
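The idea behind the HS scenarios' k⁻²-type slip can be illustrated with a generic spectral-filtering construction: random phases are weighted by an amplitude spectrum that decays like k⁻². This sketch is not the SNCF model (whose specific form is given in Davies et al. (2015)), just a minimal example of the family.

```python
import numpy as np

def k2_slip(nx, ny, mean_slip_m, seed=0):
    """Random slip field whose amplitude spectrum decays like k^-2.

    A generic illustration of a k^-2-type model, not the SNCF model."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                      # zeroes the DC term below
    spectrum = k ** -2.0
    phases = np.exp(2j * np.pi * rng.random((ny, nx)))
    field = np.fft.ifft2(spectrum * phases).real
    field -= field.min()                  # enforce non-negative slip
    field *= mean_slip_m / field.mean()   # rescale to the target mean slip
    return field
```

Steeper or shallower spectral decay would concentrate or spread the asperities; the k⁻² decay is the empirically favoured choice in the finite-fault literature cited above.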
The effect of depth-varying rigidity is simulated by re-labelling the constant-rigidity scenario magnitudes, without otherwise changing their slip or area. Each unit-source is assigned a depth-dependent rigidity (Fig. 1b), assuming for simplicity the trench is 4 km below mean-sea-level, and these rigidities are used to re-compute each scenario's magnitude. Scenarios in shallow, low-rigidity regions thus behave like constant-rigidity earthquakes with higher magnitude, and conversely for high-rigidity regions. It is necessary to adjust the scenario rate computations with depth-varying rigidity to maintain consistency with instrumental magnitude observations and tectonic constraints (Sect. 3.3), which affects the resulting hazard. This is not specific to our magnitude-relabelling approach; for example, Scala et al. (2019) propose a completely different treatment of depth-varying rigidity for PTHA which also requires scenario rate modifications.
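The re-labelling follows from the standard moment-magnitude definition (Hanks and Kanamori 1979): the slip and unit-source areas stay fixed, and only the rigidity entering the seismic moment changes. A sketch:

```python
import numpy as np

def relabelled_mw(slip_m, area_m2, rigidity_pa):
    """Moment magnitude from per-unit-source slip, area and rigidity.

    M0 = sum(mu_i * A_i * s_i);  Mw = (2/3) * (log10(M0) - 9.05).
    """
    m0 = np.sum(np.asarray(rigidity_pa) * np.asarray(area_m2) * np.asarray(slip_m))
    return (2.0 / 3.0) * (np.log10(m0) - 9.05)
```

With constant 30 GPa rigidity this recovers the original magnitude; assigning lower rigidity to shallow unit-sources lowers M0 and hence the re-labelled Mw for fixed slip. Equivalently, a shallow scenario of given re-labelled Mw carries more slip, which is why it behaves like a higher-magnitude constant-rigidity event.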
2.3. Biases in Tsunami Scenarios
Scenario generation methods were tested by comparing random database scenarios with 18 earthquake-generated tsunamis observed at DART buoys in 2006–2016 (see two examples in Fig. 2a, b). Full details are reported elsewhere (Davies 2019); below we summarise key results that affect the scenario occurrence-rate modelling in this study.

Figure 2
(a) Example good-fitting database scenarios for the 2014/04/01 Mw 8.2 Iquique (Chile) earthquake-generated tsunami using the constant-μ FAUS, VAUS and HS models. Each panel compares the tsunami observed at a DART buoy with the same three database scenarios (see Fig. 1 for DART locations). Vertical scale in meters. Modelled time-series are temporally offset by the optimum value determined in the goodness-of-fit calculation; (b) same as panel (a) for the 2011/03/11 Mw 9.1 Tohoku tsunami, using the depth-varying μ scenarios. For brevity only 7 of the DART observations are shown; (c) magnitude vs mean-slip for the 5 best-fitting database scenarios for all 18 test events (constant μ case). The scaling-relation mean-slip is inferred from the Strasser et al. (2010) length and width scaling relations assuming uncorrelated residuals. The ‘compact-uniform-slip’ curve was derived from the area scaling relation of An et al. (2018), A = 2.89 × 10^−11 M0^(2/3), assuming μ = 44 GPa because they used PREM rigidities; (d) probability density for the maximum-slip percentile of good-fitting events. This is not applied to FAUS scenarios because by construction they have little variability.

For each
scenario generation method, each of the 18 observed tsunamis was compared with all database scenarios having ‘similar earthquake location and magnitude’. Scenarios are defined as having ‘similar earthquake location and magnitude’ as an observed event if their Mw is within ±0.15 of the GCMT catalogue value, and their parent FAUS scenario includes unit-sources within half a scaling-relation length and width of the GCMT hypocenter (Davies 2019). This approach to testing was applied because in PTHA, scenario occurrence-rates are generally modelled as a function of the earthquake location and magnitude (e.g. Selva et al. 2016; Power et al. 2017; Davies et al. 2017); thus to avoid biases in the hazard, the ‘randomly generated tsunami waveforms’ should represent real tsunamis generated by earthquakes with similar location and magnitude. For each observed event, a weighted least-squares goodness-of-fit statistic was used to measure the agreement between the observed tsunami at DART buoys and all aforementioned scenarios (Davies 2019). The goodness-of-fit statistic includes an optimal time-offset (Lorito et al. 2008; Romano et al. 2016; Ho et al. 2019) to account for processes that can delay wave-propagation, but are not treated in our linear shallow water model (Watada et al. 2014; Allgeyer and Cummins 2014; Baba et al. 2017).
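The role of the optimal time-offset can be illustrated with a toy weighted least-squares statistic that scans integer-sample shifts of the modelled series. This is only a schematic; the actual statistic and weights are detailed in Davies (2019).

```python
import numpy as np

def best_offset_misfit(obs, model, weights, max_shift):
    """Minimum weighted least-squares misfit over time shifts of the model.

    Schematic only: uses circular integer-sample shifts, whereas the real
    statistic optimises a continuous offset per buoy.
    """
    best = np.inf
    for shift in range(-max_shift, max_shift + 1):
        resid = obs - np.roll(model, shift)
        best = min(best, float(np.sum(weights * resid ** 2)))
    return best
```

Allowing the shift absorbs propagation delays (elastic loading, seawater compressibility) that the linear shallow water solver does not represent, so the statistic compares waveform shape and amplitude rather than arrival time.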
Figure 2a, b illustrates the best-fit FAUS, VAUS and HS database scenarios identified with the above procedure for 2 of the 18 test events: the 2014 Mw 8.2 Iquique (Chile) tsunami and the 2011 Mw 9.1 Tohoku tsunami (see Davies and Griffin 2018; Davies 2019 for other examples). For the 2014 event a reasonable agreement is obtained between observations and the best database scenario for every model type (FAUS, VAUS, HS) (Fig. 2a). In contrast, for the Tohoku event all FAUS scenarios produced long-period low-amplitude waves which poorly matched observations both near and far from the earthquake source, whereas the best VAUS and HS scenarios performed well (Fig. 2b). These visual observations are consistent with quantitative goodness-of-fit results (Davies 2019).
Analysis of the good-fitting earthquake scenarios was used to further assess model biases (Fig. 2c, d; Davies (2019)). If a model gives an unbiased representation of random tsunamis, then the properties of good-fitting earthquake scenarios should not differ systematically from random scenarios with similar location and magnitude, when considered over all 18 test events. Conversely, if a model has some bias (e.g. producing too many low-slip, high-area scenarios), we may see statistical differences between good-fitting and random earthquake scenarios. Figure 2c, d suggests the good-fitting VAUS model scenarios most often have high slip relative to the scaling relation used to construct them, which indicates bias in the VAUS model. In contrast, the good-fitting HS scenarios exhibit mean-slip variability within the expected ±2σ range of random scenarios, without any strong preference for high or low values (Fig. 2c, d). The FAUS scenarios have little variability by construction and so their biases cannot be usefully quantified with this approach, but we emphasise that some historical events are not well modelled with FAUS scenarios (e.g. Tohoku, Fig. 2b).

The VAUS model biases are qualitatively consistent with the results of An et al. (2018). They simulated near-field tidal gauge and DART buoy observations for six tsunamis using both heterogeneous and uniform slip models, and found near-optimal results could consistently be obtained using compact uniform-slip earthquakes with a low aspect ratio (L = W) and high slip (Fig. 2c). We infer the compact nature of good-fitting VAUS scenarios allows representation of rupture asperities, which have a dominant influence on the resulting tsunami. The HS model can simulate those asperities directly and so avoids similar biases (Davies 2019).
3. Scenario Rates
The scenario rate modelling methodology was designed to meet the following objectives:

1. Applicable to all earthquake-slip and rigidity models in Sect. 2.2.
2. On each source-zone the modelled magnitude-frequency distribution should be reasonably consistent with plate convergence rates and instrumental seismicity.
3. The method should treat uncertainties in the seismic coupling, maximum magnitudes and the Gutenberg-Richter b-value.
4. Spatial variations in the scenario rates should reflect variations in plate convergence.
Our approach generalises that of Davies et al. (2017), because the latter is only applicable to FAUS-type scenarios with constant rigidity (violating objective 1) and has some weaknesses regarding spatial variations in scenario rates (objective 4) that are addressed herein. Our new approach also makes more efficient use of earthquake catalogue data to constrain uncertainties (objective 3). The approach involves firstly modelling each source-zone's magnitude-frequency distribution (Sect. 3.1), and secondly partitioning these rates among individual scenarios (Sect. 3.2). Possible segmentation of large source-zones (boundaries in Fig. 1) is treated by applying the model separately to: (A) the unsegmented source-zone, and (B) the union of individual segments, with 50% weight on each interpretation. Ruptures are allowed to cross segment boundaries in any case (as occurred e.g. for the 2007 Solomons earthquake, Lorito et al. (2015a)), so the primary effect of segmentation is to enhance spatial variations in the source-zone's magnitude-frequency distribution. The use of depth-varying rigidity introduces additional complications which are addressed in Sect. 3.3. The results are tested in Sect. 3.4. The calculations which convert magnitude exceedance-rates to tsunami hazard metrics are described in Sect. 3.5.
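The 50/50 treatment of segmentation amounts to averaging two rate models: one for the unsegmented source-zone and one for the union of its segments. A trivial sketch of that blending:

```python
def blended_rate(rate_unsegmented, segment_rates, weight_unsegmented=0.5):
    """Magnitude exceedance-rate blending the unsegmented interpretation
    with the union (sum) of per-segment rates; 50% weight each by default."""
    union = sum(segment_rates)
    return weight_unsegmented * rate_unsegmented + (1.0 - weight_unsegmented) * union
```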
3.1. Source-Zone Integrated Magnitude Exceedance-Rates

Seismicity on each source-zone is assumed to follow one of two Gutenberg-Richter (GR) type relations with different tail behaviour: a characteristic GR model (Kagan 2002):

GR(x) = 10^(a − bx)   for x ≤ Mw,max
GR(x) = 0             for x > Mw,max     (1)

and a truncated GR model (Kagan 2002):

GR(x) = 10^(a − bx) − 10^(a − b·Mw,max)   for x ≤ Mw,max
GR(x) = 0                                 for x > Mw,max     (2)

For both models GR(x) gives the rate of earthquakes (events/year) with magnitude ≥ x as a function of three unknown parameters: a, b, and the maximum magnitude Mw,max.
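Equations (1)–(2) are straightforward to evaluate directly; the sketch below implements both forms (the parameter values in the test are arbitrary examples):

```python
import numpy as np

def gr_exceedance_rate(x, a, b, mw_max, truncated=True):
    """Rate (events/yr) of earthquakes with magnitude >= x.

    truncated=False gives the characteristic model (Eq. 1);
    truncated=True gives the truncated model (Eq. 2).
    """
    x = np.asarray(x, dtype=float)
    rate = 10.0 ** (a - b * x)
    if truncated:
        # Subtract the rate at Mw,max so the curve reaches zero there.
        rate = rate - 10.0 ** (a - b * mw_max)
    return np.where(x > mw_max, 0.0, rate)
```

Note the characteristic form drops discontinuously to zero at Mw,max, while the truncated form tapers continuously to zero; this difference in tail behaviour is why both are carried in the logic-tree.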
Uncertainty in each source-zone's magnitude-frequency distribution is represented using a logic-tree (Annaka et al. 2007). The logic-tree includes every combination of the GR model type and the parameters b, Mw,max, and the seismic coupling fraction c. The Gutenberg-Richter a-value is derived from these via moment conservation arguments (Bird and Kagan 2004; Bird and Liu 2007; Davies et al. 2017). The individual parameters vary through a source-zone specific set of values with ‘prior weights’ described below. The prior weight of each parameter combination is defined as the product of its individual parameter prior weights. Some parameter combinations will predict unrealistic magnitude-frequency distributions compared with historical seismicity, and this is dealt with by using earthquake catalogue data to update the weights via Bayes theorem (Davies et al. 2017).
The two GR models (Eqs. 1, 2) are assigned prior weights of 30% (characteristic) and 70% (truncated). The $b$ parameter is assigned twenty equally spaced values between 0.7 and 1.2 (Berryman et al. 2015; Davies et al. 2017) with uniform prior weights. The coupling prior weights are a 50:50 combination of two different distributions. The first takes the lower, preferred and upper coupling values from Berryman et al. (2015) (with default values of 0.3, 0.5, 0.7 where no information is available) and assumes they define the lower limit, median and upper limit of the prior coupling cumulative distribution function, with linear interpolation between these values. The second is a uniform distribution assigned to 20 coupling values in [0.1, 1.3]. This latter distribution is deliberately uninformative. The lower coupling limit ($c = 0.1$) is conservative, to prevent source-zones with rapid convergence and limited historical seismicity from being assigned a very low coupling after the Bayesian weight-update. Because the model herein cannot treat non-stationary seismicity, which could allow for longer time intervals between large earthquakes than expected under stationarity, such conservatism is warranted (Stirling and Gerstenberger 2018). The upper coupling limit ($c = 1.3$) is also conservative and deliberately exceeds the physical limit ($c \le 1$). This was done because modelled seismicity depends on the product of $c$, the fault area $A_T$, and the mean horizontal tectonic convergence rate $\dot{s}$, but uncertainties in the geometry and convergence are not directly treated in our methodology. The $\dot{s}$ values are defined using the models of Bird (2003), Koulali et al. (2015, 2016) and Griffin and Davies (2018), averaged over the source-zone. If the convergence direction is oblique to the trench at any point then the component of convergence that maximises $\dot{s}$ is used, up to a maximum of 50° away from pure thrust. This allows oblique subduction, as suggested for the Puysegur source-zone (Hayes and Furlong 2010). Although this potentially enables over-estimation of seismicity, it is counteracted by the Bayesian update, which will down-weight high $c$ values when they are inconsistent with observed seismicity (e.g. on 'quiet' source-zones with substantial convergence). By partially weighting the Berryman et al. (2015) coupling values the model can reflect the results of paleoseismic and geodetic analyses, which are informative for some source-zones (e.g. Cascadia) but not otherwise straightforward to include.
1528 G. Davies and J. Griffin Pure Appl. Geophys.
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
The maximum magnitude $M_{w,max}$ ranges between lower and upper limits. The lower limit reflects the largest earthquake thought to have occurred on the source-zone (based on Ekstrom et al. 2012; Storchak et al. 2012; Berryman et al. 2015), plus a small perturbation (0.05) to ensure it has a non-zero rate according to the truncated GR model (Eq. 2). On the South-American trench, a lower limit $M_{w,max} = 9.2$ is used to represent the 1960 Chile earthquake for consistency with tsunami and geodetic inversions (Moreno et al. 2009; Fujii and Satake 2013), even though seismic wave inversions suggest a higher magnitude (~9.5; Kanamori 1977; Engdahl and Villasenor 2002). The upper limit of $M_{w,max}$ is defined using the minimum of two scaling-relation based constraints. Firstly, $M_{w,max}$ is less than the magnitude of a 'compact' earthquake that fills the source-zone according to the $M_w$-vs-area scaling relation of Strasser et al. (2010), where the area is evaluated at −1 prediction standard-deviation to represent a 'compact' event. Secondly, $M_{w,max}$ is less than the magnitude of a 'narrow' earthquake which fills the down-dip width of the source-zone, using the $M_w$-vs-width scaling relation of Strasser et al. (2010) with width evaluated at −2 prediction standard-deviations (representing a 'narrow' earthquake). Forty $M_{w,max}$ values are initially created with equal spacing between these lower and upper limits, and uniform prior weights. Finally, all $M_{w,max}$ values are subsequently clipped to a maximum of 9.6, because scaling-relation based $M_{w,max}$ limits can be very high on large source-zones (Berryman et al. 2015; Davies et al. 2017).
The GR parameter $a$ is derived from the above parameters using a fault-based seismic-moment conservation approach (Bird and Kagan 2004; Bird and Liu 2007; Davies et al. 2017). The relation between earthquake scenarios and tectonic convergence is:

$$\sum_{e \in E} r_e (SA)_e = \frac{A_T (c\,\dot{s})\,\xi}{\cos(\delta)} \qquad (3)$$

Here $e$ denotes an individual earthquake scenario within the set of modelled scenarios $E$, $r_e$ is its rate (events/year), and $(SA)_e$ is its spatially integrated slip (m³). The full source-zone fault area is $A_T$ (m²), with average coupled horizontal convergence rate $(c\,\dot{s})$ (m/year) and cosine of mean dip $\cos(\delta)$. See Meade and Loveless (2009) for justification of the latter factor in Eq. 3, which is not universally applied in moment balance approaches (e.g. Kagan and Jackson 2013); irrespective, for our analysis its influence is largely offset when constraining the coupling coefficient (below). The factor $\xi \in [0, 1]$ accounts for the convergence due to earthquakes with magnitude $< M_{w,min}$, which are not represented among the modelled events $E$. For the current study $M_{w,min} = 7.2$, but given our scenario discretization this represents a bin of magnitudes $7.2 \pm \Delta/2$ where $\Delta = 0.1$ (Sect. 2.2). Assuming constant rigidity, the proportion of integrated slip due to the modelled scenarios is:

$$\xi = \frac{\int_{(M_{w,min} - \Delta/2)}^{M_{w,max}} gr(x)\, M_0(x)\, dx}{\int_{-\infty}^{M_{w,max}} gr(x)\, M_0(x)\, dx} \qquad (4)$$

where $M_0(x)$ is the seismic moment as a function of the magnitude $x$, and $gr(x) = \frac{dGR}{dx}$ (technically we assume infinitesimal smoothing is applied to the GR models near $x = M_{w,max}$ so the derivative is well defined). Due to cancellation, Eq. 4 is independent of $a$. Assuming constant rigidity, the $a$ parameter can be derived from Eqs. 3 and 4 using $c$, $\dot{s}$, $A_T$, $\delta$, $b$ and $M_{w,max}$ (known for each logic-tree branch). It is not necessary to know the individual scenario rates $r_e$, because with constant rigidity the spatially integrated slip $(SA)_e$ is a function of magnitude alone. This reasoning fails with depth-varying rigidity, necessitating a special treatment of that case (Sect. 3.3).
Vol. 177, (2020) Sensitivity of Probabilistic Tsunami Hazard Assessment 1529
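Under constant rigidity, the $a$-value calculation can be sketched numerically. The sketch below assumes the truncated GR model, a constant rigidity of 30 GPa, and the standard Hanks-Kanamori moment relation $M_0 = 10^{1.5 M_w + 9.05}$ N m; the source-zone dimensions and rates are illustrative, not values from the study:

```python
import numpy as np

def gr_a_from_moment_conservation(b, mw_max, mu, area, coupling, conv_rate,
                                  dip_deg):
    """Derive the GR a-value so that the truncated-GR seismic moment rate
    balances the coupled tectonic convergence (the reasoning of Eqs. 3-4).

    mu: rigidity (Pa); area: fault area (m^2); conv_rate: horizontal
    convergence rate (m/year); dip_deg: mean dip (degrees).
    """
    # Target seismic moment rate (N m / year): mu * A_T * (c sdot) / cos(dip)
    target = mu * area * coupling * conv_rate / np.cos(np.radians(dip_deg))

    # Magnitude density of event rates with a = 0: gr0(x) = b ln(10) 10^(-b x)
    x = np.linspace(mw_max - 12.0, mw_max, 20001)  # lower limit ~ -infinity
    dx = x[1] - x[0]
    integrand = b * np.log(10) * 10.0 ** (-b * x) \
        * 10.0 ** (1.5 * x + 9.05)                 # Hanks-Kanamori M0 (N m)
    integral0 = np.sum(integrand[1:] + integrand[:-1]) * 0.5 * dx  # trapezoid

    # The moment-rate integral scales as 10^a, so target = 10^a * integral0
    return float(np.log10(target / integral0))

# Illustrative values: 1000 km x 100 km source, 50 mm/year convergence
a = gr_a_from_moment_conservation(b=1.0, mw_max=9.0, mu=30e9,
                                  area=1000e3 * 100e3, coupling=0.5,
                                  conv_rate=0.05, dip_deg=15.0)
```

For the truncated model this integral also has a closed form, which the numerical result matches; doubling the coupling raises $a$ by $\log_{10} 2$.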
An extremely wide range of Gutenberg–Richter type models $GR_i$ result from these priors, as illustrated on the unsegmented Kermadec–Tonga source-zone (Fig. 3a). Both individual logic-tree branches and the prior-mean curve itself may deviate substantially from the observed seismicity rates. On all source-zones the observed seismicity is taken from the GCMT catalogue, including events with $M_w > (M_{w,min} - \Delta/2) = 7.15$, hypocentre within 0.4° of the unit-sources, depth < 71 km, and having at least one nodal plane with rake within 50° of pure thrust and strike within 50° of the nearest unit-source's strike.
To obtain better agreement between the rate models and the aforementioned data, the prior weights are updated using Bayes' theorem:

$$w_i = \frac{w_i^0\, L(\text{data}|i)}{\sum_i w_i^0\, L(\text{data}|i)} \qquad (5)$$

Here $w_i$ is the posterior weight of the $i$'th logic-tree branch, $w_i^0$ is the prior weight, and $L(\text{data}|i)$ is the likelihood of the data if the $i$'th logic-tree branch is true. The likelihood is calculated from the rate model and the assumption of stationary seismicity:

$$L(\text{data}|i) = L_{count}(\text{data}|i)\, L_{magnitude}(\text{data}|i) \qquad (6)$$

$$L_{count}(\text{data}|i) = \frac{\exp(-\lambda_i)\, \lambda_i^n}{n!} \qquad (7)$$
Figure 3. Source-zone magnitude-frequency modelling, using the unsegmented Kermadec-Tonga source as an example. (a) Magnitude-frequency curves prior to weight update; the many individual logic-tree branches (thin grey curves) appear as grey shading in much of the plot. (b) Magnitude-frequency curves after weight update. (c) Prior and posterior $c$ weights. (d) Prior and posterior $M_{w,max}$ weights. (e) Prior and posterior b-value weights.
$$L_{magnitude}(\text{data}|i) = \left(\frac{T}{\lambda_i}\right)^n \prod_{l=1}^{n} \left(\frac{dGR_i}{dx}(M_{w,l})\right) \qquad (8)$$

Equation 7 is a standard model for observed counts from a Poisson process, where $n$ is the number of events in the data, which spans an observation time $T$ (years), and $\lambda_i = GR_i(M_{w,min} - \Delta/2)\, T$ gives the mean number of events predicted by the $i$'th logic-tree branch in time $T$. Equation 8 was derived by converting $GR_i$ to a probability density for a random magnitude, with the likelihood being the product of the density at each observed magnitude $M_{w,l}$. The approach improves upon that of Davies et al. (2017), which ignored the observed magnitude distribution (i.e. $L_{magnitude} = 1$). In the special case with no observed events ($n = 0$), the current study also assumes $L_{magnitude} = 1$. The observation of zero events is nonetheless informative, because it suggests logic-tree branches which predict frequent events are unlikely to be correct, and Eq. 7 will down-weight such branches accordingly.
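The weight update of Eqs. 5-8 can be sketched as follows. This is a minimal illustration, not the study's implementation, and the two-branch example is hypothetical; the $n!$ term of Eq. 7 is omitted because it is identical for every branch and cancels in the normalisation of Eq. 5:

```python
import numpy as np

def update_logic_tree_weights(prior_w, rate_at_mmin, gr_density, obs_mags, T):
    """Bayesian update of logic-tree branch weights (Eqs. 5-8).

    prior_w[i]      : prior weight of branch i
    rate_at_mmin[i] : GR_i evaluated at (Mw_min - D/2), events/year
    gr_density[i]   : callable giving |dGR_i/dx| at a magnitude
    obs_mags        : observed magnitudes, recorded over T years
    """
    prior_w = np.asarray(prior_w, dtype=float)
    n = len(obs_mags)
    log_like = np.empty(len(prior_w))
    for i in range(len(prior_w)):
        lam = rate_at_mmin[i] * T                 # Poisson mean count, Eq. 7
        ll = -lam
        if n > 0:
            ll += n * np.log(lam)                 # remainder of Eq. 7
            ll += n * np.log(T / lam)             # prefactor of Eq. 8
            ll += sum(np.log(gr_density[i](m)) for m in obs_mags)
        log_like[i] = ll
    w = prior_w * np.exp(log_like - log_like.max())   # Eq. 5, normalised
    return w / w.sum()

# Hypothetical two-branch example: branch 1 predicts 0.3 events/year above
# Mw 7.15, branch 2 predicts 0.05; only two events observed in 40 years.
dens = [lambda m: 0.30 * np.log(10) * 10.0 ** (-(m - 7.15)),
        lambda m: 0.05 * np.log(10) * 10.0 ** (-(m - 7.15))]
w = update_logic_tree_weights([0.5, 0.5], [0.30, 0.05], dens, [7.3, 7.8], T=40.0)
```

With so few observed events, nearly all posterior weight shifts to the low-rate branch, mirroring the behaviour described for 'quiet' source-zones.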
Bayesian updating has a dramatic impact on the modelled earthquake rates and their uncertainties (Fig. 3a, b). In general the mean rate becomes more consistent with historical observations, and uncertainties at low magnitudes are reduced (Fig. 3b). On the unsegmented Kermadec–Tonga source-zone, the results reflect that historical seismicity was relatively low compared to the prior model (Fig. 3a). In principle this could indicate low coupling, and/or that significant slip is released in higher-magnitude earthquakes (in which case frequent low-magnitude earthquakes are not required to balance tectonic convergence). However, the observations are unlikely if $c$ is high and $M_{w,max}$ is low, because in this case more frequent low-magnitude earthquakes are required to balance tectonic convergence. As a result the Bayesian update shifts more weight onto lower $c$ values and higher $M_{w,max}$ values (Fig. 3c, d).
3.2. Partitioning the Source-Zone Integrated
Exceedance-Rates Among Scenarios
For each logic-tree branch $i$, the individual scenario occurrence-rate $r_{e,i}$ (events/year) for any scenario $e$ with magnitude $M_{w,e}$ must be consistent with the source-zone integrated exceedance-rate function $GR_i$. Supposing all scenarios have magnitudes arranged as $M_{w,min}, M_{w,min}+\Delta, M_{w,min}+2\Delta, \ldots$ representing bins of width $\Delta$ (as for our constant rigidity scenarios where $\Delta = 0.1$), this implies:

$$r_{e,i} = \Pr(e \,|\, M_w = M_{w,e}) \left( GR_i(M_{w,e} - \Delta/2) - GR_i(M_{w,e} + \Delta/2) \right) \qquad (9)$$

Here $\Pr(e \,|\, M_w = M_{w,e})$ is the conditional probability that scenario $e$ occurs, given that some random scenario with the same magnitude occurs on the same source-zone. This is modelled as independent of the logic-tree branch $i$.
Many studies assume uniform scenario conditional probabilities (e.g. Horspool et al. 2014; Løvholt et al. 2014; Lorito et al. 2015b):

$$\Pr(e \,|\, M_w = M_{w,e}) \propto 1 \qquad (10)$$

Alternatively, the scenario conditional probability may be manipulated to make earthquakes more likely to occur on rapidly converging, wider parts of the source-zone. For the case of FAUS scenarios with constant rigidity, Davies et al. (2017) proposed to represent this via:

$$\Pr(e \,|\, M_w = M_{w,e}) \propto \left(\dot{s}_e / S_e\right) \qquad (11)$$

where $S_e$ is the scenario's mean slip, and $\dot{s}_e$ is the average horizontal convergence rate where $e$ occurs on the source-zone.
A problem with both Eqs. 10 and 11 is that they artificially concentrate the time-integrated slip in the middle of the source-zone (along-strike), even when the tectonic convergence and source-geometry are uniform. For illustration, consider a FAUS earthquake with 3 × 2 unit-sources which occurs on a uniform source-zone with 7 × 2 unit-sources (Fig. 4a). An earthquake of this size can occur in five different positions on the source-zone. Among those five scenarios, unit-sources at the along-strike edges will be included in only one, whereas unit-sources in the middle of the source-zone will be included in three scenarios (Fig. 4a). If all five scenarios have the same slip and occurrence-rate, the time-integrated slip rate would be 3 times greater in the interior of the source-zone than at the along-strike edges. This conflicts with the assumed uniform horizontal convergence and illustrates the 'edge-effect' bias.
The same edge-effect bias occurs in realistic applications which integrate over all scenarios. Figure 4b–e demonstrates this on the Kurils-Japan subduction-zone. All examples use constant rigidity FAUS scenarios, and $GR_i$ in Eq. 9 is replaced with the posterior mean magnitude exceedance-rate over all $GR_i$ in the logic-tree. Although the input tectonic convergence does not vary strongly in space (Fig. 4b), the time-integrated slip rate is concentrated in the middle of the source-zone using either Eq. 10 (Fig. 4e) or Eq. 11 (Fig. 4d) as the scenario conditional probability model. The results of these two approaches would differ more substantially if the source-zone had greater spatial variations in convergence, but clearly both approaches suffer the edge-effect bias.
To approximately correct for the edge-effect, FAUS scenarios which touch an along-strike edge unit-source should have a higher occurrence-rate. Herein this is achieved by modifying Eq. 11 to:

$$\Pr(e \,|\, M_w = M_{w,e}) \propto \left(\dot{s}_e / S_e\right)(1 + k I_e) \qquad (12)$$

Here $I_e = 1$ for FAUS scenarios $e$ which include unit-sources on the along-strike edge of the unsegmented source-zone (and $I_e = 0$ otherwise), and $k$ is a source-zone specific constant. For each unsegmented source-zone, $k$ is determined numerically to give the smallest least-squares difference between the spatial distribution of the horizontal tectonic convergence rate and the model's time-integrated slip rate (using the posterior mean over all logic-tree branches). To account for the seismic coupling, which may lead to modelled slip rates being a fraction of the convergence rate, the variables are normalised (divided by their mean over all unit-sources) before computing $k$; thus only the spatial pattern of slip is considered. This approximation implies the conditional probability does not vary among logic-tree branches, a property shared by Eqs. 10 and 11. This simplifies some calculations and reduces the file-storage required for our analysis, although in principle a more complex treatment of $k$ could be developed. When modelling segmented source-zones the corresponding unsegmented $k$ value is used, because scenarios can span multiple segments and receive a partial weight from each based on the fraction of their moment that occurs there. Equation 12 leads to better agreement between the spatial patterns of convergence in the input data and the model (Fig. 4b, c), as compared with the use of Eq. 10 (Fig. 4e) or Eq. 11 (Fig. 4d).

Figure 4. Edge-effect biases in time-integrated slip rates. (a) If a 3 × 2 earthquake is moved through all possible locations on a source-zone, then fewer scenarios will touch unit-sources at the along-strike extremities; if not explicitly treated, this concentrates time-integrated slip in the source-zone interior. (b) Spatial distribution of the tectonic convergence rate prescribed for the Kurils–Japan source-zone; the modelled time-integrated slip rate should be similar to this. (c) Modelled time-integrated slip rates using Eq. 12 to correct for edge-effects. (d) Modelled time-integrated slip rates using Eq. 11, which does not include an edge-effect correction. (e) Modelled time-integrated slip rates using the 'equal conditional probability' approach without any edge-effect correction (Eq. 10).
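The numerical determination of $k$ can be illustrated on the uniform 7 × 2 toy source-zone of Fig. 4a. This sketch (not the study's code) grid-searches $k$ to minimise the least-squares mismatch between the normalised time-integrated slip and a uniform normalised convergence rate:

```python
import numpy as np

def integrated_slip(k, n_cols=7, rupture_cols=3):
    """Normalised time-integrated slip per along-strike column when
    edge-touching scenarios receive weight (1 + k), mimicking Eq. 12
    on a uniform toy source-zone."""
    slip = np.zeros(n_cols)
    n_pos = n_cols - rupture_cols + 1
    for start in range(n_pos):
        w = 1.0 + k if start in (0, n_pos - 1) else 1.0
        slip[start:start + rupture_cols] += w
    return slip / slip.mean()

# Grid-search k against the uniform normalised convergence rate (all ones)
k_grid = np.linspace(0.0, 10.0, 2001)
sse = [np.sum((integrated_slip(k) - 1.0) ** 2) for k in k_grid]
k_best = k_grid[int(np.argmin(sse))]
```

For this toy geometry the optimum is $k \approx 2.33$, i.e. edge-touching scenarios receive roughly triple weight, consistent with interior unit-sources being covered by three times as many scenarios.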
When source-zones include three or more unit-sources down-dip, a related edge-effect occurs in the down-dip direction. This reduces time-integrated slip rates in the deepest and shallowest rows of unit-sources relative to the mid-depth unit-sources, which are included in more scenarios. The down-dip edge-effect is relatively small compared with the along-strike variant discussed above; for instance, in Fig. 4c the shallow and deep unit-sources have modelled time-integrated slip rates that average 81% of those at mid-depth. An approximate correction could be developed by adapting the above approach, but no attempt was made to do so in the current study, because subduction-zone coupling is likely to be genuinely higher at mid-depths (e.g. Bilek and Lay 2018).
The arguments in this section thus far apply only to FAUS scenarios, because justification of Eqs. 11 and 12 requires that rupture area is a deterministic function of magnitude (Davies et al. 2017). To extend the approach to non-FAUS scenarios (Fig. 5) the occurrence-rate of each individual FAUS scenario is partitioned among its 'child' scenarios. Recall each FAUS scenario has at least 15 'child' HS scenarios (and similarly for VAUS), all with the same magnitude and partial location overlap with their parent (Sect. 2.2). The simplest occurrence-rate partitioning method is to split the parent FAUS scenario's occurrence-rate equally among its children. However an unequal partition is preferable, to partially correct known model biases such as those established earlier (most significantly for the VAUS model, Sect. 2.3). Thus in the current study an unequal partition is applied to each group of N child scenarios (N ≥ 15), separately for VAUS and HS. Each scenario is assigned a local slip percentile ($100 r/(N+1)$, where $r$ is its maximum-slip rank among the N scenarios), and the curves in Fig. 2d are applied to this percentile to define the unequal partition weights. Thus significantly more weight is placed on higher-slip VAUS scenarios, which show good fit to observations more often (Sect. 2.3). In contrast the HS scenarios have relatively uniform weights (Fig. 2d), because good-fitting HS scenarios do not show a strong preference for high or low slip.

Figure 5. Comparison of modelled time-integrated slip rates (posterior mean over all logic-tree branches) on the Kermadec-Tonga source-zone, computed using the FAUS, VAUS and HS slip models.
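The percentile-based partition can be sketched as below. The `preference` function stands in for the Fig. 2d curves, which are not reproduced here; the linear-in-percentile choice is hypothetical, merely illustrating a VAUS-like preference for high-slip children:

```python
import numpy as np

def child_rates(parent_rate, max_slips, preference):
    """Partition a parent FAUS scenario's occurrence-rate among its N
    child scenarios.

    Each child gets a slip percentile 100*r/(N+1) from its maximum-slip
    rank r (1 = smallest), and preference(percentile) sets the unequal
    weights before normalisation.
    """
    max_slips = np.asarray(max_slips, dtype=float)
    n = len(max_slips)
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(max_slips)] = np.arange(1, n + 1)
    percentiles = 100.0 * ranks / (n + 1)
    w = np.array([preference(p) for p in percentiles])
    return parent_rate * w / w.sum()

# Hypothetical preference favouring high-slip children (VAUS-like)
rates = child_rates(1e-3, max_slips=[2.1, 5.0, 3.2, 8.4],
                    preference=lambda p: p)
```

Whatever preference curve is used, the children's rates sum to the parent's rate, so the source-zone integrated magnitude-frequency distribution is unchanged.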
The occurrence-rate partitioning procedure does
not affect the source-zone’s integrated magnitude-
frequency distribution, so there is no difference in the
frequency of earthquakes with FAUS, VAUS or HS.
However the use of different earthquake slip models
can lead to slight changes to the modelled spatial
distribution of time-integrated slip on unit-sources,
because the slip on a single FAUS scenario will not
be spatially identical to the integrated slip on its child
HS or VAUS scenarios. In practice these differences
average out and are small enough to be ignored
(Fig. 5). For instance, at 95% of unit-sources in Fig. 5 the time-integrated slip rates of the HS and VAUS models differ by < 10% from the FAUS result.
3.3. Extension to Depth-Varying Rigidity
The arguments in Sects. 3.1 and 3.2 do not directly extend to the depth-varying rigidity case, because there is no longer a one-to-one relation between a scenario's magnitude and its spatially integrated slip $(SA)_e$. This complicates the previously simple relation between magnitude and convergence, invalidating Eq. 4 and the associated $a$-parameter calculation. It is also not obvious how to apply the scenario conditional probability model (Eqs. 9 and 12) to scenarios with variable-rigidity magnitudes, considering the latter are not arranged in a small set of discrete values $(7.2, 7.3, \ldots)$.
A key component of our solution to these problems is to parameterise the source-zone's magnitude-frequency distribution as a function of the earthquake's 'constant-rigidity magnitude', even when the rigidity is modelled as depth-varying. Obviously the constant-rigidity magnitude then differs from the 'true' magnitude. The difference between these quantities may be treated as a 'perturbation of the magnitude' (Fig. 6) and is explicitly treated below. The key point is that Eqs. 1 and 2 are applied using the constant-rigidity magnitude for $x$, although the posterior logic-tree weights will differ in the constant and variable-rigidity cases. This GR re-parameterisation is reasonable given that the constant and variable-rigidity magnitudes are strongly correlated (Fig. 6). Considering the modelled events are rare, we are unlikely to have sufficient data in the near future to distinguish the goodness-of-fit of either parameterisation.
The great benefit of the GR re-parameterisation is that most reasoning in Sects. 3.1 and 3.2 can be applied directly to variable-rigidity scenarios, using constant-rigidity magnitudes in Eqs. 1, 2, 3, 4, 9 and 12. The prior logic-tree weights are applied without modification, noting they are in any case very diffuse. However, adjustments are necessary when using magnitude observations to update the logic-tree weights (Eqs. 7, 8), because the $GR_i$ magnitude-frequency distribution now gives the exceedance-rate in terms of the 'constant-rigidity' magnitude $M_w^c$, rather than the 'depth-varying-rigidity magnitude' $M_w^v$ which is represented by observations. To enable the logic-tree weight update, the exceedance-rate function of $M_w^v$ (denoted $K_i(M_w^v)$) is derived as detailed below, and then used in place of $GR_i$ in Eqs. 7 and 8. This ensures the logic-tree weight update treats the observed magnitudes as representing depth-varying-rigidity earthquakes.
Figure 6. Constant-rigidity magnitude vs depth-varying-rigidity magnitude for HS scenarios on the Kermadec-Tonga source-zone.
To derive $K_i$, consider the magnitude perturbation $m'$ due to depth-varying rigidity:

$$m' = M_w^v - M_w^c \qquad (13)$$

For a random observed event $e$, we can consider $m'_e$ as a random variable with distribution conditional on its unknown constant-rigidity magnitude (Fig. 6):

$$\Pr(m'_e \le X) = F(X \,|\, M_{w,e}^c) \qquad (14)$$

where $F(X \,|\, M_{w,e}^c)$ is the cumulative distribution function of $m'_e$ for a random scenario $e$ with constant-rigidity magnitude $M_{w,e}^c$. Figure 6 highlights that, for scenarios in our database, the variability of $m'$ is magnitude dependent, with less variance at higher magnitudes because ruptures cover a larger area, averaging out the effect of rigidity variation. In the current study $F$ is modelled empirically on each source-zone using the differences between $M_w^v$ and $M_w^c$ in the scenario database. It will be necessary to evaluate $F$ at a continuous set of $M_w^c$ values, so interpolation is used in between the discrete scenario values $(7.2, 7.3, \ldots, 9.8)$, while extrapolation outside this range uses the nearest boundary $M_w^c$ value (7.2 or 9.8). Finally, supposing that $GR_i(x)$ gives the source-zone's exceedance-rate for $M_w^c$ on a particular logic-tree branch, the associated exceedance-rate for the variable-rigidity magnitudes is:

$$K_i(M_w^v) = \int_{-\infty}^{\infty} \left(1 - F(M_w^v - x \,|\, x)\right) gr_i(x)\, dx \qquad (15)$$

where $gr_i = \frac{dGR_i}{dx}$ (technically we assume infinitesimal smoothing of the GR models near $M_{w,max}$ so the derivative is well-defined). Notice the term in large parentheses in the integrand gives the probability that the magnitude perturbation is large enough for the variable-rigidity magnitude to exceed $M_w^v$ when the constant-rigidity magnitude is $x$.
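Equation 15 can be evaluated by simple quadrature. In this sketch the empirical distribution $F$ is replaced, purely for illustration, by a zero-mean normal CDF with a user-supplied spread (the study instead builds $F$ from the scenario database); the GR density and its parameters are likewise illustrative:

```python
import numpy as np
from math import erf

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / 2.0 ** 0.5))

def K_exceedance(mw_v, gr_density, sigma_of_mc, x_lo=7.15, x_hi=9.8, n=4000):
    """Quadrature for Eq. 15: exceedance-rate of the depth-varying-rigidity
    magnitude Mw^v, from the constant-rigidity rate density gr_i.

    sigma_of_mc(x) gives the spread of the magnitude perturbation m' at
    constant-rigidity magnitude x (assumed zero-mean normal here).
    """
    x = np.linspace(x_lo, x_hi, n)
    dx = x[1] - x[0]
    # 1 - F(mw_v - x | x): probability the perturbation lifts x above mw_v
    tail = np.array([1.0 - normal_cdf((mw_v - xi) / sigma_of_mc(xi))
                     for xi in x])
    y = tail * gr_density(x)
    return float(np.sum(y[1:] + y[:-1]) * 0.5 * dx)   # trapezoid rule

# Example: rate density for GR(x) = 0.1 * 10^-(x - 7.15), truncated at 9.8
dens = lambda x: np.log(10) * 0.1 * 10.0 ** (-(x - 7.15))
k_8 = K_exceedance(8.0, dens, sigma_of_mc=lambda x: 0.02)
```

As a sanity check, when the perturbation spread is small $K_i$ collapses to the underlying constant-rigidity exceedance-rate over the same magnitude range.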
The $K_i$ exceedance-rate model is computed numerically from Eq. 15 and used in place of $GR_i$ for all calculations associated with Eqs. 7 and 8, thus treating the observed magnitudes as having depth-varying rigidity when defining the logic-tree weights. The individual scenario occurrence-rates are then computed as in the constant-rigidity case (Sect. 3.2), but using the revised logic-tree weights.
3.4. Testing the Modelled Magnitude Exceedance-
Rates
As a first test, the modelled and observed seismicity are compared at the global scale (Fig. 7). Although these data were used to update the logic-tree weights and are thus not independent of the model, the comparison is useful because it may make model biases more obvious than tests at the individual source-zone level, where data sparsity permits a wide range of plausible magnitude-frequency curves (e.g. Fig. 3). The modelled mean magnitude exceedance-rates are in reasonable agreement with the GCMT observations using both constant and depth-varying $\mu$ (Fig. 7). The models give slightly higher exceedance-rates than empirical estimates, but are within 95% confidence intervals for the true exceedance-rate inferred from the data (Garwood 1936). At magnitudes ≳ 8.4 the depth-varying $\mu$ model predicts slightly greater exceedance-rates than the constant $\mu$ model (Fig. 7), because large earthquakes occur predominantly on wide, deep source-zones, where depth-varying rigidities on average exceed the constant value (30 GPa). For a fixed magnitude, higher rigidity implies less average slip, so exceedance-rates should increase to produce the same horizontal convergence. This effect is partially offset by the Bayesian update of logic-tree weights using earthquake catalogue data, which constrains the exceedance-rates at commonly occurring magnitudes irrespective of the rigidity and convergence. Thus the globally-integrated exceedance-rates do not differ much at magnitudes ≲ 8.4, irrespective of the rigidity model (Fig. 7).

Figure 7. Comparison of the globally integrated magnitude exceedance-rate model and the GCMT catalogue. The catalogue data combine all earthquakes used to update the source-zone logic-tree weights. The modelled exceedance-rates were derived by summing the mean scenario occurrence-rates. This leads to some artefacts in the depth-varying-rigidity curve at low magnitudes (around $M_{w,min}$), due to the use of a minimum magnitude combined with the magnitude-relabelling technique. However this is inconsequential for the hazard and is properly accounted for in the logic-tree weight update (Sect. 3.3).
To compare the models herein with other published magnitude exceedance-rates it is useful to restrict the analysis to a prescribed region (i.e. a subset of unit-sources that matches the region considered in other studies, Table 1). To do this, the modelled occurrence-rates for HS scenarios inside the prescribed region are summed to create a 'local' magnitude exceedance-rate curve. To account for the magnitude discretization, the local exceedance-rates are computed directly at magnitude bin-boundaries (i.e. $M_w \ge 7.15, 7.25, \ldots, 9.75$), and linear interpolation of $M_w$ vs $\log_{10}(\text{exceedance-rate})$ is used to evaluate the 'local' exceedance-rate at other magnitudes. When HS scenarios have only part of their integrated slip in the prescribed region, they are included with a proportionately down-weighted occurrence-rate. Credible intervals are derived by partitioning the full source-zone's percentile uncertainties, using the same approach applied to the mean curve.
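The log-linear interpolation step can be sketched as follows (the rates here are illustrative, not the study's values):

```python
import numpy as np

def local_exceedance_rate(mw, bin_edges, edge_rates):
    """Interpolate a 'local' exceedance-rate curve at arbitrary magnitudes,
    linearly in Mw vs log10(exceedance-rate), from rates computed at the
    magnitude bin-boundaries (7.15, 7.25, ..., 9.75)."""
    return 10.0 ** np.interp(mw, bin_edges, np.log10(edge_rates))

edges = np.arange(7.15, 9.76, 0.1)
rates = 10.0 ** (2.0 - edges)       # illustrative GR-like bin-boundary rates
r = float(local_exceedance_rate(7.2, edges, rates))
```

Because the interpolation is linear in log-rate, a pure Gutenberg-Richter curve is reproduced exactly between bin-boundaries.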
The constant and depth-varying $\mu$ models usually give similar magnitude exceedance-rates on subsets of subduction zones (Table 1). Differences reflect the source-zone's maximum depth below the trench, as specified using the upper estimates of Berryman et al. (2015). On source-zones with relatively deep seismicity (Alaska, Chile, Japan, Sumatra, with modelled depths extending 45–55 km below the trench) the constant rigidity model predicts slightly lower exceedance-rates, as was observed for global seismicity, due to the trade-off between higher rigidities and exceedance-rates under moment conservation constraints (Fig. 7). The converse applies for relatively shallow source-zones (Nankai and Cascadia, with modelled depths extending 25–30 km below the trench).
The modelled ARIs are generally comparable to the range of estimates in other studies using longer-term historical or paleo data, and/or alternative moment-conservation techniques (Table 1). The credible intervals are wide, but this is also true of 95% intervals reported elsewhere (Table 1; Rong et al. 2014; Butler et al. 2016). The uncertainty is further emphasised by comparing ARIs from different studies at the same site (Table 1). For example, on the relatively well studied Alaska megathrust (Kodiak to Prince William Sound), Wesson et al. (2007, 2008) drew on multi-site Holocene stratigraphy to infer an ARI of ~650 years for earthquakes like the 1964 event (which had $M_w \approx 9.2$), whereas Butler et al. (2016) estimated an $M_w \ge 9$ ARI of 1403 (447–9800) years using an expanded set of paleo data. Recently Shennan et al. (2018) inferred the 1964 earthquake was the only event in the last 2000 years to rupture the entire region, by combining paleoseismic results at 22 sites. Our model's results near Alaska are most similar to Butler et al. (2016), although the Wesson et al. (2007, 2008) ARIs are within the 95% interval (Table 1). Wesson et al. (2007, 2008) assumed $M_w \ge 9$ earthquakes never occur further west (Semidi and Shumagin segments), whereas our model does not represent such fine-scale variations in seismicity. Thus if the latter segments are included in the prescribed region, our modelled $M_w \ge 9$ ARIs reduce further (Table 1).
Table 1 also highlights the diversity of ARIs estimated from moment-conservation type arguments (including our model). For example, in the Tohoku region of Japan, Kagan and Jackson (2013) estimated that $M_w \ge 9$ events have ARIs of 300–400 years by constraining the parameters of a tapered Gutenberg-Richter type model with tectonic convergence rates and earthquake catalogue data. In the same region Butler et al. (2016) obtained an ARI of 1148 (490–3448) years using the 'regionally scaled global moment rate' (RSGR) method, whereas our method suggests ARIs intermediate between these two estimates (Table 1).
Systematic differences are expected between our model and the RSGR method used by Butler et al. (2016). The latter partitions global subduction seismicity among source-zones in proportion to their length and trench-normal convergence rate (Burbidge et al. 2008). It thus ignores variability in $M_{w,max}$, coupling, and the source-zone width, whereas these factors are included in our model, albeit with
Table 1
Magnitude exceedance average-recurrence-interval (ARI, units of years) for sub-regions of some subduction zones

| Region | Mw | Range along-strike (lon, lat) | ARI (constant μ) | ARI (depth-varying μ) | ARI (other) | Other category | Other comment |
|---|---|---|---|---|---|---|---|
| Alaska near 1964 earthquake | ≥9 | (−145.3, 59.15) to (−153.97, 55.26) | 1494 (670–11,978) | 1319 (594–8071) | 1403 (447–9804); B16 | P | |
| '' | ≥9 | '' | | | 1350 (593–3969); B16 | M | RSGR method |
| '' | 9? | (−144.49, 59.21) to (−153.97, 55.26) | 1399 (628–11,244) | 1230 (554–7499) | 650; W08 | P | Events similar to 1964 earthquake |
| Alaska extended to Shumagin | ≥9 | (−144.49, 59.21) to (−162.3, 53.21) | 770 (339–6916) | 707 (314–5074) | | | |
| Cascadia subduction zone | ≥9 | (−125.03, 40.64) to (−127.94, 49.61) | 1586 (542–∞) | 2174 (672–∞) | 1000 (285–5000); R14 | MPH | Generic b-value, their Fig. 5 |
| '' | ≥9 | '' | | | 1000 (222–∞); R14 | MPH | Source-zone b-value, their Fig. 5 |
| Chile near 1960 earthquake | ≥8 | (−75.86, −45.62) to (−74.57, −37.58) | 60 (35–138) | 59 (35–142) | 65 ± 41; B08 | H | |
| '' | ≥8.5 | '' | 172 (103–501) | 166 (98–505) | 128 ± 46; B08 | H | |
| '' | ≥8.6 | '' | 216 (131–677) | 206 (122–678) | 292 ± 93; M18 | P | Magnitude 8.6 is a lower bound |
| '' | ≥9 | '' | 597 (350–5889) | 532 (313–4593) | 873 (283–6803); B16 | P | |
| '' | ≥9 | '' | | | 950 (421–2778); B16 | M | RSGR method |
| '' | 9? | '' | | | 350; C17 | P | Mean time between 4 events which may be similar to 1960 earthquake |
| Japan near 2011 earthquake | ≥9 | (144.28, 40.09) to (142.28, 35.02) | 649 (366–1730) | 552 (315–1501) | 300–400; K13 | MH | |
| '' | ≥9 | '' | | | 1157 (405–7143); B16 | P | |
| '' | ≥9 | '' | | | 1148 (490–3438); B16 | M | RSGR method |
| Kamchatka near 1952 earthquake | ≥9 | (162, 52.48) to (157.23, 48.8) | 832 (465–2643) | 717 (409–2205) | 1203 (506–4950); B16 | P | |
| '' | ≥9 | '' | | | 1305 (554–3921); B16 | M | RSGR method |
| Nankai offshore of Japan | ≥8 | (138.19, 34.01) to (132.43, 30.81) | 130 (66–481) | 163 (87–577) | 124 ± 93; B08 | H | |
| '' | ≥8.6 | '' | 651 (268–∞) | 1085 (380–164,005) | 661; B08 | H | |
| Sumatra near 2004 earthquake | ≥9 | (96.12, 1.57) to (92.68, 14.45) | 758 (353–5054) | 728 (331–2910) | 1653 (617–9174); B16 | P | |
| '' | ≥9 | '' | | | 1470 (662–4254); B16 | M | RSGR method |
| '' | 9? | '' | | | 500; R13 | P | Mean time between 3 events with similar extent to 2004 earthquake |

ARIs for the constant $\mu$ and depth-varying $\mu$ models are compared with other published ARIs. The 'Other category' column indicates the published ARI methodology, involving paleo-data (P), long-term historical data (H), and/or moment-conservation arguments (M). ARIs in parentheses are 95% credible intervals. Where credible intervals were not provided, the uncertainty is reported following the original study (but is not a 95% credible interval). Magnitudes followed by '?' were inferred when no $M_w$ was provided; in these cases paleo events were identified as being 'similar' to some historical event, as described in the comment. Reference abbreviations are: B08 (Burbidge et al. 2008), B16 (Butler et al. 2016), C17 (Cisternas et al. 2017), K13 (Kagan and Jackson 2013), M18 (Moernaut et al. 2018), R13 (Rajendran 2013), R14 (Rong et al. 2014), W08 (Wesson et al. 2008).
Vol. 177, (2020) Sensitivity of Probabilistic Tsunami Hazard Assessment 1537
significant uncertainty (e.g. Fig. 3). Therefore, compared with our model the RSGR method should predict more frequent earthquakes on narrow source-zones with apparently low coupling (e.g. Marianas trench), and conversely on wide source-zones with apparently high coupling (e.g. South America). This seems consistent with predictions of each model near Chile, Japan, Kamchatka and Sumatra (Table 1). However, the relationship does not extend to the wide Alaskan megathrust, where both approaches give similar results (Table 1). This is because the Alaskan segment of our Alaska–Aleutian source-zone does not feature GCMT thrust earthquakes with Mw > 7.15, causing the Bayesian weight update to prefer moderately low coupling (prior mean c = 0.75; posterior mean c = 0.35). The effect is moderated by the 50% weight assigned to the unsegmented Alaska–Aleutians source, which exhibits more GCMT seismicity (prior mean c = 0.64; posterior mean c = 0.49). Nonetheless the lower overall coupling in our model effectively offsets the high width on the Alaskan megathrust, ultimately leading to results similar to the RSGR method.
Considering our model makes limited use of site-specific long-term data (to facilitate global application), our ARIs are not assumed to be more accurate than others in Table 1 at any particular site. However, the fact that our results are comparable to a range of other studies, many of which employ much longer-term observational data, gives some support to the method.
3.5. Maximum-Stage Exceedance-Rates at Hazard
Points
For each earthquake-tsunami scenario, the max-
imum-stage (i.e. maximum simulated water-level
above ambient sea level) is used to describe the
tsunami size at any given location. In practice the
tsunami model time-series are not stored at every grid
cell due to file storage limitations; instead they are
stored at a set of around 20,000 locations (termed
‘hazard points’) which are globally distributed but
have much higher density near Australia (Davies and
Griffin 2018). For each hazard point the tsunami
maximum-stage exceedance-rates are derived from
the earthquake magnitude-frequency models as
described below.
First consider a single unsegmented source-zone (or an individual segment of a segmented source-zone). If the earthquake-slip and rigidity models are specified, then each individual earthquake-tsunami scenario e on the source has an associated family of occurrence-rates r_{e,i}, which are derived by partitioning each logic-tree GR_i curve among all scenarios on the source (Eqs. 9, 12). Each GR_i and r_{e,i} also have an associated Bayesian posterior weight w_i (Sects. 3.2 and 3.3). For a given hazard point p and scenario e, denote the tsunami maximum-stage as g_{e,p}. Then for each GR_i there is an associated maximum-stage exceedance-rate curve at p, given by:

  λ_i^p(x) = Σ_e r_{e,i} I(g_{e,p} > x)
           = Σ_e [ GR_i(M_{w,e} − Δ/2) − GR_i(M_{w,e} + Δ/2) ] Pr(e | M_w = M_{w,e}) I(g_{e,p} > x)    (16)

where the second step simply expands r_{e,i} using Eq. 9 and is useful below. Here λ_i^p(x) gives the rate (average number of events per year) of earthquake-tsunamis with maximum-stage greater than x at point p, assuming that GR_i is correct. The indicator function I(·) is defined to be unity if its argument is true, and zero otherwise. Equation 16 leads to a family of maximum-stage exceedance-rate curves at each hazard point for each unsegmented source-zone (or individual segment). To summarise the results it is natural to define the logic-tree-mean maximum-stage exceedance-rate curve λ^p(x) (events/year) as:
  λ^p(x) = Σ_i w_i λ_i^p(x)
         = Σ_e [ GR(M_{w,e} − Δ/2) − GR(M_{w,e} + Δ/2) ] Pr(e | M_w = M_{w,e}) I(g_{e,p} > x)    (17)

where GR (without a subscript) is the posterior-mean magnitude exceedance-rate curve over all logic-tree branches. The second step follows from Eq. 16 and highlights that the individual λ_i^p(x) do not need to be calculated. This facilitates efficient computation, and is a benefit of the scenario conditional probability model being independent of i (Sect. 3.2). This logic-tree-mean maximum-stage exceedance-rate can be straightforwardly generalised to multiple source-zones by summation of their λ^p(x) values. On source-zones with segmentation, 50% weight is placed on the union-of-segments interpretation, and the remainder on the unsegmented interpretation (as was done for the magnitude exceedance-rates).
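The partition-and-sum in Eq. 17 can be sketched in a few lines. The following is a minimal illustration only (not the production code of this study); the function name, argument layout and the toy GR curve used below are our own assumptions:

```python
import numpy as np

def mean_exceedance_rate_curve(mw, gr_mean, delta, pr_e_given_mw, stage_e, x_values):
    """Logic-tree-mean maximum-stage exceedance-rate curve at one hazard
    point (cf. Eq. 17): partition the posterior-mean magnitude
    exceedance-rate curve among scenarios, then sum the rates of scenarios
    whose maximum-stage exceeds each threshold x."""
    # Rate of each scenario's magnitude bin [Mw - delta/2, Mw + delta/2],
    # partitioned among scenarios via the conditional probability Pr(e | Mw)
    r_e = (gr_mean(mw - delta / 2) - gr_mean(mw + delta / 2)) * pr_e_given_mw
    # Indicator I(g_{e,p} > x), summed against the scenario rates
    return np.array([r_e[stage_e > x].sum() for x in x_values])
```

Because the posterior-mean curve enters directly, no per-branch curves λ_i^p(x) are formed, mirroring the computational saving noted above.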
Percentile maximum-stage exceedance-rate curves are also useful to indicate the uncertainty in the tsunami hazard (Power et al. 2017). For example, consider the 84th percentile maximum-stage exceedance-rate at point p, denoted λ^{p,84}(x) with units (events/year), which is a function of the desired maximum-stage x. For the case of an unsegmented source-zone or single segment, λ^{p,84}(x) is defined as the smallest number such that:

  Σ_i w_i I( λ_i^p(x) ≤ λ^{p,84}(x) ) ≥ 84/100    (18)

For a given maximum-stage x, this implies at least 84% of the logic-tree weight is assigned to maximum-stage exceedance-rates ≤ λ^{p,84}(x). Eq. 18 directly generalises to other percentiles in the open interval (0, 100). Note a different definition was used in Davies and Griffin (2018) for computational expedience; in comparison Eq. 18 is more rigorous but relatively expensive to compute because all λ_i^p(x) are required. Although Eq. 18 is used herein, the impact on our percentile uncertainty calculations is small; for instance the 84th percentile results in Sect. 4.2 differ from those of Davies and Griffin (2018) by less than 5% at 90% of hazard points.
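The weighted-quantile definition in Eq. 18 can be sketched as follows. This is an illustrative implementation under our own naming conventions, not the study's code:

```python
import numpy as np

def percentile_exceedance_rate(rates_i, weights, percentile):
    """Percentile of the logic-tree exceedance-rate distribution at one
    maximum-stage x (cf. Eq. 18): the smallest rate such that at least
    `percentile`% of the logic-tree weight lies at rates <= that value.

    rates_i : exceedance-rate lambda_i^p(x) for each logic-tree branch i
    weights : Bayesian posterior weight w_i of each branch (summing to 1)
    """
    order = np.argsort(rates_i)                    # sort branches by rate
    cum_w = np.cumsum(np.asarray(weights)[order])  # cumulative weight
    # First sorted branch whose cumulative weight reaches the target
    k = np.searchsorted(cum_w, percentile / 100.0)
    return np.asarray(rates_i)[order][k]
```

The cost noted above is visible here: unlike Eq. 17, every branch curve λ_i^p(x) must be available before the percentile can be extracted.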
The generalisation of exceedance-rate percentiles
(Eq. 18) to multiple source-zones is more complex
than for the logic-tree-mean (Eq. 17) because it
depends on assumptions about the dependence of
uncertainties between source-zones; we defer full
discussion of this to Sect. 4.2. However note that for
a possibly segmented source-zone, the maximum-
stage exceedance-rate for a given percentile and
maximum-stage is computed by summing the results
of Eq. 18 on each individual segment (which pre-
vents cancellation of uncertainties due to
segmentation, and is consistent with the co-mono-
tonic treatment discussed in Sect. 4.2). Given a 50%
weight on both the ‘union-of-segments’ and ‘unsegmented’ interpretations, the combined distribution of
exceedance-rates for a given maximum-stage is a
50:50 mixture of the ‘union-of-segments’ and ‘un-
segmented’ exceedance-rate distributions. The latter
two distributions can be computed individually from
their percentiles; it is then straightforward to derive
the full mixture distribution and compute any desired
percentiles directly.
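The mixture computation described above can be sketched as follows, under the assumption (ours, for illustration) that each of the two exceedance-rate distributions is summarised by equally weighted rate samples, e.g. rates evaluated at a fine grid of percentiles for a fixed maximum-stage:

```python
import numpy as np

def mixture_percentile(rates_union, rates_unseg, percentile):
    """Percentile of a 50:50 mixture of the 'union-of-segments' and
    'unsegmented' exceedance-rate distributions, each represented by
    equally weighted samples."""
    pooled = np.concatenate([rates_union, rates_unseg])
    # Each distribution contributes half the total probability mass
    weights = np.concatenate([
        np.full(len(rates_union), 0.5 / len(rates_union)),
        np.full(len(rates_unseg), 0.5 / len(rates_unseg)),
    ])
    order = np.argsort(pooled)
    cum_w = np.cumsum(weights[order])
    # Smallest pooled rate whose cumulative mixture weight reaches the target
    k = np.searchsorted(cum_w, percentile / 100.0)
    return pooled[order][k]
```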
4. Results
4.1. Sensitivity of Offshore Hazard to the Chosen
Slip and Rigidity Model
The sensitivity analysis focusses on the ARI = 500
year tsunami maximum-stage at sites offshore of
Australia (Fig. 8a). To compute this, for every
source-zone the logic-tree-mean maximum-stage
exceedance-rate (Eq. 17) is calculated at 100 maxi-
mum-stage values logarithmically spaced from 0.02
to 20 m; then for each maximum-stage the excee-
dance-rates are summed over all source-zones, and
finally the ‘ARI = 500 year maximum-stage’ is
defined as the maximum-stage that results in this
summed exceedance-rate being 1/500, interpolating
as required. Irrespective of the chosen slip or rigidity model, wave shoaling over the continental shelf leads to strong shore-normal gradients in tsunami size (Fig. 8a). This complicates interpretation at the continental scale, and so for subsequent analysis the results are normalised to 100 m depth using Green's law (i.e. multiplied by (depth/100)^{1/4}). Normalisation greatly reduces the depth-dependence of the results and emphasises regional patterns in the offshore tsunami size (Fig. 8b).
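The ARI = 500 computation described above (sum the exceedance-rate curves over source-zones, invert at rate 1/500, then apply the Green's-law normalisation) can be sketched as follows. This is an illustration under our own naming, not the study's code, and it assumes strictly positive rates over the interpolation range:

```python
import numpy as np

def ari500_stage(stages, rates_by_source, depth, ari=500.0):
    """Maximum-stage with a given ARI at one hazard point, normalised to
    100 m depth via Green's law.

    stages          : increasing maximum-stage values (m), e.g. 100 values
                      log-spaced from 0.02 to 20
    rates_by_source : one exceedance-rate curve (events/year) per
                      source-zone, each evaluated at `stages`
    depth           : water depth (m) at the hazard point
    """
    total = np.sum(rates_by_source, axis=0)     # sum rates over source-zones
    # Rates decrease with stage, so reverse both arrays and interpolate in
    # log-log space at the target rate 1/ARI (requires positive rates)
    log_stage = np.interp(np.log(1.0 / ari),
                          np.log(total[::-1]), np.log(stages[::-1]))
    return np.exp(log_stage) * (depth / 100.0) ** 0.25  # Green's law factor
```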
For each combination of earthquake slip and rigidity model, the normalised ARI = 500 maximum-stage is depicted in Fig. 9. Comparison of models with constant μ (top row) and depth-varying μ (bottom row) indicates the choice of rigidity model has a minor effect on the results when the earthquake-slip model is fixed (Fig. 9). Switching between rigidity models typically leads to point-wise changes of a few percent, and while the difference varies from site to site it is always less than 10%.
The choice of earthquake slip model has a more substantial impact on the results (Fig. 9). The FAUS model produces smaller ARI = 500 tsunamis than both the VAUS and HS models (Fig. 9); for example the FAUS/VAUS ARI = 500 maximum-stage ratio is around 0.67 (median over all sites), with 90% of sites in (0.6–0.84), irrespective of the rigidity. Differences between the VAUS and HS models are much smaller (Fig. 9), with the VAUS/HS ARI = 500 maximum-stage ratio ≈ 0.91 (median over all sites), with 90% of sites in (0.85–0.96).

Figure 8
a Maximum-stage at ARI = 500 years, using the HS model with constant μ for illustration. b The same data normalised to a depth of 100 m using Green's law (i.e. multiplied by (depth/100)^{0.25}), to reduce the effect of depth variations

Figure 9
Maximum-stage at ARI = 500 years with Green's law normalisation to 100 m depth, for all slip and rigidity models
These differences reflect both the varying capac-
ity of the earthquake slip models to produce large
tsunamis, and our application of bias-adjustment via a
non-uniform partition of the parent FAUS scenario
occurrence-rates (Sect. 3.2). Recall bias-adjustment
was not applied to the FAUS model because its
scenarios have little variability by construction, so
even though they poorly represent some observed
tsunamis (Fig. 2b) no improvement can be obtained
by preferentially weighting some subset of scenarios.
Conversely, both the HS and VAUS models produce
more variable scenarios that are amenable to bias
adjustment. For the VAUS model this resulted in a
strong preference for compact, high-slip scenarios
(Fig. 2d), which tend to produce larger tsunamis than
uniform-slip scenarios with similar magnitude but
low or median slip. This is the key driver of
differences between the VAUS and FAUS hazard
results (Fig. 9). While the HS model was subject to a
much smaller bias-adjustment, the similarity of the
HS and VAUS results reflects that HS scenarios can
simulate slip asperities directly without recourse to
compact rupture area. Considering the VAUS model completely ignores earthquake slip heterogeneity, it is remarkable that the difference from a heterogeneous-slip model is only around 10% once a preference for compact ruptures is accounted for (Fig. 9).
4.2. Sensitivity of Offshore Hazard to Epistemic
Uncertainty in the Magnitude-Frequency
Distributions
Uncertainties in PTHA also result from the uncertain frequency of large-magnitude earthquakes (e.g. Fig. 3). It is important to understand the relative significance of this compared with the choice of slip and rigidity model, to help guide future improvements to PTHA methodologies (Sepúlveda et al. 2019). A complication arises because the site-specific hazard is often affected by multiple source-zones. Thus we must determine whether the uncertain earthquake frequencies on these source-zones are independent, or exhibit some kind of epistemic-uncertainty dependence. For example, the frequency of Mw > 9 earthquakes on the Kermadec–Tonga trench is highly uncertain due to poor constraints on Mw,max (Fig. 3); but if future research demonstrated that Mw,max > 9 on the Tonga segment, would this influence our belief that Mw,max > 9 elsewhere (e.g. on the nearby Kermadec segment, or other source-zones)? If the answer to such questions is always ‘no’ then the uncertainties are independent, and otherwise they are dependent.
Dependence does not affect the mean hazard
(because the mean of the sum of random variables is
always equal to the sum of their means), but does
affect percentile uncertainty calculations. If two
source-zones show positive dependence in the mag-
nitude-frequency epistemic-uncertainty, then the
hazard uncertainty will increase at coastal sites
affected by both. Although it is unclear how to best
specify inter-source-zone epistemic-uncertainty
dependence, in many situations independence seems
unlikely. Source-zone parameters such as Mw,max, coupling coefficients and GR b-values are often
hypothesised to be related to other physical properties
of the source-zone (e.g. McCaffrey 1997; Scholz and
Campos 2012; Nishikawa and Ide 2014; Bilek and
Lay 2018), and if correct such theories imply
epistemic-uncertainty dependence, because source-
zones with similar properties will deviate similarly
from the model (assuming those properties are not
already accounted for). For example it has been
hypothesised that subduction-zones with high down-
dip curvature are less likely to host large earthquakes
due to heterogeneities in shear strength (Bletery et al.
2016). This may or may not be correct, but if true it
suggests the relatively high-curvature Solomon, New
Hebrides, Kermadec-Tonga, Philippines, Marianas
and Scotia subduction zones may host high-magni-
tude earthquakes infrequently compared with our
model (which does not explicitly consider curvature).
This possibility implies some epistemic-uncertainty
dependence between these source-zones. A conflict-
ing hypothesis is that Mw,max is limited only by the source-zone size (McCaffrey 2008). This may or may
not be correct, but if correct implies the true
frequency of high magnitude earthquakes will be
high relative to the model on most source-zones (i.e.
wherever the model places significant weight on
smaller Mw,max values). The key point is that our
model may have structural errors. This suggests some
inter-source-zone epistemic-uncertainty dependence,
albeit difficult to specify.
To robustly account for the unknown epistemic-
uncertainty dependence structure, herein the multi-
source-zone maximum-stage exceedance-rate per-
centile curves are computed assuming every source-
zone simultaneously attains the same percentile
(Fig. 10). This is termed ‘co-monotonic’ dependence
(Deelstra et al. 2009). For instance the 16th-per-
centile panel in Fig. 10 is computed assuming that at
each hazard point, the 16th percentile maximum-
stage exceedance-rate curve is ‘true’ for all source-
zones simultaneously, so for any maximum-stage
these exceedance-rates may be summed to derive the
multi-source-zone exceedance-rate (and similarly for
the other percentiles). Co-monotonicity is widely
used to robustly model dependence in economic
theories of decision under risk and uncertainty
(Deelstra et al. 2009) and prevents uncertainties on
multiple source-zones from partially cancelling, as
would occur under independence. Because most
coastal sites are significantly affected by only a few
source-zones, the site-specific maximum-stage excee-
dance-rate uncertainties are not reliant on the global
correctness of the co-monotonic assumption, but
rather that it describes the dependence of locally
significant source-zones (Davies et al. 2017). The co-
monotonic approach bypasses the need to fully
describe this dependence structure, albeit at the
expense of some conservatism.
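The conservatism of the co-monotonic assumption can be illustrated with a toy Monte Carlo experiment (our construction, not a computation from this study): when the epistemic uncertainty in two source-zones' rates is represented by hypothetical lognormal samples, summing the per-source 84th percentiles (co-monotonic) exceeds the 84th percentile of the independently summed rates, because independence lets the uncertainties partially cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical epistemic uncertainty in the exceedance-rate contributed by
# two source-zones, represented by lognormal samples (illustrative only)
rates_a = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=100_000)
rates_b = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=100_000)

# 84th percentile of the combined rate under two dependence assumptions
independent = np.percentile(rates_a + rates_b, 84)  # uncertainties partly cancel
comonotonic = np.percentile(rates_a, 84) + np.percentile(rates_b, 84)
# Co-monotonicity prevents the cancellation, so its upper percentile is the
# larger (more conservative) of the two
```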
Compared with the choice of slip and rigidity model, epistemic uncertainty in the magnitude-frequency distributions leads to large uncertainty in the ARI = 500 maximum-stage (Fig. 10). Taking the HS model with constant μ for illustration, the 16th, 50th and 84th percentile values in Fig. 10 are respectively around 50%, 87% and 126% of the mean discussed earlier (top-right panel of Fig. 9). The latter ratios vary spatially, but mostly within ±10%. The dominance of magnitude-frequency related uncertainties in our model is qualitatively consistent with results of recent tsunami hazard assessments in the South China Sea (Li et al. 2016; Sepúlveda et al. 2019). They found the maximum-stage at ARIs of 100–1000 years changed by well over 100% due to the choice of magnitude-frequency model, as compared with changes of 20–60% due to the earthquake slip representation.

Figure 10
Percentile uncertainties in maximum-stage at ARI = 500 years with Green's law normalisation to 100 m depth, due to epistemic-uncertainty in magnitude-frequency distributions. Results use the HS model with constant μ

4.3. Global Scale Results

Although our study is focussed on Australia, some results were stored globally to facilitate model testing and interpretation (Fig. 11). At the global level the model suggests large waves are most likely around major subduction zones. The South America and Kurils–Japan subduction-zones are particularly prominent because they are wide, converging relatively rapidly, definitely have Mw,max > 9, and their GCMT earthquake history leads the model to favour reasonably high coupling (posterior mean c ≈ 0.77 in both cases). The model also predicts substantial
penetration of large waves into the central Pacific
Ocean (Fig. 11). Noting that both the 1946 Aleutian
and 1960 Chile earthquakes led to large far-field
tsunami runup at central Pacific sites such as Hawaii,
the Marquesas and Easter Island (NGDC 2018), this
result seems qualitatively reasonable. Compared with
the global results the hazard in Australia is moderate
overall, except on the northwest coast which is
directly exposed to tsunamis generated on the eastern
Sunda Arc (Fig. 11).
It is illuminating to assess the FAUS, VAUS and HS model return periods corresponding to larger offshore waves observed near Japan during the 2011 Tohoku tsunami. During this event two GPS gauges near Iwate in Japan recorded maximum-stage values exceeding 6 m. They were located about 12 km offshore in 200 m depth and separated by about 40 km (termed ‘Iwate M’ and ‘Iwate S’ in Satake et al. 2013; see the latter study for locations and observed time-series). Our nearest model point falls between these gauges but slightly further offshore (395 m depth). If the wave shoaling follows Green's law, then the corresponding maximum-stage is around 5.1 m at the model point, which has a modelled ARI of 970 years (HS), 1208 years (VAUS), and never occurs with the FAUS model (ARI = ∞). This further highlights the potential for FAUS scenarios to underestimate offshore wave-heights, and the tendency for comparable results to be obtained using either the (bias-adjusted) VAUS or HS scenarios.
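The Green's-law conversion quoted above is a one-line calculation (amplitude scales with depth^{-1/4}):

```python
# Converting the ~6 m observation at ~200 m depth to the equivalent
# maximum-stage at the model point in 395 m depth, via Green's law
stage_at_gauge = 6.0                     # m, observed near Iwate
depth_gauge, depth_model = 200.0, 395.0  # m
stage_at_model = stage_at_gauge * (depth_gauge / depth_model) ** 0.25
# stage_at_model is about 5.1 m, the value quoted in the text
```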
5. Conclusions
The framework in this paper facilitates a consis-
tent treatment of earthquake-scenario rates for PTHA
using different slip and rigidity models, while main-
taining reasonable consistency with the historical
earthquake record and tectonic constraints. It features
a number of improvements relative to previous
approaches for large-scale PTHA (e.g. Power et al.
2017; Davies et al. 2017):
• The edge-effect adjustment leads to a better match between modelled time-integrated slip rates and spatial variations in tectonic convergence.
• Earthquake catalogue data is more efficiently used to control the logic-tree weights, in a manner that also accounts for the choice of rigidity model.
• Non-uniform scenario weights are used to partially offset biases in the earthquake slip models.
This provides a suitable basis to study the sensitivity
of PTHA results to the choice of earthquake slip and
rigidity model.
Figure 11
Maximum-stage (m) at ARI = 500 years, normalised to 100 m depth using a Green's law factor ((depth/100)^{0.25}) to reduce the effect of depth variations. The HS model with constant μ was used
Within our framework the choice of rigidity
model has a small effect on the offshore hazard.
Although shallow low rigidity zones may permit
surprisingly large tsunamis for their magnitude, tec-
tonic constraints imply such events should occur less
often than higher-rigidity events with similar mag-
nitude and lower slip. A similar trade-off was
identified by Scala et al. (2019) who used a com-
pletely different ‘slip-amplification’ approach to
represent the effect of depth-varying rigidity on
earthquake slip. Our ‘magnitude-relabelling’
approach implies the rigidity model affects the hazard
by changing our interpretation of the integrated slip
released by historical earthquakes (which in turn
affects the logic-tree weights for the magnitude-fre-
quency distributions). Compared with slip-
amplification (Scala et al. 2019), the magnitude-re-
labelling approach has the practical advantage of not
requiring any re-computation of tsunami waveforms.
However, it is not obvious whether one or the other approach should be preferred on theoretical grounds,
or the extent to which their results differ. This should
be considered in future research.
Our results confirm that the representation of
earthquake slip has a significant effect on PTHA,
even in the far-field (Li et al. 2016). This is signifi-
cant because PTHAs often employ FAUS-like
uniform-slip scenarios with magnitude-dependent
length and width based on a scaling relation (e.g.
Løvholt et al. 2014; Roshan et al. 2016; Davies et al.
2017; Kalligeris et al. 2017). In our study some his-
torical tsunamis were poorly represented with the
FAUS approach, and it predicted significantly lower
tsunami hazard in Australia than the other approa-
ches. Thus we suggest the FAUS approach should not
be used, even in the far-field. While the HS and
VAUS scenarios showed better performance, there
was a clear tendency for good-fitting VAUS scenarios
to be ‘compact’ relative to the scaling-relation pre-
dictions. Once accounted for via bias-adjustment, the
VAUS and HS scenarios produced similar ARI=500
maximum-stage estimates for Australia. This sug-
gests both HS and VAUS can usefully represent
earthquake-generated tsunamis for PTHA. For some
hazard modelling applications the use of both sce-
nario types could be beneficial to represent epistemic
uncertainties in tsunami generation. In other cases the
fewer degrees-of-freedom of VAUS scenarios may be
exploited to reduce computational effort, as noted by
An et al. (2018) in the context of tsunami early
warning.
Although the representation of earthquake size
and slip is important for producing realistic tsunami
wave-forms, in this study the largest source of
uncertainty remains the earthquake magnitude-fre-
quency relations. Future work may reduce these
uncertainties by more efficiently using paleo-seismic
and long-term historical observations, coupled with
appropriate treatment of the data uncertainties (Grif-
fin et al. 2018). Refinements of the model’s structure
should simultaneously be considered (e.g. allowing for non-Poissonian event times, Geist 2014; Moernaut et al. 2018) and are likely to become more
important when additional data is used to constrain
the model. In addition, the development of more
refined representations of inter-source-zone epis-
temic-uncertainty dependence may allow the tsunami
hazard uncertainty to be reduced at some sites
affected by multiple source-zones, as compared with
the co-monotonic treatment applied herein.
Acknowledgements
This paper is published with the permission of the
CEO of Geoscience Australia. This project was
undertaken with the assistance of resources and
services from the National Computational Infrastruc-
ture (NCI), which is supported by the Australian
Government. Comments from Hadi Ghasemi and two
anonymous reviewers improved the paper. The
tsunami scenario database and source-code used to
construct it are freely available, and may be accessed
following instructions at https://github.com/
GeoscienceAustralia/ptha/tree/master/ptha_access.
Open Access This article is distributed under the terms of the
Creative Commons Attribution 4.0 International License (http://
creativecommons.org/licenses/by/4.0/), which permits unrestricted
use, distribution, and reproduction in any medium, provided you
give appropriate credit to the original author(s) and the source,
provide a link to the Creative Commons license, and indicate if
changes were made.
Publisher’s Note Springer Nature remains neutral
with regard to jurisdictional claims in published maps
and institutional affiliations.
REFERENCES
Allgeyer, S., & Cummins, P. (2014). Numerical tsunami simulation
including elastic loading and seawater density stratification.
Geophysical Research Letters,41(7), 2368–2375. https://doi.org/
10.1002/2014GL059348.
An, C., Liu, H., Ren, Z., & Yuan, Y. (2018). Prediction of tsunami
waves by uniform slip models. Journal of Geophysical Research:
Oceans.https://doi.org/10.1029/2018jc014363.
Annaka, T., Satake, K., Sakakiyama, T., Yanagisawa, K., & Shuto,
N. (2007). Logic-tree approach for probabilistic tsunami hazard
analysis and its applications to the Japanese Coasts. Pure and
Applied Geophysics,164, 577–592. https://doi.org/10.1007/
s00024-006-0174-3.
Baba, T., Allgeyer, S., Hossen, J., Cummins, P. R., Tsushima, H.,
Imai, K., et al. (2017). Accurate numerical simulation of the far-
field tsunami caused by the 2011 Tohoku earthquake, including
the effects of Boussinesq dispersion, seawater density stratification, elastic loading, and gravitational potential change. Ocean
Modelling,111, 46–54. https://doi.org/10.1016/j.ocemod.2017.
01.002.
Ben-Menahem, A., & Rosenman, M. (1972). Amplitude patterns of
tsunami waves from submarine earthquakes. Journal of Geo-
physical Research (1896–1977),77(17), 3097–3128. https://doi.
org/10.1029/JB077i017p03097.
Berryman, K., Wallace, L., Hayes, G., Bird, P., Wang, K., Basili,
R., Lay, T., Pagani, M., Stein, R., Sagiya, T., Rubin, C., Barrientos, S., Kreemer, C., Litchfield, N., Stirling, M., Gledhill, K.,
Haller, K., & Costa, C. (2015). The GEM Faulted Earth Sub-
duction Interface Characterisation Project: Version 2.0 - April
2015. Tech. rep., GEM.
Bilek, S. L., & Lay, T. (1999). Rigidity variations with depth along
interplate megathrust faults in subduction zones. Nature,400,
443–446.
Bilek, S.L., & Lay, T. (2018). Subduction zone megathrust earth-
quakes. Geosphere.https://doi.org/10.1130/GES01608.1
Bird, P. (2003). An updated digital model of plate boundaries.
Geochemistry Geophysics Geosystems,4(3), 1–52.
Bird, P., & Kagan, Y. Y. (2004). Plate-tectonic analysis of shallow
seismicity: Apparent boundary width, beta, corner magnitude,
coupled lithosphere thickness, and coupling. Bulletin of the
Seismological Society of America,94(6), 2380–2399.
Bird, P., & Liu, Z. (2007). Seismic hazard inferred from tectonics:
California. Seismological Research Letters,78(1), 37–48.
Bletery, Q., Thomas, A. M., Rempel, A. W., Karlstrom, L., Sladen,
A., & De Barros, L. (2016). Mega-earthquakes rupture flat
megathrusts. Science,354(6315), 1027–1031. https://doi.org/10.
1126/science.aag0482.
Bommer, J. J., & Scherbaum, F. (2008). The use and misuse of
logic trees in probabilistic seismic hazard analysis. Earthquake
Spectra,24(4), 997–1009. https://doi.org/10.1193/1.2977755.
Burbidge, D., Cummins, P., Mleczko, R., & Thio, H. (2008). A
probabilistic tsunami hazard assessment for Western Australia.
Pure and Applied Geophysics,165, 2059–2088. https://doi.org/
10.1007/s00024-008-0421-x.
Butler, R., Frazer, L. N., & Templeton, W. J. (2016). Bayesian probabilities for Mw 9.0+ earthquakes in the Aleutian Islands
from a regionally scaled global rate. Journal of Geophysical
Research: Solid Earth.https://doi.org/10.1002/2016JB012861
Butler, R., Walsh, D., & Richards, K. (2017). Extreme tsunami
inundation in Hawai‘i from Aleutian–Alaska subduction zone
earthquakes. Natural Hazards,85(3), 1591–1619. https://doi.org/
10.1007/s11069-016-2650-0.
Cisternas, M., Garrett, E., Wesson, R., Dura, T., & Ely, L. (2017).
Unusual geologic evidence of coeval seismic shaking and tsu-
namis shows variability in earthquake size and recurrence in the
area of the giant 1960 Chile earthquake. Marine Geology,385,
101–113. https://doi.org/10.1016/j.margeo.2016.12.007.
Davies, G. (2019). Tsunami variability from uncalibrated stochastic
earthquake models: Tests against deep ocean observations
2006–2016. Geophysical Journal International,218(3),
1939–1960. https://doi.org/10.1093/gji/ggz260.
Davies, G., & Griffin, J. (2018). The 2018 Australian probabilistic
tsunami hazard assessment: Hazards from earthquake generated
tsunamis. Tech. rep., Geoscience Australia Record 2018/41.
https://doi.org/10.11636/Record.2018.041
Davies, G., Horspool, N., & Miller, V. (2015). Tsunami inundation
from heterogeneous earthquake slip distributions: Evaluation of
synthetic source models. Journal of Geophysical Research: Solid
Earth,120(9), 6431–6451. https://doi.org/10.1002/2015JB012272.
Davies, G., Griffin, J., Løvholt, F., Glimsdal, S., Harbitz, C., Thio,
H. K., et al. (2017). A global probabilistic tsunami hazard
assessment from earthquake sources. Geological Society, Lon-
don, Special Publications,. https://doi.org/10.1144/sp456.5.
Deelstra, G., Dhaene, J., & Vanmaele, M. (2009). An overview of
comonotonicity and its applications in finance and insurance. In:
Advanced Mathematical Methods for Finance, Springer, New
York.
Dziewonski, A. M., & Anderson, D. L. (1981). Preliminary refer-
ence earth model. Physics of the Earth and Planetary Interiors,
25, 297–356.
Ekstrom, G., Nettles, M., & Dziewonski, A. (2012). The global
CMT project 2004–2010: Centroid-moment tensors for 13,017
earthquakes. Physics of the Earth and Planetary Interiors,
200–201, 1–9. https://doi.org/10.1016/j.pepi.2012.04.002.
Engdahl, E., & Villaseñor, A. (2002). Global seismicity:
1900–1999. International Handbook of Earthquake and Engi-
neering Seismology 81A.
Fritz, H. M., & Borrero, J. C. (2006). Somalia field survey after the
December 2004 Indian Ocean Tsunami. Earthquake Spectra,
22(S3), 219–233. https://doi.org/10.1193/1.2201972.
Fujii, Y., & Satake, K. (2013). Slip distribution and seismic
moment of the 2010 and 1960 Chilean Earthquakes inferred from
tsunami waveforms and coastal geodetic data. Pure and Applied
Geophysics,170, 1493–1509. https://doi.org/10.1007/s00024-
012-0524-2.
Fukutani, Y., Suppasri, A., & Imamura, F. (2018). Quantitative
assessment of epistemic uncertainties in tsunami hazard effects
on building risk assessments. Geosciences. https://doi.org/10.3390/geosciences8010017.
Garwood, F. (1936). Fiducial limits for the Poisson distribution.
Biometrika,28(3–4), 437–442. https://doi.org/10.1093/biomet/
28.3-4.437.
Vol. 177, (2020) Sensitivity of Probabilistic Tsunami Hazard Assessment 1545
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
Geist, E. (2002). Complex earthquake rupture and local tsunamis.
Journal of Geophysical Research.https://doi.org/10.1029/
2000JB000139.
Geist, E., & Bilek, S. (2001). Effect of depth-dependent shear
modulus on tsunami generation along subduction zones. Geo-
physical Research Letters,28(7), 1315–1318.
Geist, E., & Parsons, T. (2016). Reconstruction of far-field tsunami
amplitude distributions from earthquake sources. Pure and
Applied Geophysics. https://doi.org/10.1007/s00024-016-1288-x.
Geist, E. L. (2014). Explanation of temporal clustering of tsunami
sources using the epidemic-type aftershock sequence model.
Bulletin of the Seismological Society of America,104(4),
2091–2103. https://doi.org/10.1785/0120130275.
Gica, E., Teng, M. H., Liu, P. L. F., Titov, V., & Zhou, H. (2007).
Sensitivity analysis of source parameters for earthquake-gener-
ated distant tsunamis. Journal of Waterway, Port, Coastal, and
Ocean Engineering,133(6), 429–441. https://doi.org/10.1061/
(ASCE)0733-950X(2007)133:6(429).
Glimsdal, S., Pedersen, G., Harbitz, C., & Løvholt, F. (2013).
Dispersion of tsunamis: does it really matter? Natural Hazards
and Earth System Sciences,13, 1507–1526. https://doi.org/10.
5194/nhess-13-1507-2013.
Gonzalez, F. I., Geist, E. L., Jaffe, B., Kanoglu, U., Mofjeld, H.,
Synolakis, C. E., et al. (2009). Probabilistic tsunami hazard
assessment at Seaside, Oregon, for near- and far-field seismic
sources. Journal of Geophysical Research,114(C11023), 1–19.
https://doi.org/10.1029/2008JC005132.
Grezio, A., Marzocchi, W., Sandri, L., & Gasparini, P. (2010).
A Bayesian procedure for probabilistic tsunami hazard assess-
ment. Natural Hazards,53(1), 159–174. https://doi.org/10.1007/
s11069-009-9418-8.
Grezio, A., Babeyko, A., Baptista, M. A., Behrens, J., Costa, A.,
Davies, G., et al. (2017). Probabilistic tsunami hazard analysis:
Multiple sources and global applications. Reviews of Geophysics,
55(4), 1158–1198. https://doi.org/10.1002/2017RG000579.
Griffin, J., & Davies, G. (2018). Earthquake sources of the Aus-
tralian plate margin: Revised models for the 2018 national
tsunami and earthquake hazard assessments. Tech. rep., Geo-
science Australia Professional Opinion 2018/xx.
Griffin, J., Nguyen, N., Cummins, P., & Cipta, A. (2018). Historical
earthquakes of the Eastern Sunda Arc: Source mechanisms and
intensity-based testing of Indonesia’s National seismic hazard
assessment historical earthquakes of the Eastern Sunda Arc.
Bulletin of the Seismological Society of America,109(1), 43–65.
https://doi.org/10.1785/0120180085.
Hayes, G. P., & Furlong, K. P. (2010). Quantifying potential tsu-
nami hazard in the Puysegur subduction zone, south of New
Zealand. Geophysical Journal International,183, 1512–1524.
Hayes, G. P., Wald, D. J., & Johnson, R. L. (2012). Slab1.0: A
three-dimensional model of global subduction zone geometries.
Journal of Geophysical Research.https://doi.org/10.1029/
2011JB008524
Hayes, G. P., Moore, G. L., Portner, D. E., Hearne, M., Flamme,
H., Furtney, M., et al. (2018). Slab2, a comprehensive subduction
zone geometry model. Science. https://doi.org/10.1126/science.aat4723.
Hébert, H., Burg, P., Binet, R., Lavigne, F., Allgeyer, S., &
Schindelé, F. (2012). The 2006 July 17 Java (Indonesia) tsunami
from satellite imagery and numerical modelling: A single or
complex source? Geophysical Journal International,191(3),
1255–1271. https://doi.org/10.1111/j.1365-246X.2012.05666.x.
Ho, T. C., Satake, K., Watada, S., & Fujii, Y. (2019). Source
estimate for the 1960 Chile earthquake from joint inversion of
geodetic and transoceanic tsunami data. Journal of Geophysical
Research: Solid Earth,124(3), 2812–2828. https://doi.org/10.
1029/2018JB016996.
Horspool, N., Pranantyo, I., Griffin, J., Latief, H., Natawidjaja, D.
H., Kongko, W., et al. (2014). A probabilistic tsunami hazard
assessment for Indonesia. Natural Hazards and Earth System
Sciences,14, 3105–3122. https://doi.org/10.5194/nhessd-2-3423-
2014.
Kagan, Y. Y. (2002). Seismic moment distribution revisited: I.
Statistical results. Geophysical Journal International,148,
520–541.
Kagan, Y. Y., & Jackson, D. D. (2013). Tohoku earthquake: A
surprise? Bulletin of the Seismological Society of America,103,
1181–1194. https://doi.org/10.1785/0120120110.
Kajiura, K. (1963). The leading wave of a tsunami. Bulletin of the
Earthquake Research Institute,41, 535–571.
Kalligeris, N., Montoya, L., Ayca, A., & Lynett, P. (2017). An
approach for estimating the largest probable tsunami from far-
field subduction zone earthquakes. Natural Hazards,89, 233.
https://doi.org/10.1007/s11069-017-2961-9.
Kanamori, H. (1977). The energy release in great earthquakes.
Journal of Geophysical Research,82(20), 2981–2987.
Koulali, A., Tregoning, P., McClusky, S., Stanaway, R., Wallace,
L., & Lister, G. (2015). New Insights into the present-day
kinematics of the central and western Papua New Guinea from
GPS. Geophysical Journal International,202(2), 993–1004.
https://doi.org/10.1093/gji/ggv200.
Koulali, A., Susilo, S., McClusky, S., Meilano, I., Cummins, P.,
Tregoning, P., et al. (2016). Crustal strain partitioning and the
associated earthquake hazard in the eastern Sunda-Banda Arc.
Geophysical Research Letters,43(5), 1943–1949. https://doi.org/10.1002/2016GL067941.
Lane, E. M., Gillibrand, P. A., Wang, X., & Power, W. (2012). A
probabilistic tsunami hazard study of the Auckland Region, Part
II: Inundation modelling and hazard assessment. Pure and
Applied Geophysics,170(9–10), 1635–1646. https://doi.org/10.
1007/s00024-012-0538-9.
Lay, T. (2018). A review of the rupture characteristics of the 2011
Tohoku-oki Mw 9.1 earthquake. Tectonophysics,733, 4–36.
https://doi.org/10.1016/j.tecto.2017.09.022.
Li, L., Switzer, A. D., Chan, C. H., Wang, Y., Weiss, R., & Qiu, Q.
(2016). How heterogeneous coseismic slip affects regional
probabilistic tsunami hazard assessment: A case study in the
South China Sea. Journal of Geophysical Research: Solid Earth,
121(8), 6250–6272. https://doi.org/10.1002/2016JB013111.
Lorito, S., Piatanesi, A., & Lomax, A. (2008). Rupture process of
the 18 April 1906 California earthquake from near-field tsunami
waveform inversion. Bulletin of the Seismological Society of
America,98, 832–845. https://doi.org/10.1785/0120060412.
Lorito, S., Romano, F., & Lay, T. (2015a). Tsunamigenic major
and great earthquakes (2004–2013): Source processes inverted
from seismic, geodetic, and sea-level data. Encyclopedia of
Complexity and Systems Science. https://doi.org/10.1007/978-3-642-27737-5_641-1.
Lorito, S., Selva, J., Basili, R., Romano, F., Tiberti, M., & Piata-
nesi, A. (2015b). Probabilistic hazard for seismically induced
1546 G. Davies and J. Griffin Pure Appl. Geophys.
tsunamis: Accuracy and feasibility of inundation maps. Geo-
physical Journal International,200, 574–588. https://doi.org/10.
1093/gji/ggu408.
Løvholt, F., Glimsdal, S., Harbitz, C., Horspool, N., Smebye, H., de
Bono, A., et al. (2014). Global tsunami hazard and exposure due to
large co-seismic slip. International Journal of Disaster Risk
Reduction,10, 406–418. https://doi.org/10.1016/j.ijdrr.2014.04.003.
McCaffrey, R. (1997). Influences of recurrence times and fault
zone temperatures on the age-rate dependence of subduction
zone seismicity. Journal of Geophysical Research: Solid Earth,
102(B10), 22839–22854. https://doi.org/10.1029/97JB01827.
McCaffrey, R. (2008). Global frequency of magnitude 9 earthquakes.
Geology,36(3), 263–266. https://doi.org/10.1130/G24402A.1.
Meade, B. J., & Loveless, J. P. (2009). Block modeling with
connected fault-network geometries and a linear elastic coupling
estimator in spherical coordinates. Bulletin of the Seismological
Society of America,99(6), 3124–3139. https://doi.org/10.1785/
0120090088.
Moernaut, J., Van Daele, M., Fontijn, K., Heirman, K., Kempf, P.,
Pino, M., et al. (2018). Larger earthquakes recur more periodi-
cally: New insights in the megathrust earthquake cycle from
lacustrine turbidite records in south-central Chile. Earth and
Planetary Science Letters,481, 9–19. https://doi.org/10.1016/j.
epsl.2017.10.016.
Moreno, M., Bolte, J., Klotz, J., & Melnick, D. (2009). Impact of
megathrust geometry on inversion of coseismic slip from
geodetic data: Application to the 1960 Chile earthquake. Geo-
physical Research Letters,36, L16310. https://doi.org/10.1029/
2009GL039276.
Mori, N., Mai, P. M., Goda, K., & Yasuda, T. (2017). Tsunami
inundation variability from stochastic rupture scenarios: Appli-
cation to multiple inversions of the 2011 Tohoku, Japan
earthquake. Coastal Engineering,127, 88–105. https://doi.org/
10.1016/j.coastaleng.2017.06.013.
Mueller, C., Power, W., Fraser, S., & Wang, X. (2015). Effects of
rupture complexity on local tsunami inundation: Implications for
probabilistic tsunami hazard assessment by example. Journal of
Geophysical Research (Solid Earth),120, 488–502. https://doi.
org/10.1002/2014JB011301.
Newman, A. V., Feng, L., Fritz, H. M., Lifton, Z. M., Kalligeris, N.,
& Wei, Y. (2011a). The energetic 2010 Mw 7.1 Solomon Islands
tsunami earthquake. Geophysical Journal International,186,
775–781. https://doi.org/10.1111/j.1365-246X.2011.05057.x.
Newman, A. V., Hayes, G., Wei, Y., & Convers, J. (2011b). The 25
October 2010 Mentawai tsunami earthquake, from real-time
discriminants, finite-fault rupture, and tsunami excitation. Geo-
physical Research Letters,38(L05302), 1–7. https://doi.org/10.
1029/2010GL046498.
NGDC (2018). National Geophysical Data Center/World Data
Service Global Historical Tsunami Database. https://doi.org/10.7289/V5PN93H7.
https://www.ngdc.noaa.gov/hazard/tsu_db.shtml.
Last accessed 24 Sept 2015.
Nishikawa, T., & Ide, S. (2014). Earthquake size distribution in
subduction zones linked to slab buoyancy. Nature Geoscience,7,
904–908. https://doi.org/10.1038/ngeo2279.
Okada, Y. (1985). Surface deformation due to shear and tensile
faults in a half-space. Bulletin of the Seismological Society of
America,75(4), 1135–1154.
Okal, E. A. (2011). Tsunamigenic earthquakes: Past and present
milestones. Pure and Applied Geophysics,168, 969–995. https://
doi.org/10.1007/s00024-010-0215-9.
Okal, E. A., & Synolakis, C. E. (2008). Far-field tsunami hazard
from mega-thrust earthquakes in the Indian Ocean. Geophysical
Journal International,172, 995–1015. https://doi.org/10.1111/j.
1365-246X.2007.03674.x.
Okal, E. A., Synolakis, C. E., Fryer, G. J., Heinrich, P., Borrero, J.
C., Ruscher, C., et al. (2002). A field survey of the 1946 Aleutian
tsunami in the far field. Seismological Research Letters,73(4),
490–503. https://doi.org/10.1785/gssrl.73.4.490.
Parsons, T., & Geist, E. L. (2009). Tsunami probability in the
Caribbean Region. Pure and Applied Geophysics,165,
2089–2116. https://doi.org/10.1007/s00024-008-0416-7.
Poisson, B., Oliveros, C., & Pedreros, R. (2011). Is there a best
source model of the Sumatra 2004 earthquake for simulating the
consecutive tsunami? Geophysical Journal International,185,
1365–1378. https://doi.org/10.1111/j.1365-246X.2011.05009.x.
Power, W., Wang, X., Wallace, L., Clark, K., & Mueller, C. (2017).
The New Zealand Probabilistic Tsunami Hazard Model: devel-
opment and implementation of a methodology for estimating
tsunami hazard nationwide. Geological Society, London, Special
Publications.https://doi.org/10.1144/SP456.6.
Prendergast, A., & Brown, N. (2012). Far-field impact and coastal
sedimentation associated with the 2006 Java tsunami in West
Australia. Natural Hazards,60, 69–79. https://doi.org/10.1007/
s11069-011-9953-y.
Rajendran, K. (2013). On the recurrence of great subduction zone
earthquakes. Current Science,104(7), 880–892.
Romano, F., Piatanesi, A., Lorito, S., Tolomei, C., Atzori, S., &
Murphy, S. (2016). Optimal time alignment of tide-gauge tsu-
nami waveforms in nonlinear inversions: Application to the 2015
Illapel (Chile) earthquake. Geophysical Research Letters,43(21),
11,226–11,235. https://doi.org/10.1002/2016GL071310.
Rong, Y., Jackson, D. D., Magistrale, H., & Goldfinger, C. (2014).
Magnitude limits of subduction zone earthquakes. Bulletin of the
Seismological Society of America,104(5), 2359–2377. https://
doi.org/10.1785/0120130287.
Roshan, A. D., Basu, P. C., & Jangid, R. S. (2016). Tsunami hazard
assessment of Indian coast. Natural Hazards,82(2), 733–762.
https://doi.org/10.1007/s11069-016-2216-1.
Ruiz, J. A., Fuentes, M., Riquelme, S., Campos, J., & Cisternas, A.
(2015). Numerical simulation of tsunami runup in northern Chile
based on non-uniform k⁻² slip distributions. Natural Hazards.
Satake, K., Fujii, Y., Harada, T., & Namegaya, Y. (2013). Time
and space distribution of coseismic slip of the 2011 Tohoku
Earthquake as inferred from tsunami waveform data. Bulletin of
the Seismological Society of America,103, 1473–1492. https://
doi.org/10.1785/0120120122.
Scala, A., Lorito, S., Romano, F., Murphy, S., Selva, J., Basili, R.,
et al. (2019). Effect of shallow slip amplification uncertainty on
probabilistic tsunami hazard analysis in subduction zones: Use of
long-term balanced stochastic slip models. Pure and Applied
Geophysics. https://doi.org/10.1007/s00024-019-02260-x.
Scholz, C.H., & Campos, J. (2012). The seismic coupling of sub-
duction zones revisited. Journal of Geophysical Research.
https://doi.org/10.1029/2011JB009003
Selva, J., Tonini, R., Molinari, I., Tiberti, M., Romano, F., Grezio,
A., et al. (2016). Quantification of source uncertainties in Seis-
mic Probabilistic Tsunami Hazard Analysis (SPTHA).
Geophysical Journal International,205, 1780–1803. https://doi.
org/10.1093/gji/ggw107.
Sepúlveda, I., Liu, P. L. F., & Grigoriu, M. (2019). Probabilistic
Tsunami Hazard Assessment in South China Sea with consid-
eration of uncertain earthquake characteristics. Journal of
Geophysical Research: Solid Earth. https://doi.org/10.1029/
2018JB016620.
Shennan, I., Brader, M. D., Barlow, N. L., Davies, F. P., Longley,
C., & Tunstall, N. (2018). Late Holocene paleoseismology of
Shuyak Island, Alaska. Quaternary Science Reviews,201,
380–395. https://doi.org/10.1016/j.quascirev.2018.10.028.
Stirling, M., & Gerstenberger, M. (2018). Applicability of the
Gutenberg–Richter relation for major active faults in New
Zealand. Bulletin of the Seismological Society of America,
108(2), 718–728. https://doi.org/10.1785/0120160257.
Storchak, D., Giacomo, D. D., Bondar, I., Harris, J., Engdahl, E.,
Lee, W., Villasenor, A., Bormann, P., & Ferrari, G. (2012). ISC-
GEM global instrumental earthquake catalogue (1900–2009):
GEM Technical Report 2012-01. Tech. rep., GEM. https://doi.
org/10.13117/GEM.GEGD.TR2012.01.
Strasser, F., Arango, M., & Bommer, J. J. (2010). Scaling of the
source dimensions of interface and intraslab subduction-zone
earthquakes with moment magnitude. Seismological Research
Letters,81(6), 941–950. https://doi.org/10.1785/gssrl.81.6.941.
Thio, H. K., Somerville, P., & Ichinose, G. (2007). Probabilistic
analysis of strong ground motion and tsunami hazards in
Southeast Asia. In: Proceedings from 2007 NUS-TMSI Work-
shop, National University of Singapore.
Volpe, M., Lorito, S., Selva, J., Tonini, R., Romano, F., & Brizuela,
B. (2019). From regional to local SPTHA: Efficient computation
of probabilistic tsunami inundation maps addressing near-field
sources. Natural Hazards and Earth System Sciences,19(3),
455–469. https://doi.org/10.5194/nhess-19-455-2019.
Watada, S., Kusumoto, S., & Satake, K. (2014). Traveltime delay
and initial phase reversal of distant tsunamis coupled with the
self-gravitating elastic earth. Journal of Geophysical Research:
Solid Earth,119(5), 4287–4310. https://doi.org/10.1002/
2013jb010841.
Weatherall, P., Marks, K. M., Jakobsson, M., Schmitt, T., Tani, S.,
Arndt, J. E., et al. (2015). A new digital bathymetric model of the
world’s oceans. Earth and Space Science,2(8), 331–345. https://
doi.org/10.1002/2015EA000107.
Wesson, R. L., Boyd, O. S., Mueller, C. S., Bufe, C. G., Frankel, A.
D., & Petersen, M. D. (2007). Revision of Time-Independent
Probabilistic Seismic Hazard Maps for Alaska, Open-File Report
2007–1043. United States Geological Survey: Tech. rep.
Wesson, R. L., Boyd, O. S., Mueller, C. S., & Frankel, A. D.
(2008). Challenges in making a seismic hazard map for Alaska
and the Aleutians. In: Freymueller, J. (ed), Active Tectonics and
Seismic Potential of Alaska, American Geophysical Union.
https://doi.org/10.1029/179GM22
Whiteway, T. (2009). Australian Bathymetry and Topography
Grid, June 2009. Tech. rep., Geoscience Australia Record
2009/21.
Zöller, G. (2013). Convergence of the frequency-magnitude dis-
tribution of global earthquakes: Maybe in 200 years. Geophysical
Research Letters,40(15), 3873–3877. https://doi.org/10.1002/grl.
50779.
Zöller, G. (2017). Comment on "Estimation of Earthquake Hazard
Parameters from Incomplete Data Files. Part III. Incorporation of
Uncertainty of Earthquake-Occurrence Model" by Andrzej
Kijko, Ansie Smit, and Markvard A. Sellevoll. Bulletin of the
Seismological Society of America, 107(4), 1975. https://doi.org/
10.1785/0120160193.
(Received June 5, 2019, revised July 31, 2019, accepted August 3, 2019, Published online August 9, 2019)
... Assume the offshore PTHA represents hazard uncertainties via multiple scenario-frequency models i ∈ I, where I is the set of all alternative scenario-frequency models. For example, grey curves in Figure 1C show alternative scenario-frequency models i ∈ I for one source-zone (Puysegur) used by Davies & Griffin (2020). These were assigned probabilities ω_i and converted into occurrence-rates for every scenario, with an approach that promotes consistency with earthquake catalogues and spatially variable tectonic convergence rates (details in Davies & Griffin, 2020). ...
... For example, grey curves in Figure 1C show alternative scenario-frequency models i ∈ I for one source-zone (Puysegur) used by Davies & Griffin (2020). These were assigned probabilities ω_i and converted into occurrence-rates for every scenario, with an approach that promotes consistency with earthquake catalogues and spatially variable tectonic convergence rates (details in Davies & Griffin, 2020). For each scenario-frequency model the hazard can be quantified with exceedance-rate curves λ_i: λ_i(Q > Q_T) = Σ_{e∈E} r_i(e) · 1_{(Q(e) > Q_T)}  (1) ...
... The exceedance-rate uncertainty (i.e. variation with i ∈ I) may be summarised using the mean and percentiles as in Figure 1D (see also Power et al., 2017; Davies & Griffin, 2020; Basili et al., 2021). The 'all scenarios' approach is common for offshore PTHA, but is rarely practical for onshore hazard assessment because it requires too many inundation simulations. ...
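Eq. (1) above is just a weighted sum over scenarios. The following minimal sketch (with synthetic scenario sizes and rates, not data from any actual PTHA) shows how an exceedance-rate curve for one scenario-frequency model could be computed:

```python
import numpy as np

# Sketch of Eq. (1) with hypothetical scenario data:
#   lambda_i(Q > Q_T) = sum over scenarios e of r_i(e) * 1[Q(e) > Q_T]
rng = np.random.default_rng(0)
n_scenarios = 1000
q = rng.lognormal(mean=-1.0, sigma=1.0, size=n_scenarios)  # Q(e): modelled tsunami maxima (m)
r = rng.uniform(1e-6, 1e-4, size=n_scenarios)              # r_i(e): scenario rates (events/year)

def exceedance_rate(q, r, thresholds):
    """Exceedance rate lambda_i(Q > Q_T) for each threshold Q_T."""
    return np.array([r[q > qt].sum() for qt in thresholds])

thresholds = np.array([0.1, 0.5, 1.0, 2.0])
lam = exceedance_rate(q, r, thresholds)
# Rates can only fall (or stay flat) as the threshold increases.
assert np.all(np.diff(lam) <= 0) and lam[0] > 0
```

Summarising the uncertainty as in Figure 1D then amounts to repeating this for every model i ∈ I and taking the ω_i-weighted mean and percentiles of the resulting curves at each threshold.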
Offshore probabilistic tsunami hazard assessments (PTHAs) are increasingly available for earthquake generated tsunamis. They provide standardized representations of tsunami scenarios, their uncertain occurrence-rates, and models of the deep ocean waveforms. To quantify onshore hazards it is natural to combine this information with a site-specific inundation model, but this is computationally challenging to do accurately, especially if accounting for uncertainties in the offshore PTHA. This study reviews an efficient Monte Carlo method recently proposed to solve this problem. The efficiency comes from preferential sampling of scenarios that are likely important near the site of interest, using a user-defined importance measure derived from the offshore PTHA. The theory of importance sampling enables this to be done without biasing the final results. Techniques are presented to help design and test Monte Carlo schemes for a site of interest (before inundation modelling) and to quantify errors in the final results (after inundation modelling). The methods are illustrated with examples from studies in Tongatapu and Western Australia.
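The importance-sampling idea reviewed above can be illustrated with a toy example. This sketch makes assumptions of its own (a synthetic scenario set, with the importance measure taken simply as the offshore wave size); it is not the study's implementation. The key point is that sampling scenario e with probability p(e) and weighting it by r(e) / (N · p(e)) leaves the exceedance-rate estimate unbiased:

```python
import numpy as np

# Toy importance-sampling estimate of an exceedance rate (synthetic data).
rng = np.random.default_rng(1)
n_all = 20000
q = rng.lognormal(-1.0, 1.0, n_all)      # offshore tsunami size Q(e) per scenario
r = rng.uniform(1e-6, 1e-4, n_all)       # occurrence-rate r(e) per scenario

p = q / q.sum()                          # favour scenarios with large offshore waves
n_mc = 2000
idx = rng.choice(n_all, size=n_mc, p=p)  # random subset (with replacement)

q_t = 1.5
exact = r[q > q_t].sum()                                   # full-sum exceedance rate
est = np.sum((r[idx] / (n_mc * p[idx])) * (q[idx] > q_t))  # importance-sampled estimate
assert abs(est - exact) / exact < 0.2    # close, despite using 10x fewer scenarios
```

Sampling uniformly instead (p(e) = 1/n_all) would spend most of the 2000 "simulations" on scenarios too small to exceed the threshold, which is why a wave-size-based importance measure is most helpful for rare, large tsunamis.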
... Recent offshore PTHAs represent hypothetical earthquake-tsunamis with a large set of scenarios E, containing on the order of 10^5–10^7 individual scenarios e ∈ E (Davies & Griffin 2020; Basili et al. 2021; Tonini et al. 2021). For each scenario the tsunami is simulated using computationally cheap models that are accurate in deep-water far from the coast, but inaccurate near the coast due to coarse resolution and neglect of non-linearity (e.g. ...
... Scenario-frequencies are represented using a family of models I which collectively represent epistemic uncertainties in tsunami occurrence-rates, with each model i ∈ I assigning a long-term average rate r_i(e) (events/year) to every scenario (e.g. Power et al. 2012; Davies & Griffin 2020; Basili et al. 2021). For any quantity of interest Q, such as the tsunami maxima at a particular site, this leads to a family of exceedance-rate curves describing the hazard and uncertainties. ...
... 2008; Romano et al. 2021) and palaeotsunami deposits are suggestive of older events (Power et al. 2012; Lamarche et al. 2015; Goff et al. 2020). However, Tongatapu is also exposed to other tsunami sources including far-field earthquakes (Davies & Griffin 2020), outer-rise earthquakes (Lay et al. 2010), more complex local earthquakes (Okal et al. 2011), landslides and volcanoes (Frohlich et al. 2009; Goff 2011; Lavigne et al. 2021; Duncombe 2022). These are not treated herein and so our hazard results are incomplete, albeit sufficient for our primary purpose of illustrating Monte Carlo techniques for offshore-to-onshore PTHA. ...
Offshore Probabilistic Tsunami Hazard Assessments (offshore PTHAs) provide large-scale analyses of earthquake-tsunami frequencies and uncertainties in the deep ocean, but do not provide high-resolution onshore tsunami hazard information as required for many risk-management applications. To understand the implications of an offshore PTHA for the onshore hazard at any site, in principle the tsunami inundation should be simulated locally for every earthquake scenario in the offshore PTHA. In practice this is rarely feasible due to the computational expense of inundation models, and the large number of scenarios in offshore PTHAs. Monte-Carlo methods offer a practical and rigorous alternative for approximating the onshore hazard, using a random subset of scenarios. The resulting Monte-Carlo errors can be quantified and controlled, enabling high-resolution onshore PTHAs to be implemented at a fraction of the computational cost. This study develops efficient Monte-Carlo approaches for offshore-to-onshore PTHA. Modelled offshore PTHA wave heights are used to preferentially sample scenarios that have large offshore waves near an onshore site of interest. By appropriately weighting the scenarios, the Monte-Carlo errors are reduced without introducing bias. The techniques are demonstrated in a high-resolution onshore PTHA for the island of Tongatapu in Tonga, using the 2018 Australian Probabilistic Tsunami Hazard Assessment as the offshore PTHA, while considering only thrust earthquake sources on the Kermadec-Tonga trench. The efficiency improvements are equivalent to using 4-18 times more random scenarios, as compared with stratified-sampling by magnitude, which is commonly used for onshore PTHA. The greatest efficiency improvements are for rare, large tsunamis, and for calculations that represent epistemic uncertainties in the tsunami hazard. 
To facilitate the control of Monte-Carlo errors in practical applications, this study also provides analytical techniques for estimating the errors both before and after inundation simulations are conducted. Before inundation simulation, this enables a proposed Monte-Carlo sampling scheme to be checked, and potentially improved, at minimal computational cost. After inundation simulation, it enables the remaining Monte-Carlo errors to be quantified at onshore sites, without additional inundation simulations. In combination these techniques enable offshore PTHAs to be rigorously transformed into onshore PTHAs, with quantification of epistemic uncertainties, while controlling Monte-Carlo errors.
... The slip on the Walanae/Selayar posterior is slightly smaller, with a maximum probability estimate close to 8 m rather than 10 m, and a slightly less positive bias towards larger slip. This tendency towards an unexpectedly large slip was noted in Ringer et al. (2021) for the 1852 Banda Sea earthquake in Eastern Indonesia where the Bayesian technique used here was first introduced, and is likely a by-product of using a uniform homogeneous slip distribution (Geist 2002; Davies 2019; Melgar et al. 2019; Davies & Griffin 2020). Future studies will consider the potential discretization effects and selection of hyperparameters including the potential non-uniformity of the slip distribution in the forward model that could lead to a preference for smaller rectangular area, large slip ruptures. ...
Using a Bayesian approach we compare anecdotal tsunami runup observations from the 29 December 1820 Flores Sea earthquake with close to 200,000 tsunami simulations to determine the most probable earthquake parameters causing the tsunami. Using a dual hypothesis of the source earthquake either originating from the Flores Thrust or the Walanae/Selayar Fault, we found that neither source perfectly matches the observational data, particularly while satisfying seismic constraints of the region. Instead both posteriors have shifted to the edge of the prior indicating that the actual earthquake may have run along both faults.
... We carried out tsunami modeling by identifying gaps in the distribution of seismicity (Fig. 2a), which were then used to define two possible rupture segments, each with a maximum moment magnitude of Mw 8.9 (Fig. 2d), which is consistent with a return period of ~400 years (Okal 2012; Harris et al. 2019; Widiyantoro et al. 2020). The western segment has a trench-parallel extent of 325 km, width of 120 km, and a homogeneous slip of 24 m, while the eastern segment is 442 km long and 109 km wide, with a homogeneous slip of 20 m (Fig. S6); in both cases, we assume a shear rigidity of 30 GPa (Davies and Griffin 2020). We also take into account the possible backthrust fault in the south of West Java based on the distribution of seismicity (Fig. 3) and a previous study (Sirait et al. 2020), which may amplify the potential tsunami height along the coast (Heidarzadeh 2011). ...
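The quoted segment dimensions, slips and 30 GPa rigidity can be cross-checked against the stated Mw 8.9 using the standard moment-magnitude relation (a back-of-envelope check of our own, not code from the cited study):

```python
import math

# Mw = (2/3) * (log10(M0) - 9.1), with M0 = rigidity * area * slip in N*m.
def moment_magnitude(length_m, width_m, slip_m, rigidity_pa=30e9):
    m0 = rigidity_pa * length_m * width_m * slip_m  # seismic moment (N*m)
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

mw_west = moment_magnitude(325e3, 120e3, 24.0)  # western segment
mw_east = moment_magnitude(442e3, 109e3, 20.0)  # eastern segment
assert round(mw_west, 1) == round(mw_east, 1) == 8.9
```

Both segments indeed come out at Mw ≈ 8.9, consistent with the values quoted in the text.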
High seismicity rates in and around West Java and Sumatra occur as a result of the Indo-Australian plate converging with and subducting beneath the Sunda plate. Large megathrust events associated with this process likely pose a major earthquake and tsunami hazard to the surrounding community, but further effort is required to help understand both the likelihood and frequency of such events. With this in mind, we exploit catalog seismic data sourced from the Agency for Meteorology, Climatology, and Geophysics (BMKG) of Indonesia and the International Seismological Centre (ISC) for the period April 2009 through to July 2020, in order to conduct earthquake hypocenter relocation using a teleseismic double-difference method. Our results reveal a large seismic gap to the south of West Java and southeast Sumatra, which is in agreement with a previous GPS study that finds the region to be a potential future source of megathrust earthquakes. To investigate this further, tsunami modeling was conducted in the region for two scenarios based on the estimated seismicity gaps and the existence of a backthrust fault. We show that the maximum tsunami height could be up to 34 m along the west coast of southernmost Sumatra and along the south coast of Java near the Ujung Kulon Peninsula. This estimate is comparable with the maximum tsunami height predicted by a previous study of southern Java in which earthquake sources were derived from the inversion of GPS data. However, the present study extends the analysis to southeast Sumatra and demonstrates that estimating rupture from seismic gaps can lead to reliable tsunami hazard assessment in the absence of GPS data.
... Uniform slip models may be able to predict the tsunami fairly well, as shown in this and other previous studies (e.g., An et al., 2018; Greenslade et al., 2011). However, earthquake source complexity plays an important role in the uncertainty associated with near-field tsunami forecasts, especially for great earthquakes with magnitudes larger than Mw 8.0, as indicated in previous studies (e.g., Davies & Griffin, 2020; Melgar et al., 2019; Mueller et al., 2021; Williamson et al., 2019). They found that homogeneous slip models have frequently underestimated the peak tsunami amplitudes and the resulting tsunami hazard. ...
Article
Full-text available
A tsunamigenic earthquake with a thrust faulting mechanism occurred southeast of the Loyalty Islands, New Caledonia, in the Southern Vanuatu subduction zone on 10 February 2021. The tsunami was observed at coastal gauges in the surrounding islands and in New Zealand. The tsunami was also recorded at a new DART network designed to enhance the tsunami forecasting capability of the Southwestern Pacific. We used the tsunami waveforms in an inversion to estimate the fault slip distribution. The estimated major slip region is located near the trench, with a maximum slip of 4 m. This source model, with an assumed rupture velocity of 1.0 km/s, can reproduce the observed seismic waves. We evaluated two tsunami forecasting approaches for coastal regions in New Zealand: selecting a pre-computed scenario, and interpolating between two pre-computed scenarios. For the evaluation, we made a reference map of tsunami threat levels in New Zealand using the estimated source model. The results show that the threat level maps from the pre-computed Mw 7.7 scenario located closest to the epicenter, and from an interpolation of two scenarios, match the reference threat levels in most coastal regions. Further improvements to enhance the system toward more robust warnings include expansion of the scenario database and incorporation of tsunami observations around tsunami source regions. We also report on utilization of the coastal gauge and DART station data for updating forecasts in real time during the event and discuss the differences between the rapid-response forecast and post-event retrospective forecasts.
... Advances in PTHA incorporate the anticipated uncertainty associated with the seismic occurrence and rupture characteristics of future megathrust events [29][30][31][32][33][34] (Fig. 4). PTHA considers a comprehensive range of uncertainties in estimates of earthquake occurrence and rupture characteristics and their effects on tsunami waves [29]. ...
Article
Full-text available
Earthquake-triggered giant tsunamis can cause catastrophic disasters to coastal populations, ecosystems and infrastructure on scales over thousands of kilometres. In particular, the scale and tragedy of the 2004 Indian Ocean (about 230,000 fatalities) and 2011 Japan (22,000 fatalities) tsunamis prompted global action to mitigate the impacts of future disasters. In this Review, we summarize progress in understanding tsunami generation, propagation and monitoring, with a particular focus on developments in rapid early warning and long-term hazard assessment. Dense arrays of ocean-bottom pressure gauges in offshore regions provide real-time data of incoming tsunami wave heights, which, combined with advances in numerical and analogue modelling, have enabled the development of rapid tsunami forecasts for near-shore regions (within 3 minutes of an earthquake in Japan). Such early warning is essential to give local communities time to evacuate and save lives. However, long-term assessments and mitigation of tsunami risk from probabilistic tsunami hazard analysis are also needed so that comprehensive disaster prevention planning and structural tsunami countermeasures can be implemented by governments, authorities and local populations. Future work should focus on improving tsunami inundation, damage risk and evacuation modelling, and on reducing the uncertainties of probabilistic tsunami hazard analysis associated with the unpredictable nature of megathrust earthquake occurrence and rupture characteristics. The scale and tragedy of the giant tsunamis in 2004, 2010 and 2011 led to a revolution in tsunami monitoring. This Review assesses the advances in tsunami observation, monitoring and hazard assessment, which have allowed near-real-time early warning systems to be developed. 
The scale and tragedy of the 2004 Indian Ocean Tsunami and the 2011 Tohoku Tsunami prompted the widespread deployment of tsunami observation networks and the development of tsunami modelling, which have enabled tsunami early warning systems to approach near-real-time inundation forecasts, based on the dense arrays of offshore observation data. Earthquake magnitude alone does not characterize the size or impact of the ensuing tsunami disaster. The tsunami source (such as earthquake location and rupture characteristics), coastal geomorphic features, and exposure of densely populated areas have key roles in tsunami behaviour, inundation extent and the level of impact. Probabilistic tsunami hazard assessment (PTHA) is a recently developed method of considering the variability of tsunami conditions for risk mitigation. PTHA can be used in engineering design and to draw up tsunami inundation maps at different return period levels, which can be used to plan local and regional hazard mitigation. To mitigate future tsunami risks, we must be able to reproduce the inundation depth and flow velocity of tsunamis that run up to urban areas. A combination of numerical and physical models is needed to better understand the complex interactions between building layouts, structures, debris and non-hydrostatic flow. Long-term tsunami assessments will inform authorities about requirements for software and hardware countermeasures. Hardware or structural measures (such as sea walls) can reduce loss of life and assets during an event, whereas software or non-structural measures (such as evaluation, assessments and planning) can reduce loss of life.
Article
Full-text available
The 1992 September 1 Nicaragua tsunami manifested itself with an initial shoreline recession, resulting in a fundamental change in the approach used to define the initial waveform of tsunamis, from a solitary wave to an N-wave. Here, we first fit an N-wave profile to seafloor deformation for a large set of earthquake scenarios, assuming that the seafloor deformation resulting from an earthquake transfers instantaneously to the sea surface. Then, relating the N-wave parameters to the earthquake source parameters, we express the initial tsunami profile in terms of the earthquake source parameters. Further, we calculate the maximum tsunami runup through earthquake source parameters and test our results against field runup measurements for several events, observing good agreement.
Article
Full-text available
The complexity of coseismic slip distributions influences the tsunami hazard posed by local and, to a certain extent, distant tsunami sources. Large slip concentrated in shallow patches was observed in recent tsunamigenic earthquakes, possibly due to dynamic amplification near the free surface, variable frictional conditions or other factors. We propose a method for incorporating enhanced shallow slip for subduction earthquakes while preventing systematic slip excess at shallow depths over one or more seismic cycles. The method uses the classic k⁻² stochastic slip distributions, augmented by shallow slip amplification. For the long-term cumulative slip to balance, deep events with lower slip must occur more often than shallow ones with amplified slip. We evaluate the impact of this approach on tsunami hazard in the central and eastern Mediterranean Sea, adopting a realistic 3D geometry for three subduction zones and using it to model ~150,000 earthquakes with Mw from 6.0 to 9.0. We combine earthquake rates, depth-dependent slip distributions, tsunami modeling, and epistemic uncertainty through an ensemble modeling technique. We found that the mean hazard curves obtained with our method show enhanced probabilities for larger inundation heights as compared to the curves derived from depth-independent slip distributions. Our approach is completely general and can be applied to any subduction zone in the world.
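To illustrate the k⁻² slip spectra the abstract refers to, the sketch below generates a random field whose amplitude spectrum decays as k⁻² using random Fourier phases. The grid size, target mean slip and non-negativity handling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def k2_slip(nx=64, ny=32, target_mean_slip=5.0, seed=0):
    """Random slip field with a k^-2 amplitude spectrum, a common
    stochastic model for heterogeneous earthquake slip."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[None, :]
    ky = np.fft.fftfreq(ny)[:, None]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = np.inf                       # suppress the zero-wavenumber term
    amp = k**-2.0                          # k^-2 amplitude decay
    phase = np.exp(2j * np.pi * rng.random((ny, nx)))  # random phases
    slip = np.real(np.fft.ifft2(amp * phase))
    slip -= slip.min()                     # shift to non-negative slip
    slip *= target_mean_slip / slip.mean() # rescale to the desired mean slip
    return slip

field = k2_slip()
print(field.shape, round(field.mean(), 2))  # (32, 64) 5.0
```

In the cited work such fields are additionally amplified near the trench and balanced by occurrence rates; the sketch shows only the spectral construction.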
Article
Full-text available
This study tests three models for generating stochastic earthquake-tsunami scenarios on subduction zones by comparison with deep ocean observations from 18 tsunamis during the period 2006-2016. It focusses on the capacity of uncalibrated models to generate a realistic distribution of hypothetical tsunamis, assuming the earthquake location, magnitude and subduction interface geometry are approximately known, while details of the rupture area and slip distribution are unknown. Modelling problems like this arise in tsunami hazard assessment, and when using historical and paleo-tsunami observations to study pre-instrumental earthquakes. Tsunamis show significant variability depending on their parent earthquake’s properties, and it is important that this is realistically represented in stochastic tsunami scenarios. To clarify which aspects of earthquake variability should be represented, three scenario generation approaches with increasing complexity are tested: a simple fixed-area-uniform-slip model with earthquake area and slip deterministically related to moment magnitude; a variable-area-uniform-slip model which accounts for earthquake area variability; and a heterogeneous-slip model which accounts for both earthquake area variability and slip heterogeneity. The models are tested using deep-ocean tsunami time-series from 18 events (2006-2016) with moment magnitude Mw > 7.7. For each model and each observed event a ‘corresponding family of model scenarios’ is generated which includes stochastic scenarios with earthquake location and magnitude similar to the observation, with no additional calibration. For an ideal model (which perfectly characterises the variability of tsunamis) the 18 observed events should appear like a random sample of size 18, created by taking one draw from each of the 18 ‘corresponding family of model scenarios’. This idea forms the basis of statistical techniques to test the models. 
Firstly, a goodness-of-fit criterion is developed to identify stochastic scenarios 'most similar' to the observed tsunamis, and to assess the capacity of different models to produce good-fitting scenarios. Both the heterogeneous-slip and variable-area-uniform-slip models show similar capacity to generate tsunamis matching observations, while the fixed-area-uniform-slip model performs much more poorly in some cases. Secondly, the observed tsunami stage ranges are tested for consistency with the null hypothesis that they were randomly generated by the model. The null hypothesis cannot be rejected for the heterogeneous-slip model, whereas both uniform-slip models exhibit a statistically significant tendency to produce small tsunamis too often. Finally, the statistical properties of slip for stochastic earthquake scenarios are compared against earthquake scenarios that best fit the observations. For the variable-area-uniform-slip model, the best-fitting scenarios have higher slip on average than the stochastic scenarios, highlighting biases in this model. The techniques developed in this study can be applied to test stochastic tsunami scenario generation techniques, identify and partially correct their biases, and provide better justification for their use in applications.
Article
Full-text available
Site-specific seismic probabilistic tsunami hazard analysis (SPTHA) is a computationally demanding task, as it requires, in principle, a huge number of high-resolution numerical simulations for producing probabilistic inundation maps. We implemented an efficient and robust methodology using a filtering procedure to reduce the number of numerical simulations needed while still allowing for a full treatment of aleatory and epistemic uncertainty. Moreover, to avoid biases in tsunami hazard assessment, we developed a strategy to identify and separately treat tsunamis generated by near-field earthquakes. Indeed, the coseismic deformation produced by local earthquakes necessarily affects tsunami intensity, depending on the scenario size, mechanism and position, as coastal uplift or subsidence tends to diminish or increase the tsunami hazard, respectively. Therefore, we proposed two parallel filtering schemes in the far- and the near-field, based on the similarity of offshore tsunamis and hazard curves and on the similarity of the coseismic fields, respectively. This becomes mandatory as offshore tsunami amplitudes cannot serve as a proxy for coastal inundation in the case of near-field sources. We applied the method to an illustrative use case at the Milazzo oil refinery (Sicily, Italy). We demonstrate that a blind filtering procedure cannot properly account for local sources and would lead to a nonrepresentative selection of important scenarios. For the specific source–target configuration, this results in an overestimation of the tsunami hazard, which turns out to be correlated with dominant coastal uplift. Different settings could produce either the opposite or a mixed behavior along the coastline. However, we show that the effects of the coseismic deformation due to local sources cannot be neglected and a suitable correction has to be employed when assessing local-scale SPTHA, irrespective of the specific signs of coastal displacement.
Article
Full-text available
The slip distribution of the 1960 Chile earthquake was estimated using geodetic data, local tsunami data, and newly usable transoceanic tsunami data. The large slips triggered a significant tsunami which was recorded by tide gauges around the Pacific Ocean. We performed a two-step inversion to estimate the slip distribution. In the first step, we jointly inverted the tsunami waveforms and local geodetic data to recover the ground and seafloor vertical displacement. The transoceanic tsunami data could not be used for waveform inversions until the wave phase and arrival time discrepancies were recently reconciled by improving the long-wave theory with the phase correction method. The random arrival time discrepancy due to inaccurate local bathymetry and/or instrumental problems was accounted for by optimal time alignment. In the second step, we estimated the slip distribution on the plate interface by inverting the vertical displacement obtained in the first step. Checkerboard tests showed that our method and data can resolve displacement at a spatial resolution of at least ~100 km but cannot estimate the rupture velocity. The result for actual data shows a rupture extending about 800 km, with a width of about 150 km, and three asperities. The large slips are concentrated on the shallow offshore plate interface. Our results indicate that the central and south patches contribute to the large coastal elevation changes in the southern source area and the high tsunami waves in the far field. The estimated moment ranges from 1.3 to 1.9 × 10²³ N·m (Mw 9.3–9.4) for rake angles of 90–140°.
Article
Full-text available
In this paper, we have conducted a probabilistic tsunami hazard assessment (PTHA) for Hong Kong (China) and Kao Hsiung (Taiwan), considering earthquakes generated in the Manila subduction zone. The new PTHA methodology extends the stochastic approach of Sepúlveda et al. (2017) by considering uncertainties in the slip distribution and location of future earthquakes. Using sensitivity analyses, we further investigate the uncertainties of the probability properties defining the slip distribution, the location, and the occurrence of earthquakes. We demonstrate that Kao Hsiung and Hong Kong would be significantly impacted by tsunamis generated by Mw > 8.5 earthquakes in the Manila subduction zone. For instance, a specific Mw 9.0 earthquake scenario is capable of producing tsunami amplitudes exceeding 4.0 and 3.5 m in Kao Hsiung and Hong Kong, respectively, with a probability of 50%. Despite the significant tsunami impact, great earthquakes have long mean return periods. As a result, the PTHA shows that Kao Hsiung and Hong Kong are exposed to a relatively small tsunami hazard. For instance, maximum tsunami amplitudes in the assessed locations of Kao Hsiung and Hong Kong exceed 0.32 and 0.18 m, respectively, with a mean return period of 100 years. The inundation hazard in populated areas is small as well, with mean return periods exceeding 1,000 years. Sensitivity analyses demonstrate that the PTHA can be affected by the uncertainties of the probability properties defining the slip distribution, the location, and the occurrence of earthquakes. However, PTHA results are most sensitive to the choice of the earthquake occurrence model.
Technical Report
Full-text available
This report describes the 2018 Probabilistic Tsunami Hazard Assessment for Australia (henceforth PTHA18). The PTHA18 estimates the frequency with which earthquake generated tsunamis of any given size occur in deep waters around the Australian coastline. To do this it simulates hundreds of thousands of possible tsunami scenarios from key earthquake sources in the Pacific and Indian Oceans, and models the frequency with which these occur. To justify the PTHA18 methodologies a significant fraction of the report is devoted to testing the tsunami scenarios against historical observations, and comparing the modelled earthquake rates against alternative estimates. Although these tests provide significant justification for the PTHA18 results, there remain large uncertainties in “how often” tsunamis occur at many sites. This is due to fundamental limitations in present-day scientific knowledge of how often large earthquakes occur.
Technical Report
Full-text available
Located within an intraplate setting, continental Australia has a relatively low rate of seismicity compared with its surrounding plate boundary regions. However, the plate boundaries to the north and east of Australia host significant earthquakes that can impact Australia. Large plate boundary earthquakes have historically generated damaging ground shaking in northern Australia, including Darwin. Large submarine earthquakes have historically generated tsunami impacting the coastline of Australia. Previous studies of tsunami hazard in Australia have focussed on the threat from major subduction zones such as the Sunda and Kermadec Arcs. Although still subject to uncertainty, our understanding of the location, geometry and convergence rates of these subduction zones is established by global tectonic models. Conversely, actively deforming regions in central and eastern Indonesia, the Papua New Guinea region and the Macquarie Ridge region are less well defined, with deformation being more continuous and less easily partitioned onto discrete known structures. A number of recently published geological, geodetic and seismological studies are providing new insights into present-day active tectonics of these regions, providing a basis for updating earthquake source models for earthquake and tsunami hazard assessment. This report details updates to earthquake source models in active tectonic regions along the Australian plate boundary, with a primary focus on regions to the north of Australia, and a subsidiary focus on the Puysegur-Macquarie Ridge-Hjort plate boundary south of New Zealand. The motivation for updating these source models is threefold: 1. To update regional source models for the 2018 revision of the Australian probabilistic tsunami hazard assessment (PTHA18); 2. To update regional source models for the 2018 revision of the Australian national seismic hazard assessment (NSHA18); and 3. To provide an updated database of earthquake source models for tsunami hazard assessment in central and eastern Indonesia, in support of work funded through the Department of Foreign Affairs and Trade (DFAT) DMInnovation program.
Article
A complete suite of closed analytical expressions is presented for the surface displacements, strains, and tilts due to inclined shear and tensile faults in a half-space for both point and finite rectangular sources. These expressions are particularly compact and free from field singular points which are inherent in the previously stated expressions of certain cases. The expressions derived here represent powerful tools not only for the analysis of static field changes associated with earthquake occurrence but also for the modeling of deformation fields arising from fluid-driven crack sources.
Article
Historical reports of earthquake effects from the period 1681 to 1877 in Java, Bali, and Nusa Tenggara are used to independently test ground-motion predictions in Indonesia's 2010 and 2017 national probabilistic seismic hazard assessments (PSHAs). Assuming that strong ground motion occurrence follows a Poisson distribution, we cannot reject Indonesia's current and previous PSHA for key cities in Java at 95% confidence, although the results suggest an incremental improvement in the updated PSHA. The source mechanisms of important historical earthquakes are estimated by undertaking a grid search of source parameters and using ground-motion models (GMMs) and ground motion to intensity conversion equations (GMICEs) to forward model intensity at each intensity data point. Bayesian inference is applied to calculate the distribution of source parameters given the historical intensity data. The results demonstrate that large intraslab earthquakes have been responsible for major earthquake disasters in Java, including an Mw ~7.4 intraslab earthquake near Jakarta in 1699 and an Mw ~7.8 event in 1867 in Central Java. The results also highlight the potential for large earthquakes to occur on the Flores thrust, with a cluster of large earthquakes rupturing the Flores thrust in 1815, 1818, and 1820. The results show that large shallow crustal earthquakes (Mw > 6) occurred in regions of Java where active faults have not been mapped, highlighting the need for further research to identify these faults for future seismic hazard assessments. We do not find conclusive evidence for the occurrence of large earthquakes on the Java megathrust during the time period of this study; however, because of difficulties using intensity data to discriminate between subduction intraslab and interface sources, we cannot exclude megathrust source models for the 1699 and 1867 events, and note other possible megathrust events in 1757, 1780, and 1851.
Electronic Supplement: Tables and figures of intensity datasets used in the analysis.
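The Poisson consistency test mentioned in the abstract above can be sketched as follows. Given a hazard model's predicted exceedance rate and an observation window, one asks how surprising the observed count would be under a Poisson assumption; the rate and record length below are hypothetical illustrations, not values from the study:

```python
import math

def poisson_sf(k_obs, lam):
    """P(N >= k_obs) for N ~ Poisson(lam), via the cumulative pmf."""
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(k_obs))
    return 1.0 - cdf

# Hypothetical check: a model predicting 0.02 exceedances/year,
# tested against 3 observed exceedances in a 200-year record.
lam = 0.02 * 200            # expected count over the window
p = poisson_sf(3, lam)      # chance of seeing 3 or more events
print(round(p, 3))          # ~0.762: not rejected at 95% confidence
```

A small survival probability (or a small probability of seeing so few events) would argue for rejecting the model at the chosen confidence level.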
Article
Conventional tsunami warning systems for local and far-field areas utilize uniform slip models to predict tsunami waves. The reasons are twofold. First, it is challenging to develop accurate finite-fault slip models in a short time after an earthquake. Second, tsunami waves are long waves, and hence their main features may be predicted without knowing earthquake rupture details. Still, there have been few studies that quantitatively analyze the errors caused by uniform slip models. In this paper, we evaluate if and how such models may be applied for tsunami warnings. For the 2011 Tohoku, 2014 Iquique, and 2015 Illapel tsunamis, we first construct optimum uniform slip models with minimum tsunami waveform misfit and then compare the synthetic tsunami waves with finite-fault model predictions. Predictions from both types of models match the tsunami data very well, indicating that the prediction errors caused by neglecting slip heterogeneity are insignificant. Further, we derive a common relation between the rupture area and earthquake magnitude. Additionally, the optimum rupture length-to-width ratio for predicting tsunami waves is determined to be 1 for the three earthquakes. Lastly, we find that moving the uniform slip model to the center of the global Centroid Moment Tensor solution produces reasonably small errors in the predicted waveforms. Applying the methodology to three more historic tsunamis shows that uniform slip models can well recover the Deep-ocean Assessment and Reporting of Tsunamis system recordings, but the rupture center can differ from the global Centroid Moment Tensor solution. Our findings can potentially prompt more reliable tsunami warning strategies for future events.
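The uniform-slip construction described above can be sketched by inverting the moment relation: given a magnitude and a rupture area, the implied uniform slip is D = M0/(μLW). The magnitude, fault dimensions and rigidity below are hypothetical illustrations, not the study's fitted values:

```python
import math

def uniform_slip(mw, length_m, width_m, rigidity_pa=30e9):
    """Uniform slip implied by a moment magnitude over a given rupture area."""
    m0 = 10 ** (1.5 * mw + 9.05)  # invert Mw = (2/3)(log10 M0 - 9.05)
    return m0 / (rigidity_pa * length_m * width_m)

# Hypothetical Mw 9.0 rupture with the length-to-width ratio of 1
# suggested above, e.g. a 200 km x 200 km fault plane:
print(round(uniform_slip(9.0, 200e3, 200e3), 1))  # ~29.6 m
```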