Bull Earthquake Eng (2012) 10:615–643
DOI 10.1007/s10518-011-9296-9
ORIGINAL RESEARCH PAPER
Earthquake insurance portfolio analysis of wood-frame
houses in south-western British Columbia, Canada
Katsuichiro Goda · Hiromichi Yoshikawa
Received: 7 March 2011 / Accepted: 16 June 2011 / Published online: 2 July 2011
© Springer Science+Business Media B.V. 2011
Abstract Earthquake disasters affect many structures and infrastructure simultaneously
and collectively, and cause tremendous tangible and intangible loss. In particular, catastrophic
earthquakes impose tremendous financial stress on insurers who underwrite earthquake insur-
ance policies in a seismic region, resulting in possible insolvency. This study develops a
stochastic net worth model of an insurer undertaking both ordinary risk and catastrophic
earthquake risk, and evaluates its solvency and operability under catastrophic seismic risk.
The ordinary risk is represented by a geometric Brownian motion process, whereas the
catastrophic earthquake risk is characterized by an earthquake-engineering-based seismic
loss model. The developed model is applied to a hypothetical portfolio of 4000 wood-frame houses in
south-western British Columbia, Canada, to investigate the impact of key insurance port-
folio parameters on the insurer’s ruin probability and business operability. The analysis results
indicate: (i) the physical effects of spatially correlated ground motions and local soil condi-
tions at insured properties are significant; (ii) the insurer’s earthquake risk exposure depends
greatly on insurance arrangement (e.g. deductible and cap); and (iii) the maintenance of suffi-
cient initial surplus is critical in keeping insurer’s insolvency potential reasonably low, while
volatility of non-catastrophic risk is the key for insurer’s business stability. The results high-
light the importance of adequate balance between business stability under normal conditions
and solvency under extreme conditions for efficient earthquake risk management. Flexibility
for determining an insurance arrangement would be beneficial for insurers to enhance their
portfolio performance and to offer more affordable coverage to their clients.
Keywords Seismic loss · Earthquake insurance · Wood-frame house · Portfolio analysis ·
Ruin probability · Stochastic net worth model · Catastrophic earthquake risk
K. Goda (✉)
Department of Civil Engineering, University of Bristol, Queen’s Building, University Walk,
Bristol BS8 1TR, UK
e-mail: katsu.goda@bristol.ac.uk
H. Yoshikawa
Advanced Research Laboratory, Tokyo City University, Tokyo, Japan
1 Introduction
An infrequent large earthquake affects many structures and infrastructure simultaneously
and collectively, and causes tremendous tangible and intangible loss and chaos to urban
cities. Incurred seismic damage costs include loss of life and limb, direct financial loss
to building properties and utility systems, and indirect loss due to the ripple effects
across regional and national economies. A significant feature of catastrophic earthquake
disasters is the triggering of a surge of damage and loss from numerous individuals/prop-
erties in urban cities where socioeconomic activities are underpinned by various intri-
cately-connected infrastructure systems. In particular, catastrophic earthquakes impose tre-
mendous financial stress on insurers and reinsurers who underwrite earthquake insur-
ance policies in a seismic region (Kleindorfer and Kunreuther 1999; Grace et al. 2003).
This is because buildings and infrastructure are simultaneously affected by strong ground
motions due to the same earthquake, and spatiotemporally correlated insurance claims
are induced, resulting in possible insolvency of insurance companies. Hence, for seis-
mic risk assessment and insurance portfolio management, it is of particular importance
to capture such physical characteristics adequately in modeling earthquake risk (Dong
and Grossi 2005), as they affect the upper tail of the probability distribution of aggre-
gate seismic loss for a building portfolio significantly (Goda and Hong 2009; Goda et al.
2011).
From an overall market viewpoint, earthquake insurance loss does not usually constitute the
major part of natural disaster/catastrophe loss claims on a regular basis (note: it fluctuates
significantly from year to year). However, the potential earthquake-related loss can be extremely
high. For instance, two major earthquakes in 2010, the M8.8 Chile and M7.0 Darfield
(New Zealand) earthquakes, caused insurance claims of about 8.0 and 4.4 billion U.S. dol-
lars, respectively (Swiss Re 2011), which rank among the top 20 costliest natural catastrophes since
1970 (in terms of insurance loss). Moreover, the two most recent devastating events in Christ-
church (New Zealand) and Tohoku (Japan) are expected to reach insurance claims of 12 and
20–30 billion U.S. dollars, respectively. (Note: the insured loss is only a fraction of the total
economic seismic loss.)
Accurate assessment of potential impact of destructive earthquakes and effective imple-
mentation of risk mitigation measures require a decision-support tool for quantitative
seismic loss estimation by taking into account key uncertainty and dependence in earth-
quake occurrence, ground motion intensity, and nonlinear structural behavior. Many meth-
ods and tools have been developed for such purposes (e.g. Chang et al. 2000; Porter
et al. 2001; FEMA and NIBS 2003; Onur et al. 2006; Goulet et al. 2007; Ellingwood
et al. 2008; Yucemen et al. 2008; Goda and Hong 2009; Tsubota et al. 2009; Black et
al. 2010; Goda et al. 2011). The suitability/choice of these models/tools depends on the
scope of an investigation: (i) single structure or multiple structures; (ii) available build-
ing inventory information (building/site specific or generic/regional); (iii) single scenario
or multiple scenarios; and (iv) seismic loss variables of interest (e.g. statistics, fractiles
corresponding to specific probability levels, or probability distribution of seismic loss).
For instance, the HAZUS-Earthquake (FEMA and NIBS 2003), which is widely accepted
in the United States, can be used to assess the expected seismic loss of a portfolio of
buildings and infrastructure due to a scenario earthquake, and serves as a critical risk
management tool for both pre-disaster planning and post-disaster relief activities. How-
ever, it cannot evaluate the probability distribution of aggregate seismic loss; such infor-
mation is valuable for evaluating/comparing viable mitigation options (Black et al. 2010)
and for analyzing insurance/reinsurance portfolio under earthquake risk (Dong and Grossi
2005; Kleindorfer et al. 2005; Bazzurro and Park 2007). Alternatively, if a few site-specific
structures are of interest, a rigorous assembly-based seismic loss model (Porter et al. 2001)
may be selected. Nonetheless, it requires detailed input information as well as significant
modeling and computational effort, and its implementation becomes impractical when it is
applied to numerous structures. If our focus is aimed at regional seismic loss estimation of
multiple buildings due to numerous earthquake scenarios, some reasonable simplification is
necessary. Moreover, there is an urgent research need to develop rigorous tools/methods for
more complex risk management problems.
This study develops a stochastic model of the net worth of an insurance company that
undertakes both ordinary risk and catastrophic earthquake risk, and evaluates its sol-
vency and operability under catastrophic seismic risk. The ordinary risk, associated with
non-catastrophic insurance business and investment return, is considered to be diversifi-
able through profitable premium collection and efficient portfolio management, whereas
the catastrophic earthquake risk is regarded as non-diversifiable due to its high spatio-
temporal concentration of insurance pay-out. Typically, the non-catastrophic risk can be
represented by a diffusion process (Ren 2005), whereas the catastrophic earthquake risk
can be modeled as a Poisson jump process (Powers and Ren 2003). Alternatively, for
characterizing catastrophic earthquake risk, a suitable earthquake-engineering-based seis-
mic loss model (tools/methods mentioned above) can be used. In reality, the use of
computer models for assessing the extent of seismic loss is inevitable due to lim-
ited availability of empirical seismic loss data, and is the standard practice for quan-
titative insurance portfolio analysis (Dong and Grossi 2005; Kleindorfer et al. 2005;
Kuzak and Larsen 2005). Specifically, the stochastic model of an insurer’s net worth
consists of a geometric Brownian motion and a regional seismic loss model of exist-
ing wood-frame houses in south-western British Columbia, Canada (Goda et al. 2011).
The latter earthquake-engineering-based model incorporates up-to-date seismic hazard
information (Goda et al. 2010; Atkinson and Goda 2011), spatial correlation mod-
els of peak ground motions and response spectra for crustal and subduction earth-
quakes (Goda and Hong 2008; Goda and Atkinson 2009, 2010), and seismic vulner-
ability models of conventional wood-frame houses in south-western British Columbia
(Goda and Atkinson 2011). The developed net worth model is then employed to con-
duct a series of sensitivity analyses to investigate the impact of key insurance port-
folio parameters on the insurer’s ruin probability and business operability. The considered
parameters include: insurance arrangement (e.g. deductible and cap), insurer’s initial sur-
plus, growth rate and volatility of non-catastrophic risk portfolio, and safety loading fac-
tor for catastrophic earthquake coverage. Seismic risk assessment and insurance portfo-
lio analysis using realistic building data and up-to-date knowledge on earthquake engi-
neering can offer valuable guidance in developing a reliable and affordable insurance
system.
This paper is organized as follows. In Sect. 2, a mathematical formulation of the stochas-
tic model of an insurer’s net worth consisting of ordinary risk and catastrophic earthquake
risk is given. This is followed by descriptions of the earthquake-engineering-based seismic
loss model for conventional wood-frame houses in Vancouver. In Sect. 3, a numerical exam-
ple is set up by focusing on a hypothetical portfolio of 4000 wood-frame houses in Vancouver that are
distributed within a 20 km by 16 km area and at different site conditions. Using the devel-
oped insurer’s net worth model, a series of sensitivity analyses are performed to identify key
parameters of insurance portfolio; the results are employed to draw important observations
and conclusions about insurer’s risk management strategy by balancing insurer’s insolvency
and business operability.
Fig. 1 Insurer’s net worth process
2 Insurer’s net worth model
Insurers undertake various kinds of risks from households and companies, collect insurance
premiums for compensation, and invest available funds for financial return. The role of insur-
ance in the modern world is important, as it reduces temporal variability of the financial conditions
of stakeholders and enhances their economic and operational stability. Insurers are able
to smooth out their financial conditions by dealing with numerous stakeholders. However,
when insurers undertake catastrophic risks, the effectiveness of conventional diversification tech-
niques is severely limited. In the real world, there is a significant concern for insurers about
their potential insolvency due to catastrophic risks (Kleindorfer and Kunreuther 1999; Grace
et al. 2003). The focus of this section is to develop a stochastic net worth model of an insurer
subjected to both non-catastrophic and catastrophic risks.
2.1 Stochastic process of insurer’s net worth
Consider that an insurer is exposed to two risks, non-catastrophic risk and catastrophic risk.
In the context of this study, the non-catastrophic risk corresponds to undertaking of ordinary
insurance and investment (e.g. property and liability insurance), whereas the catastrophic
risk corresponds to underwriting of earthquake insurance in a seismic region. The insurer’s
net worth W(t) can be represented by a mixed diffusion-jump stochastic model (Powers and
Ren 2003):

W(t) = W_NonCat(t) + W_Cat(t),    (1)

where W_NonCat(t) is the insurer’s worth related to non-catastrophic risk with initial surplus
W_0, and W_Cat(t) is the insurer’s worth associated with catastrophic risk. An insurer’s net worth
process subjected to both non-catastrophic and catastrophic risks is illustrated in Fig. 1. The
diffusion process W_NonCat(t) is under ceaseless fluctuation with positive drift due to pay-out
and pay-in from insurance contracts and investment. The catastrophic jump process W_Cat(t)
occurs infrequently, but when it occurs, the drop in the insurer’s net worth can be extremely
large. Potentially, such a downward jump causes a negative net worth, forcing an insurer to
become insolvent; an incidence of W(t) < 0 defines insolvency. In addition to the possibility
of insolvency, insurers might be concerned about other criteria/situations, such as zero profit
and operational restriction.
The random process W_NonCat(t) can be described by a stochastic differential equation.
Ren (2005) suggested that one-dimensional stochastic differential models can be used to
approximate the insurer’s net worth process for non-catastrophic risk, which is a simplifi-
cation of multiple simultaneous random processes, such as the incurred loss process, pay-out
process, expense process, and investment process. One of the most popular stochastic mod-
els for financial modeling of an asset process is the geometric Brownian motion, which is
given by (Ross 2003; Ren 2005):

dW_NonCat(t) = α W_NonCat(t) dt + β W_NonCat(t) dZ(t),    (2)

where α and β are the instantaneous growth rate and infinitesimal standard deviation
(or volatility) of the insurer’s worth (in terms of W_NonCat(t)) related to non-catastrophic
risk, respectively, and Z(t) is a standard Brownian motion (or Wiener process) due to pertur-
bation of non-catastrophic risk processes. The drift and volatility parameters α and β can be
estimated from relevant insurance data. Ren (2005) proposed several formulas to estimate α
and β by using available insurance claim/expense databases; values of α = 0.06 and β = 0.11
are obtained for a typical property-liability insurer in the United States based on the National
Association of Insurance Commissioners database. It is noted that both α and β might vary
significantly, depending on countries/regions, lines of business (e.g. motor, liability, health,
etc.), and economic climate. Some general guidance for selecting reasonable values of
α and β can be found in various reports from major insurance and reinsurance companies.
For instance, Aon Benfield (2009) indicates that typical volatility in North America (United
States and Canada) ranges around 0.15–0.5, depending on the line of business. Numerical
evaluation of Eq. (2) can be done using Monte Carlo simulation; with parameters W_0, α, and
β, the insurer’s net worth at time t is given by:

W_NonCat(t) = W_0 exp[(α − 0.5 β^2) t + β Z(t)],    (3)

noting that Z(t) is the cumulative Brownian process.
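For illustration, Eq. (3) can be evaluated by straightforward Monte Carlo simulation, as in the following minimal sketch (Python with NumPy; the time step, number of paths, initial surplus, and seed are placeholder choices, with α = 0.06 and β = 0.11 taken from the property-liability values quoted above):

```python
import numpy as np

def simulate_noncat_worth(w0, alpha, beta, t_horizon=1.0, n_steps=250, n_paths=10000, seed=1):
    """Simulate W_NonCat(t) = W0*exp((alpha - 0.5*beta^2)*t + beta*Z(t)), i.e. Eq. (3)."""
    rng = np.random.default_rng(seed)
    dt = t_horizon / n_steps
    dz = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Brownian increments
    z = np.cumsum(dz, axis=1)                                   # cumulative Wiener process Z(t)
    t = np.linspace(dt, t_horizon, n_steps)
    return t, w0 * np.exp((alpha - 0.5 * beta**2) * t + beta * z)

# One-year horizon with W0 = 50 million CAD and the quoted drift/volatility values
t, w = simulate_noncat_worth(w0=50e6, alpha=0.06, beta=0.11)
print("mean W(1):", w[:, -1].mean(), " 5th percentile:", np.percentile(w[:, -1], 5))
```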
The catastrophic component of the insurer’s worth W_Cat(t) can be modeled as a jump
process:

dW_Cat(t) = (1 + θ) μ λ dt − L_EQ,I(t) dN(t),    (4)

where θ is the safety loading factor, L_EQ,I(t) is the insurer’s aggregate seismic loss due to a
seismic event that occurs at time t, which is considered to be an independent and identically
distributed random variable with mean μ, and N(t) is a Poisson counting process with mean
λt. The product λμ equals the pure premium per unit time for the insurance portfolio. Note
that L_EQ,I(t) is associated with aggregate seismic loss for the building portfolio of m prop-
erties and is a function of insurance arrangements for individual properties. Specifically, it
can be calculated as:

L_EQ,I(t) = Σ_{i=1}^{m} IP[L_EQ(i)(t)],    (5)

where L_EQ(i)(t) is the seismic loss to the i-th property in the portfolio, and IP(·) is the
insurance pay-out function and is given by:

IP[L_EQ(i)] = 0                  for L_EQ(i) ≤ D
            = γ (L_EQ(i) − D)    for D < L_EQ(i) < C
            = γ (C − D)          for L_EQ(i) ≥ C,    (6)

in which D, C, and γ are the deductible, cap, and coinsurance factor, respectively. The
parameters can be adjusted to design various kinds of insurance pay-out structures to transfer
undesirable portions of the probability distribution of seismic loss per property to the insurer.
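A minimal sketch of the pay-out structure in Eq. (6) is given below (Python; losses and the deductible/cap are expressed as fractions of the insured value, consistent with arrangements such as [D, C, γ] = [0.1, 0.5, 1.0] used in Sect. 3; the numerical loss values are hypothetical):

```python
import numpy as np

def insurance_payout(loss_ratio, deductible=0.1, cap=0.5, coinsurance=1.0):
    """Pay-out function IP(.) of Eq. (6), applied element-wise to per-property loss ratios."""
    loss_ratio = np.asarray(loss_ratio, dtype=float)
    capped = np.clip(loss_ratio, deductible, cap)   # bound the loss by D from below and C from above
    return coinsurance * (capped - deductible)       # zero below D, gamma*(C - D) above C

# Portfolio aggregation as in Eq. (5): sum the pay-outs over the m properties
losses = np.array([0.05, 0.12, 0.30, 0.80])          # hypothetical per-property loss ratios
print(insurance_payout(losses), insurance_payout(losses).sum())   # [0. 0.02 0.2 0.4] and 0.62
```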
Fig. 2 Seismic loss estimation framework
By combining Eqs. (2) and (4), Eq. (1) can be rewritten as:

dW(t) = [α W_NonCat(t) + (1 + θ) λμ] dt + β W_NonCat(t) dZ(t) − L_EQ,I(t) dN(t).    (7)
In Eq. (7), the first term is related to premium income and investment return by dealing
with both non-catastrophic and catastrophic risks, the second term is due to pay-in/pay-out
fluctuation of non-catastrophic risk, and the third term is related to infrequent negative jumps
due to earthquakes. It is noteworthy that the jump process term can be substituted by an
earthquake-engineering-based seismic loss model, which is more rigorous and accurate than
a compound Poisson process.
2.2 Earthquake risk exposure modeling
In this study, a regional seismic loss model for conventional wood-frame houses in south-
western British Columbia, which was developed by Goda et al. (2011), is adopted. The
model is fully probabilistic and evaluated using Monte Carlo simulation; the seismic loss esti-
mation framework is illustrated in Fig. 2. The use of this seismic loss model is particularly
advantageous, because up-to-date seismological knowledge as well as region-specific infor-
mation on conventional wood-frame houses is incorporated consistently. In the following,
key features of the model components are presented. Interested readers are recommended
to refer to Goda et al. (2011) for more detailed model descriptions and sensitivity analysis
results related to seismic hazard modeling and structural demand/capacity modeling.
2.2.1 Probabilistic seismic hazard analysis for western Canada
Probabilistic seismic hazard analysis offers a rational framework to describe elastic seismic
demand in terms of peak ground motions and response spectra (Cornell 1968; McGuire 2004).
It incorporates earthquake occurrence models, seismic source zones, magnitude-recurrence
relations, and ground motion prediction equations (GMPEs) through the total probability theo-
rem. The output from probabilistic seismic hazard analysis is the expected seismic intensity
that corresponds to a specified probability level, and is often presented as a uniform hazard
spectrum and seismic hazard deaggregation. For Canadian locations, seismic hazard infor-
mation given by Adams and Halchuk (2003) is especially relevant, as it forms the basis of
seismic provisions of the current National Building Code of Canada (NBCC2005). In this
study, an updated seismic hazard model for western Canada (Goda et al. 2010; Atkinson
and Goda 2011) is used to reflect newly available seismological information and models.
The key improvements include: (i) conversion of different magnitude scales into a uniform
moment magnitude scale in the earthquake catalog; (ii) reevaluations of magnitude-recur-
rence relations for different earthquake sources based on a longer and more homogeneous
catalog up to the end of 2008; (iii) use of newer GMPEs with proper distance measure
conversions; and (iv) consideration of probabilistic mega-thrust Cascadia scenario events.
The sensitivity analyses conducted by Goda et al. (2010) indicate that the use of recently
developed GMPEs, accounting for epistemic uncertainty, has significant impact on seismic
hazard estimates, whereas the implementation of proper distance measure conversion in
evaluating GMPEs has moderate impact on seismic hazard estimates. Another advantageous
feature of using recent GMPEs is that local site condition for seismic hazard assessment,
which is represented by the average shear-wave velocity in the uppermost 30 m from ground
surface VS30, can be flexibly adjusted for different site conditions other than the reference
site condition that corresponds to the NEHRP site class C (VS30 of approximately
500–600 m/s).
The assessment is carried out based on Monte Carlo simulation. Firstly, earthquake occur-
rence models, seismic source zones, and magnitude-recurrence relations are used to produce
a synthetic earthquake catalog, which typically includes information on occurrence time,
source location, earthquake magnitude, fault length, and fault width; see Panel 1 in Fig. 2.
Then, by using suitable GMPEs and spatial correlation models of peak ground motions and
response spectra, simultaneous seismic effects at different locations are generated for each
seismic event (see Panel 2 in Fig. 2 and Sect. 2.2.2). These two steps generate probabilistic
estimates of elastic seismic demands at multiple sites for many seismic events over a certain
period (i.e. multiple seismic intensity maps). This information is used as input in seismic
vulnerability analysis.
To show the impact of adopting the updated seismic hazard model instead of the
NBCC2005 model, four sets of seismic hazard estimates for Vancouver are compared in
Fig. 3: two estimates are obtained from the updated model for VS30 =555 and 200 m/s,
respectively, while the other two are obtained from the NBCC2005 model for VS30 =555
and 200 m/s, respectively. In Fig. 3a, mean uniform hazard spectra are presented, while in
Fig. 3b, mean seismic hazard curves for spectral acceleration at 0.3 s are shown. Spectral
acceleration at 0.3 s is chosen because it is adopted as the seismic intensity measure to charac-
terize seismic vulnerability models of conventional wood-frame houses (Goda and Atkinson
2011). For the same site condition, seismic hazard estimates at short vibration periods from
the updated model are smaller than those from the NBCC2005 model. In particular, for soft
soil condition, the difference between the seismic hazard estimates from the updated and
NBCC2005 models is significant (Fig. 3b).
Fig. 3 Probabilistic seismic hazard analysis results for Vancouver: a uniform hazard spectra and b seismic hazard curves for spectral acceleration at 0.3 s
2.2.2 Generation of spatially correlated seismic intensities at multiple sites
Most GMPEs have been developed in association with probabilistic seismic hazard analysis,
which is primarily focused on a single site. However, if seismic hazard and risk assessments
at multiple sites are of interest, an extension of typical GMPEs is necessary by characterizing
their correlation structures (Goda and Hong 2008). With such correlation models, seismic
loss estimation of multiple buildings and infrastructural systems is greatly facilitated.
Consider spectral accelerations at two sites i and j, SA_ik(T_i) and SA_jk(T_j), due to the
k-th seismic event in a synthetic earthquake catalog; the two sites are separated by a distance
Δ_ij (km), and T_i and T_j represent vibration periods of linear elastic single-degree-of-freedom
oscillators at sites i and j, respectively. Using a suitable GMPE, SA_ik(T_i) and SA_jk(T_j) are
characterized by:
log10 SA_ik(T_i) = f(M_k, R_ik, λ_ik, T_i) + η_k(T_i) + ε_ik(T_i),    (8a)

log10 SA_jk(T_j) = f(M_k, R_jk, λ_jk, T_j) + η_k(T_j) + ε_jk(T_j),    (8b)

and

ρ_T(Δ_ij, T_i, T_j) = [ρ_η(T_i, T_j) σ_η(T_i) σ_η(T_j) + ρ_ε(Δ_ij, T_i, T_j) σ_ε(T_i) σ_ε(T_j)] / [σ_T(T_i) σ_T(T_j)],    (8c)
where f(M, R, λ, T) is the median GMPE as functions of the moment magnitude M, dis-
tance R (typically, closest distance from station to rupture plane), and additional explanatory
variables λ; η(T) is the inter-event residual with zero mean and standard deviation σ_η(T);
and ε(T) is the intra-event residual with zero mean and standard deviation σ_ε(T). η(T) and
ε(T) are assumed to be independent and normally distributed, and the total standard devia-
tion σ_T(T) therefore equals [σ_η^2(T) + σ_ε^2(T)]^0.5. In Eq. (8c), ρ_T(Δ_ij, T_i, T_j) is the correlation
coefficient between η_k(T_i) + ε_ik(T_i) and η_k(T_j) + ε_jk(T_j); ρ_η(T_i, T_j) is the inter-event corre-
lation coefficient between η_k(T_i) and η_k(T_j); and ρ_ε(Δ_ij, T_i, T_j) is the intra-event correlation
coefficient between ε_ik(T_i) and ε_jk(T_j). The inter-event residual is calculated as the overall
deviation of spectral accelerations at multiple recording stations from a GMPE for a given
seismic event (i.e. deviation of event-based spectral acceleration from a GMPE), whereas
the intra-event residual is evaluated as the deviation of spectral acceleration at a particular
site from the event-based spectral acceleration. These correlations can be evaluated by sta-
tistical analysis of regression residuals (Goda and Hong 2008; Goda and Atkinson 2009,
2010).
Two spatial correlation models of seismic effects (i.e. ρ_η(T_i, T_j) and ρ_ε(Δ_ij, T_i, T_j) in
Eq. (8c)) are adopted in this study: the GH08 model (Goda and Hong 2008) for shallow crustal
earthquakes in North America, and the GA09 model (Goda and Atkinson 2009, 2010) for
Japanese earthquakes. The GH08 model may be suitable for shallow crustal earthquakes in
south-western British Columbia, as it is based on statistical analysis of strong ground motion
records from the PEER-NGA database (Power et al. 2008), while the GA09 model may
be applicable to inslab and interface earthquakes in the Cascadia subduction zone, because
many GMPEs for subduction earthquakes have been developed by including ground motion
records from the K-NET and KiK-NET databases (Aoi et al. 2004). Specifically, the GH08
model consists of:
ρ_η(T_i, T_j) = 1 − cos{π/2 − [0.359 + 0.163 I_(Tmin<0.189) ln(Tmin/0.189)] ln(Tmax/Tmin)},    (9a)

(adopted from Baker and Cornell (2006); see Goda and Hong (2008)), and:

ρ_ε(Δ_ij, T_i, T_j) = exp[−(0.68 − 0.16 ln Tmax) Δ_ij^0.44],    (9b)

where Tmax and Tmin are the larger and smaller of T_i and T_j, respectively; I_(Tmin<0.189) is the
indicator function which equals one if Tmin is < 0.189 s and equals zero otherwise. The GA09
model is given by:
Fig. 4 Comparison of the GH08 and GA09 models for spectral acceleration at 0.3 s
ρ_η(T_i, T_j) = (1/3) [1 − cos{π/2 − [1.374 + 5.586 I_(Tmin<0.25) (Tmin/Tmax)^0.728 log10(Tmin/0.25)] log10(Tmax/Tmin)}]
             + (1/3) [1 + cos(1.5 log10(Tmax/Tmin))],    (10a)

and

ρ_ε(Δ_ij) = max[1.389 exp(−0.207 Δ_ij^0.386) − 0.389, 0],    (10b)

where I_(Tmin<0.25) is the indicator function with a threshold value of 0.25 s.
To illustrate the difference between the GH08 and GA09 models, total correlation coef-
ficients ρ_T based on these two models are compared in Fig. 4 for spectral acceleration at
0.3 s. For this comparison, the ratio of inter-event or intra-event standard deviation to total
standard deviation is calculated from the GMPE by Hong and Goda (2007) for the GH08
model and from the GMPE by Goda and Atkinson (2009) for the GA09 model. Fig. 4 clearly
shows that the GA09 model predicts much higher correlation than the GH08 model for the
same separation distance; the decay of the correlation coefficient as a function of separation
distance is more gradual for the GA09 model than the GH08 model, and the plateau level
that is reached for relatively large separation distances (e.g. 100 km) is higher for the GA09
model than the GH08 model (0.3 vs. 0.1). It is noteworthy that the correlation coefficient
is a relative quantity related to proportions of uncertainty for different components and the
difference of the two models in terms of intra-event variability or semivariogram is less
than that in terms of total correlation coefficient. Moreover, variability of the spatial corre-
lation around the average trend is large (Goda and Hong 2008; Goda and Atkinson 2009,
2010). It is thus prudent to use a GMPE and the corresponding correlation model as a set
in seismic hazard/risk analysis; arbitrary mixture of GMPEs and correlation models that are
developed based on different record sets and have different magnitudes of uncertainty may
induce unintended bias in the assessment results.
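To make the use of such correlation models in the Monte Carlo framework concrete, the following sketch (Python with NumPy) samples spatially correlated residuals η_k + ε_ik for single events at a few sites from a multivariate normal distribution; the distance decay follows the GA09 intra-event form of Eq. (10b) as reconstructed above, while the site coordinates and the inter-/intra-event standard deviations are placeholder values rather than those of any specific GMPE:

```python
import numpy as np

def intra_event_corr(delta_km):
    """Intra-event spatial correlation vs. separation distance (GA09 form of Eq. (10b))."""
    return np.maximum(1.389 * np.exp(-0.207 * delta_km**0.386) - 0.389, 0.0)

def sample_event_residuals(coords_km, sigma_eta=0.25, sigma_eps=0.55, n_events=2000, seed=2):
    """Sample eta_k(T) + eps_ik(T) (log10 units) at all sites for n_events earthquakes."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(coords_km[:, None, :] - coords_km[None, :, :], axis=2)  # site-to-site distances
    cov_eps = sigma_eps**2 * intra_event_corr(d)                               # intra-event covariance
    eps = rng.multivariate_normal(np.zeros(len(coords_km)), cov_eps, size=n_events)
    eta = rng.normal(0.0, sigma_eta, size=(n_events, 1))   # inter-event term, common to all sites
    return eta + eps

coords = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 0.0], [20.0, 0.0]])   # hypothetical sites (km)
residuals = sample_event_residuals(coords)
print(np.corrcoef(residuals.T).round(2))   # empirical total correlation decays with distance
```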
2.2.3 Seismic vulnerability and loss assessment of wood-frame houses
Structural vulnerability models quantify the extent of structural damage for a given seismic
excitation, and have significant impact on seismic loss estimation. The first task in seismic
vulnerability assessment is collection of comprehensive building inventory information, such
as location, structural and material type, age, story number, occupancy type, floor area, value,
and local soil condition (Panel 3 in Fig. 2). The building inventory database is the key input
in seismic loss estimation, and determines overall accuracy of the assessment. Based on the
collected building information, a suitable structural model and analysis method need to be
selected by achieving a reasonable balance between accuracy and computational efficiency.
Damage severity assessment is then carried out by directly comparing sustained inelastic
seismic demand with ultimate structural capacity (Panel 4 in Fig. 2). For an individual struc-
ture, the assessed damage severity is described by the damage factor δ, which is defined
by:
δ = max{min[(D_D − D_Y)/(D_C − D_Y), 1], 0},    (11)

where D_D is the seismic demand variable, and D_Y and D_C are the yield and ultimate capac-
ity variables of a structure, respectively; these variables are expressed in terms of maximum
inter-story drift ratio and are treated as random variables. The damage factor δ corresponds
to “no seismic damage”, “partial seismic damage”, and “complete seismic damage” for
δ = 0, 0 < δ < 1, and δ = 1, respectively (note: δ is a continuous variable between 0
and 1).
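Equation (11) amounts to a clipped interpolation between the yield and ultimate capacities, as in the following sketch (Python; the drift values in the example are placeholders within the range of Table 1):

```python
import numpy as np

def damage_factor(drift_demand, drift_yield, drift_ultimate):
    """Damage factor delta of Eq. (11): 0 below the yield capacity, 1 beyond the ultimate capacity."""
    delta = (drift_demand - drift_yield) / (drift_ultimate - drift_yield)
    return np.clip(delta, 0.0, 1.0)

# Hypothetical maximum inter-story drift ratio demands for one house type
print(damage_factor(np.array([0.002, 0.02, 0.12]), drift_yield=0.006, drift_ultimate=0.10))
# approximately [0.0, 0.149, 1.0]
```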
In this study, probabilistic models of the maximum inter-story drift ratio of wood-frame
houses with different shear-wall types (Goda and Atkinson 2011) are adopted for assess-
ing DD. The models are based on extensive incremental dynamic analyses of the UBC-
SAWS models (White and Ventura 2006). The UBC-SAWS models are based on SAWS
(Seismic Analysis of Wood-frame Structures) by Folz and Filiatrault (2004) and calibrated
by researchers at University of British Columbia (UBC) using quasi-static and dynamic test
results for conventional wood-frame houses in Vancouver (Ventura et al. 2002;White and
Ventura 2006). There are four UBC-SAWS models with different shear-wall characteris-
tics: (i) House 1 has blocked plywood/oriented strand board shear-walls with exterior stucco
cladding and gypsum wallboard interior finish; (ii) House 2 has blocked plywood/oriented
strand board shear-walls with gypsum wallboard interior finish; (iii) House 3 has unblocked
plywood/oriented strand board shear-walls with gypsum wallboard interior finish; and (iv)
House 4 has horizontal boards with gypsum wallboard interior finish. These four models can
be assigned to individual houses by utilizing information on construction year; in this study, a house
is classified as House 1, House 2, House 3, and House 4, if the construction year is “after
1996”, “from 1986 to 1995”, “from 1976 to 1985”, and “before 1975”, respectively (Ventura
et al. 2005; note this classification is approximate, as discussed in Goda et al. (2011)).
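The construction-year rule quoted above maps directly onto a small helper (Python; the treatment of the boundary years, which are not fully specified by the brackets, is one possible reading of this approximate classification):

```python
def ubc_saws_model(construction_year):
    """Assign a UBC-SAWS house model (1-4) from the construction year (approximate rule)."""
    if construction_year >= 1996:
        return 1   # House 1: blocked shear-walls, stucco + gypsum
    if construction_year >= 1986:
        return 2   # House 2: blocked shear-walls, gypsum only
    if construction_year >= 1976:
        return 3   # House 3: unblocked shear-walls, gypsum only
    return 4       # House 4: horizontal boards, gypsum only

print([ubc_saws_model(y) for y in (2001, 1990, 1980, 1960)])   # -> [1, 2, 3, 4]
```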
The distinct features of the developed prediction models of the maximum inter-story
drift ratio include careful record selection and scaling based on the conditional mean spec-
trum (CMS) (Baker 2011) by considering complex seismic hazard contributions from dif-
ferent earthquake sources (i.e. crustal events, inslab events, and interface events). This is
termed the CMS-Event-based method in Goda and Atkinson (2011). Specifically, for the
CMS-Event-based method, 50 records were selected from a large pool of strong ground
motion records in the PEER-NGA and K-NET/KiK-NET databases by matching the geo-
metric mean response spectrum of two horizontal components with one of the three target
conditional mean spectra for crustal events, inslab events, and interface events (referred
to as the CMS-Crustal, CMS-Inslab, and CMS-Interface, respectively). The CMS-Event-
based method takes into account inter-period correlation of spectral acceleration ordinates
for different earthquake types individually; such consideration is of importance when record
scaling is involved (Luco and Bazzurro 2007; PEER GMSM Group 2009; Yin and Li 2010).
To distinguish broadly different typical site conditions in Vancouver, two record sets are
constructed: one is for soft soils with 100 m/s ≤ VS30 ≤ 360 m/s, and the other is for firm soils
with 250 m/s ≤ VS30 ≤ 700 m/s (note: an overlap of the VS30 range was necessary to construct
a relatively large record pool for each site condition). Response spectra of the selected 50
records based on the CMS-Event-based method together with three target conditional mean
spectra and the corresponding uniform hazard spectrum are illustrated in Fig. 5 for the soft and
firm soil conditions. It is observed that inslab events tend to have rich spectral content at short
vibration periods (particularly for the firm soil condition), in comparison with crustal and
interface events. To show the impact of different shear-wall types (i.e. different UBC-SAWS
models), incremental dynamic analysis curves for Houses 1–4 are compared in Fig. 6 for the
soft and firm soil conditions (the median curve is shown as a solid line, while the 0.16- and 0.84-
fractile curves are shown as broken lines). The seismic performance of the four UBC-SAWS
models can be ranked as House 1 > House 2 > House 3 > House 4. The results, as shown
in Fig. 6, are then used to develop the probability distribution of the maximum inter-story
drift ratio given spectral acceleration at 0.3 s for each local soil condition by interpolating the
empirical probability distribution function.
For the seismic capacity parameters D_Y and D_C, reasonable estimates in
terms of the maximum inter-story drift ratio can be found in FEMA and NIBS (2003) and Onur
et al. (2006). In this study, both D_Y and D_C are considered to be lognormally distributed.
It is noteworthy that there exists significant uncertainty in the estimation of the ultimate
capacity of wood-frame structures. In shake-table experiments, structures tested by White
and Ventura (2006) did not collapse after experiencing the maximum inter-story drift ratio of
0.08–0.09, while Isoda et al. (2007) reported that structural collapse generally occurs after
experiencing the maximum inter-story drift ratio of 0.15–0.20. Another relevant issue is that
severely-damaged but not-collapsed structures may be demolished in the rehabilitation pro-
cess, because of infeasible repair operations from technical and economic viewpoints. By
taking these situations into account, three cases of the mean yield and ultimate maximum
inter-story drift ratios for Houses 1 to 4 are considered to capture their epistemic uncertainty
and are listed in Table 1. The values of the mean capacities for the base case are consistent
with FEMA and NIBS (2003). The coefficients of variation for the yield and ultimate
maximum inter-story drift ratios are set to 0.15 and 0.3, respectively, which are within a
typical range of the parameters reported/considered in the literature (e.g. Ellingwood et al.
1980; Goulet et al. 2007; Black et al. 2010; Yin and Li 2010).
For loss estimation, δ is transformed to seismic loss quantities through adopted damage-
loss functions. In this study, damage costs are categorized into three types: repair/replacement
costs related to damaged building parts L_RR(δ) (RR type), costs for lost contents L_CO(δ)
(CO type), and costs related to business interruption L_BI(δ) (BI type). They are approximated
by power functions:

L_RR(δ) = C_RR δ^β_RR,  L_CO(δ) = C_CO δ^β_CO,  and  L_BI(δ) = C_BI δ^β_BI,    (12)

where C_RR, C_CO, and C_BI are the replacement costs, and β_RR, β_CO, and β_BI are the exponent
parameters for damage cost types RR, CO, and BI, respectively. To develop damage-loss
functions that are applicable to wood-frame houses in Vancouver, the use type defined by the
City of Richmond is related to the HAZUS-Earthquake occupancy type (FEMA and NIBS
2003; Ventura et al. 2005), and the exponent parameters of the damage-loss functions are
Fig. 5 Comparison of response spectra of 50 records based on the CMS-Event-based method: a soft soil with VS30 = 200 m/s and b firm soil with VS30 = 555 m/s
determined by least squares curve fitting. The parameters for four occupancy types that are
prevalent in the building inventory used by Goda et al. (2011) are summarized in Table 2.
The unit replacement costs are considered to be lognormally distributed with the coefficient
of variation of 0.3. These costs are estimated based on FEMA and NIBS (2003) and do not
necessarily reflect market/taxation values.
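With the single-family dwelling parameters of Table 2, the power-law damage-loss functions of Eq. (12) can be evaluated as in the sketch below (Python; mean replacement costs only, without the lognormal uncertainty on the unit costs):

```python
import numpy as np

# Mean single-family dwelling parameters from Table 2 (CAD/ft2 and exponents)
C_RR, C_CO, C_BI = 87.2, 21.8, 19.8
B_RR, B_CO, B_BI = 0.754, 0.681, 0.568

def seismic_loss_per_house(delta, floor_area_ft2):
    """Per-house loss, Eq. (12): repair/replacement + contents + business interruption."""
    unit_loss = C_RR * delta**B_RR + C_CO * delta**B_CO + C_BI * delta**B_BI   # CAD per ft2
    return unit_loss * floor_area_ft2

# Example: damage factors of 0.1 and 0.5 for a hypothetical 2000 ft2 house
print(seismic_loss_per_house(np.array([0.1, 0.5]), 2000.0))
```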
Finally, seismic loss estimation can be conducted based on Monte Carlo simulation to
obtain samples of the aggregate seismic loss L_EQ(t, γ_D) during a period of t years for all
buildings and for all seismic events included in the synthetic earthquake catalog. Specifi-
cally, L_EQ(t, γ_D) is calculated by:
Fig. 6 Comparison of incremental dynamic analysis curves for different house models: a soft soil with VS30 = 200 m/s and b firm soil with VS30 = 555 m/s
L_EQ(t, γ_D) = Σ_{k=1}^{n(t)} Σ_{i=1}^{m} L_EQ(i)(δ_ik) exp(−γ_D τ_k)
             = Σ_{k=1}^{n(t)} Σ_{i=1}^{m} [L_RR(δ_ik) + L_CO(δ_ik) + L_BI(δ_ik)] exp(−γ_D τ_k),    (13)
where n(t) is the number of earthquakes that occur during a period of t years, m is the number
of houses, γ_D is the discount rate, and τ_k is the occurrence time of the k-th earthquake in
the catalog. For simplicity, seismic damage is considered to be repaired immediately after
a seismic event. In cases where annual aggregate seismic loss is of interest, the effects of
discounting may be ignored (i.e. γ_D = 0). The generated seismic loss samples can be used
Table 1 Mean yield and ultimate maximum inter-story drift capacity ratios

UBC-SAWS model | Yield capacity: drift ratio (weight) | Ultimate capacity: drift ratio (weight)
House 1 | [0.004 (0.25), 0.006 (0.5), 0.008 (0.25)] | [0.075 (0.25), 0.1 (0.5), 0.125 (0.25)]
House 2 | [0.004 (0.5), 0.006 (0.5)] | [0.05 (0.25), 0.075 (0.5), 0.1 (0.25)]
House 3 | [0.004 (0.5), 0.006 (0.5)] | [0.05 (0.25), 0.075 (0.5), 0.1 (0.25)]
House 4 | [0.002 (0.5), 0.004 (0.5)] | [0.05 (0.5), 0.075 (0.5)]
to construct the probability distribution of L_EQ(t, γ_D) (i.e. seismic loss curve) and to identify
significant scenario events through deaggregation analysis (Panel 5 in Fig. 2). The seismic
loss samples from Eq. (13) can be used to obtain those incurred by an insurer by applying an
insurance policy as in Eq. (6).
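The double sum of Eq. (13) reduces to a few lines once the per-event, per-house losses are available from the vulnerability and damage-loss steps above (Python sketch; the event times and loss values in the example are placeholders):

```python
import numpy as np

def aggregate_discounted_loss(event_times_yr, per_event_house_losses, discount_rate=0.0):
    """Aggregate portfolio loss over a synthetic catalog, Eq. (13).

    event_times_yr:         occurrence times tau_k of the n(t) events (years).
    per_event_house_losses: array of shape (n_events, m) holding L_EQ(i)(delta_ik).
    """
    tau = np.asarray(event_times_yr, dtype=float)
    loss_per_event = np.asarray(per_event_house_losses, dtype=float).sum(axis=1)  # sum over m houses
    return np.sum(loss_per_event * np.exp(-discount_rate * tau))                  # discount to t = 0

# Two events, three houses, 5% discount rate (all values hypothetical)
print(aggregate_discounted_loss([12.3, 40.7], [[1e4, 0.0, 2e3], [5e4, 8e3, 0.0]], 0.05))
```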
3 Earthquake insurance portfolio analysis of wood-frame houses in south-western
British Columbia, Canada
3.1 Regional portfolio of wood-frame houses
To investigate the insurer’s insolvency potential due to both non-catastrophic and catastrophic
risks using the developed stochastic net worth model, a hypothetical but realistic portfolio
of conventional wood-frame houses in south-western British Columbia is constructed. The
portfolio is based on an existing building inventory of 1415 wood-frame houses located in the
City of Richmond, which was used by Goda et al. (2011). The occupancy is mostly residential
for single-family dwelling, and the original inventory database does not include large-size
wood-frame structures with square footage >3500 ft2. To create a new regional portfolio, a
rectangular region of 16 km (northing) by 20 km (easting) is defined; this region is divided
into 20 sub-regions of 4 km by 4 km. The geographical locations of the entire region and
sub-regions are shown in Fig. 7. For the purpose of investigation, 40 groups, each consisting
of 100 houses randomly and uniformly distributed within a sub-region, are defined. In total,
there are 4000 houses in the entire region. Two groups are assigned to each sub-region (i.e.
there are 200 houses in each sub-region); indices for the 40 groups are shown in Fig. 7.
The structural characteristics of individual houses (i.e. building area, height, occupancy, and
construction year) are randomly selected from those of the 1415 existing houses in Richmond
(note: information on construction year, which qualitatively reflects seismic performance of
wood-frame houses, is employed to assign one of the four UBC-SAWS models to an indi-
vidual house). Therefore, the generated portfolio of the 4000 houses resembles regional
characteristics of wood-frame houses in Richmond.
It is noteworthy that local soil conditions affect seismic hazard and risk assessment
significantly (see Figs. 3, 6, and 9). To assign reasonable VS30 values to individual house loca-
tions, a regional database of estimated VS30 values from various geophysical measurements
(e.g. borehole and seismic refraction tests) in south-western British Columbia, developed
by Hunter et al. (1998), is adopted. Furthermore, Cassidy and Rogers (2004) reported that
near-surface soil sediments in the City of Vancouver are Pleistocene, having VS30 =500 m/s
with significant variability. From the collected and assumed soil information, a contour map
of VS30 is constructed and shown in Fig. 7. The map displays that the local soil condition
in the City of Richmond (southern part of the target area) is close to the NEHRP site class
Table 2 Model parameters of damage-loss functions for different occupancy types

Occupancy | Replacement cost (CAD/ft2): [C_RR, C_CO, C_BI] | Exponent parameter: [β_RR, β_CO, β_BI]
Single-family dwelling | [87.2, 21.8, 19.8] | [0.754, 0.681, 0.568]
Multi-family dwelling | [111.0, 27.8, 26.2] | [0.810, 0.681, 0.617]
Store/commercial service | [47.6, 26.4, 23.8] | [0.808, 0.681, 0.426]
Restaurant | [149.4, 74.7, 228.1] | [0.851, 0.681, 0.541]
Fig. 7 Regional view of the target area in south-western British Columbia together with VS30 contour map
(note: geographical coordinates are based on the Universal Transverse Mercator system)
D/E boundary, with VS30 ranging from 150 to 250 m/s, whereas that in the City of Vancouver
(northern part of the target area) is close to the NEHRP site class C, with VS30 ranging from
400 to 600 m/s. Based on the contour map, an average VS30 value of 200 m/s is assigned to
the lower two rows of the target area; an average VS30 value of 400 m/s is assigned to the
upper middle row of the target area; and an average VS30 value of 500 m/s is assigned to the
upper row of the target area. In this study, VS30 is modeled as a lognormal variate, and for all
locations, a coefficient of variation of 0.075 is used.
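The site-condition assignment can be reproduced as follows (Python sketch; treating the assigned row-average VS30 as the mean of the lognormal distribution is an assumption of this illustration):

```python
import numpy as np

def sample_vs30(mean_vs30, cov=0.075, n_sites=100, seed=3):
    """Sample VS30 (m/s) as a lognormal variate with a given mean and coefficient of variation."""
    rng = np.random.default_rng(seed)
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))         # lognormal shape parameter
    mu_ln = np.log(mean_vs30) - 0.5 * sigma_ln**2    # sets the arithmetic mean to mean_vs30
    return rng.lognormal(mu_ln, sigma_ln, size=n_sites)

# Row-average values used for the target area (see the Fig. 7 discussion)
for row_vs30 in (200.0, 400.0, 500.0):
    samples = sample_vs30(row_vs30)
    print(row_vs30, round(samples.mean(), 1), round(samples.std(), 1))
```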
3.2 Seismic loss estimation of wood-frame houses
Quantitative seismic loss estimation of 4000 wood-frame houses is conducted. The simu-
lation duration is set to 1 million years; thus 10^6 samples of annual aggregate seismic loss
are available for seismic loss calculations and can be used to construct annual seismic loss
curves for the entire building inventory and for sub-region groups.
First, overall characteristics of aggregate seismic loss for the entire inventory are investi-
gated by considering three spatial correlation models of seismic effects at different locations:
no correlation, partial correlation, and full correlation. The partial correlation is the most
realistic case, and is based on the GH08 model for shallow crustal earthquakes and the
GA09 model for inslab and interface subduction earthquakes (Sect. 2.2.2). The results for
the three cases are compared in Fig. 8, and show that the differences among the three curves, which
intersect one another, become significant as the annual probability of exceedance decreases
Fig. 8 Impact of using different
spatial correlation models on
seismic loss estimation (for each
curve, 4000 houses are
considered)
(or the return period increases). This clearly illustrates the importance of adopting an ade-
quate spatial correlation model for seismic loss estimation. It is worth mentioning that the
mean annual seismic loss (about 0.89 million CAD) is the same for the three cases. The heavy
right tail of the aggregate seismic loss distribution for the full correlation case is counter-
acted by a lower frequency of seismic loss occurrence (note: occurrence rates of seismic loss are
0.1546, 0.1183, and 0.0582 for the no, partial, and full correlation cases, respectively). This
fact should be kept in mind, especially if an incorrect spatial correlation model is employed
for seismic loss estimation, as the upper tail of the probability distribution is significantly
affected. In such cases, decision-making by risk analysts and managers may be misled by
incorrect perception/appreciation of the assessment results (note: it is well-known that the
tail of a probability distribution of a decision variable can induce risk-averse or risk-taking
behavior). The above remarks/observations are also applicable to insurer’s earthquake risk
exposure.
Next, the impact of local soil conditions is evaluated by comparing the results for 1000 houses
that are located in different site conditions of VS30 =200, 400, and 500 m/s (i.e. the lower
middle row, upper middle row, and upper row of the target area; see Fig. 7). The seismic loss
curves for the three local soil conditions are compared in Fig. 9. The seismic loss for soft
soil sites is greater by a factor of two (note: the difference between VS30 = 400 and 500 m/s
is not significant). The significant difference of the estimated seismic loss is caused by the
combined effects of different seismic hazard levels (Fig. 3) and different record character-
istics in terms of incremental dynamic analysis curves (Fig. 6). Therefore, it is of utmost
importance to obtain accurate local soil information at the property sites for earthquake risk
mitigation and earthquake insurance analysis.
Another important aspect in earthquake risk management is seismic loss dependence for
multiple building inventories/portfolios (Goda and Ren 2010). The correlation of the seismic
effects decreases with separation distance. Thus it is sensible to combine portfolios that are
geographically remote to diversify earthquake insurance risk. To demonstrate this, seismic
loss curves of two groups (each with 100 houses on soft soils) that have different average
separation distances, ranging from 0 to 16 km, are compared in Fig. 10 (e.g. a combined
portfolio of groups 3 and 4 corresponds to a case with 0 km, while a combined portfolio of
Fig. 9 Impact of local soil
conditions on seismic loss
estimation for the partial
correlation case (for each curve,
1000 houses are considered)
groups 3 and 11 corresponds to a case with 4 km; see Fig. 7). Figure 10 shows that with the
increase of separation distance, fractile values of the combined seismic loss of 200 houses
tend to decrease (note: average seismic loss exposure is the same for the five portfolios, but
right tail behavior is different). It is noted that the spatial extent for each group (i.e. 100
houses over a 4 km by 4 km area) is relatively large in terms of critical separation distance of
the spatial correlation models; for instance, the results shown in Fig. 4indicate that spatial
correlation decays rapidly if the separation distance is greater than a few kilometers. There-
fore, the impact of building portfolio integration/aggregation in terms of spatial correlation
can be greater than that shown in Fig. 10. To visually inspect tail dependence of seismic loss
samples from two groups, scatter plots of seismic loss samples for groups 3 and 4 (0 km sep-
aration) and groups 3 and 35 (16 km separation) are presented in Fig. 11. The results clearly
show different characteristics of seismic loss dependence as a function of separation distance;
for the separation distance of 0 km (Fig. 11a), a large loss in one group is highly likely to
be associated with a large loss in the other group, while for the separation distance of 16 km
(Fig. 11b), seismic loss dependence, particularly in the upper tail, is reduced significantly;
hence, chance of simultaneous exceedance of very high fractile values is not high for the
latter case. The upper tail dependence of aggregate seismic loss is important for catastrophic
insurance and reinsurance portfolio management.
3.3 Earthquake insurance portfolio analysis
3.3.1 Impact of earthquake insurance arrangements on insurer’s risk exposure
The insurance arrangement has a direct influence on the insurer’s earthquake risk exposure. How-
ever, the arrangement is not usually under the control of an insurer. Rather, a regulatory body
determines the types of insurance products that can be sold to home and business owners. In this
section, several insurance arrangements are applied to investigate their effects on the insurer’s
earthquake risk exposure curves (note: input seismic loss samples are obtained from the
above-mentioned earthquake-engineering-based model with partial correlation for the 4000
houses). More specifically, eight arrangements (including the full insurance case) are considered
Fig. 10 Impact of average
separation distances of two
portfolios on seismic loss
estimation for the partial
correlation case (for each curve,
200 houses are considered)
Table 3 Statistics of insurer’s earthquake risk exposure for 4000 houses

Insurance arrangement [D, C, γ] | Annual claim rate | Mean (1000 CAD) | Standard deviation (1000 CAD) | Mean + 0.5 × standard deviation^a (1000 CAD)
[0.0, 1.0, 1.0]^b | 0.1180 | 894.7 | 10303.3 | 6046.4
[0.1, 0.5, 1.0]^c | 0.0737 | 319.8 | 4813.6 | 2726.6
[0.2, 0.5, 1.0] | 0.0450 | 139.6 | 2706.0 | 1492.6
[0.3, 0.5, 1.0] | 0.0291 | 63.7 | 1451.4 | 789.4
[0.1, 0.75, 1.0] | 0.0737 | 359.1 | 5867.6 | 3292.9
[0.1, 1.0, 1.0] | 0.0737 | 388.0 | 6718.2 | 3747.1
[0.1, 0.5, 0.5] | 0.0737 | 159.9 | 2406.8 | 1363.3
[0.1, 0.5, 0.75] | 0.0737 | 239.8 | 3610.2 | 2045.0

^a The mean + 0.5 × (standard deviation) can be considered as a benchmark earthquake insurance premium, including risk premium (Kuzak and Larsen 2005)
^b This corresponds to full insurance and thus the whole earthquake loss is incurred by an insurer
^c This is the base case
by varying values of deductible D, cap C, and coinsurance factor γ. The results, including
the calculated statistics of the insurer’s earthquake risk exposure for different arrangements,
are summarized in Table 3. The mean plus a half standard deviation, listed in Table 3, can
be regarded as a benchmark premium for earthquake loss coverage in California (Kuzak
and Larsen 2005). The base case is set to [D, C, γ] = [0.1, 0.5, 1.0]. No distinction among
different seismic loss types (structural parts, non-structural parts, content, and business inter-
ruption) is taken into account. With this setup, the cap of 0.5 is deemed to be reasonable
for a practical earthquake insurance arrangement, because some exemption clauses usually
exist to limit insurer’s liability. Generally speaking, the deductible changes both annual claim
rate and loss distribution, whereas the cap and coinsurance factor affect the loss distribution
exceeding the deductible only.
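Given simulated annual insurer losses under a particular arrangement (i.e. Eq. (6) applied to the portfolio loss samples), the statistics reported in Table 3 follow directly, as in this sketch (Python; the loss samples below are randomly generated placeholders, not the actual simulation output):

```python
import numpy as np

def exposure_statistics(annual_insurer_losses):
    """Mean, standard deviation, and mean + 0.5*std (benchmark premium) of the insurer's exposure."""
    mean, std = annual_insurer_losses.mean(), annual_insurer_losses.std()
    return {"mean": mean, "std": std, "benchmark_premium": mean + 0.5 * std}

# Placeholder annual insurer losses (1000 CAD) with a low claim rate and a heavy right tail;
# in the study these samples come from applying Eq. (6) to the simulated losses of the 4000 houses.
rng = np.random.default_rng(4)
claims = rng.binomial(1, 0.074, size=1_000_000) * rng.lognormal(4.0, 2.0, size=1_000_000)
print(exposure_statistics(claims))
```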
Fig. 11 Scatter plot of seismic
loss samples from two groups for
the partial correlation case: a
0 km separation distance and b
16 km separation distance
To inspect the impact of insurance arrangements on the insurer’s earthquake risk exposure,
annual seismic loss curves for different arrangements are compared in Fig. 12. The results
show that a high deductible and a low coinsurance factor reduce earthquake risk exposure
significantly over a wide range of probability levels (Fig. 12a, c), whereas a low cap limits the maximum
earthquake risk exposure effectively (Fig. 12b). The tapering of the heavy right tail of the loss
distribution is achieved by the cap. Moreover, the cap affects the deficit at ruin, which may
be one of the key decision variables in earthquake insurance management. The preceding
results indicate that from the insurer’s viewpoint, flexibility to select a reasonable arrangement
can be beneficial to enhance the performance of earthquake portfolio management and to
offer affordable insurance rates to their clients.
Another important aspect that should not be overlooked is the differentiation of houses
according to local soil condition in evaluating earthquake risk exposure. This is evident from
Fig. 12 Impact of earthquake insurance arrangements: a deductible D, b cap C, and c coinsurance factor γ
Table 4 Statistics of insurer’s earthquake risk exposure for 2000 houses on soft soils

Insurance arrangement [D, C, γ] | Annual claim rate | Mean (1000 CAD) | Standard deviation (1000 CAD) | Mean + 0.5 × standard deviation^a (1000 CAD)
[0.0, 1.0, 1.0]^b | 0.117 | 651.9 | 6986.3 | 4145.0
[0.1, 0.5, 1.0]^c | 0.0728 | 238.9 | 3296.0 | 1886.9
[0.2, 0.5, 1.0] | 0.0443 | 107.4 | 1930.7 | 1072.7
[0.3, 0.5, 1.0] | 0.0288 | 50.2 | 1071.0 | 585.7
[0.1, 0.75, 1.0] | 0.0728 | 271.1 | 4119.9 | 2331.0
[0.1, 1.0, 1.0] | 0.0728 | 295.0 | 4802.1 | 2696.0
[0.1, 0.5, 0.5] | 0.0728 | 119.4 | 1648.0 | 943.4
[0.1, 0.5, 0.75] | 0.0728 | 179.1 | 2472.0 | 1415.1

^a The mean + 0.5 × (standard deviation) can be considered as a benchmark earthquake insurance premium, including risk premium (Kuzak and Larsen 2005)
^b This corresponds to full insurance and thus the whole earthquake loss is incurred by an insurer
^c This is the base case
Fig. 9. To assess the impact of local soil conditions on the insurer’s earthquake risk exposure, the
same statistics as listed in Table 3 are summarized in Tables 4 and 5 for 2000 houses on soft soils
and 2000 houses on firm soils (see Fig. 9), respectively. Comparison of the results shown in
Tables 4 and 5 indicates that for the same total insured sum, the annual claim occurrence rate
for firm soil sites is about half of that for soft soil sites, while the mean earthquake risk
exposure for the former is about one third of that for the latter. The benchmark insurance
premium for firm soil sites is about half of that for soft soil sites. Such differentiation is
critically important from the insured’s viewpoint (as clients with low risk have to pay dispro-
portionate rates to partly compensate for those with high risk), and could induce undesirable
consequences to the entire earthquake insurance system, such as adverse selection and moral
hazard (Grace et al. 2003). This means that when insurance rates are determined for stake-
holders with different earthquake risk exposure characteristics, local soil conditions and other
key factors (e.g. construction year and structural type) need to be taken into consideration
(note: detailed insurance rate-making is outside the scope of this study).
3.3.2 Impact of non-catastrophic and catastrophic risk characteristics on insurer’s
insolvency and business operability
In reality, insurers deal simultaneously with both non-catastrophic risk, associated with ordinary insurance
and investment, and catastrophic earthquake risk. They balance the proportions of
non-catastrophic and catastrophic risks in the portfolio to meet various regulatory requirements,
which are often related to potential insolvency and operational instability.
In this section, the impact of the insurer's portfolio characteristics on the probabilities of
ruin and operational business restriction is investigated by considering the net worth model
given by Eq. (7).
For setting up realistic numerical examples, the following parameter values are considered:
(i) the initial surplus of an insurer W0 is 10, 25, 50, 75, or 100 million CAD; (ii) the instantaneous
growth rate α is 0.03, 0.06, or 0.09; (iii) the volatility β is 0.1, 0.2, 0.3, or 0.5; and (iv) the safety
loading factor θ is 0.0, 1.0, 2.0, 3.0, or 5.0.
Table 5 Statistics of insurer's earthquake risk exposure for 2000 houses on firm soils

Insurance arrangement [D, C, γ] | Annual claim rate | Mean (1000 CAD) | Standard deviation (1000 CAD) | Mean + 0.5 × standard deviation^a (1000 CAD)
[0.0, 1.0, 1.0]^b  | 0.0734 | 242.8 | 3672.3 | 2079.0
[0.1, 0.5, 1.0]^c  | 0.0378 |  80.9 | 1725.1 |  943.5
[0.2, 0.5, 1.0]    | 0.0190 |  32.2 |  915.6 |  490.0
[0.3, 0.5, 1.0]    | 0.0105 |  13.5 |  464.6 |  245.8
[0.1, 0.75, 1.0]   | 0.0378 |  88.0 | 2011.2 | 1093.6
[0.1, 1.0, 1.0]    | 0.0378 |  93.0 | 2229.3 | 1207.6
[0.1, 0.5, 0.5]    | 0.0378 |  40.5 |  862.6 |  471.7
[0.1, 0.5, 0.75]   | 0.0378 |  60.7 | 1293.8 |  707.6

^a The mean + 0.5 × (standard deviation) can be considered as a benchmark earthquake insurance premium, including a risk premium (Kuzak and Larsen 2005)
^b This corresponds to full insurance; thus the whole earthquake loss is incurred by the insurer
^c This is the base case
The adopted values of W0 are within a typical range for undertaking the earthquake risk exposure
of the 4000 houses, as the catastrophic exposure ratio, which is defined as the ratio of the expected
catastrophic risk to the insurer's surplus, is less than 10%. For the considered cases, the catastrophic
exposure ratio ranges from 0.06 to 3.0%. The assumed values of α and β are representative of
property-liability insurers in the United States and Canada, as given by Ren (2005) and Aon Benfield
(2009). The range for θ is selected somewhat arbitrarily; however, it is deemed reasonable for the
purpose of this study. In the following analyses, the insurance arrangement is set to the base case
[D, C, γ] = [0.1, 0.5, 1.0], and the earthquake insurance pure premium (i.e. λμ in Eq. (7))
is determined directly from the insurer's earthquake loss samples.
For evaluating the overall performance of the insurer's portfolio, two criteria are considered:
(i) the probability of ruin (i.e. the chance of W(t) dropping below zero), and (ii) the probability of
operational restriction (i.e. the chance of W(t) dropping below 0.67W0). The probabilities of
ruin and operational restriction are assessed by Monte Carlo simulation: a diffusion process of
non-catastrophic risk is generated from the geometric Brownian motion model (Eq. 3), and earthquake
losses are sampled randomly from the simulated insurer's seismic loss. The time horizon for the
simulation is set to one year (note: some insurers may be interested in a longer time horizon,
such as 3–10 years), and the number of simulation cycles is set to one million. The assessments
are conducted for all combinations of W0, α, β, and θ (i.e. 300 cases). The analysis results
are then used to investigate the relationships between the ruin/operational-restriction probabilities and
each portfolio parameter (i.e. W0, α, β, or θ). In addition, the impact of the spatial correlation models
of seismic effects on the probabilities of ruin and operational restriction is examined in the
following, as it is expected to show noticeable influence.
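A minimal sketch of the Monte Carlo procedure just described is given below. It assumes that the one-year net worth combines a geometric Brownian motion for the non-catastrophic business, the earthquake premium with safety loading, and an annual earthquake loss resampled from precomputed loss samples; the expression is an illustrative stand-in for Eq. (7), which is not reproduced here, the loss samples are hypothetical placeholders for the seismic loss model output, and the surplus is checked only at the end of the horizon rather than along the whole path (a first-passage check of W(t) would require simulating the surplus on a time grid).

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_and_restriction_prob(W0, alpha, beta, theta, eq_losses,
                              n_sims=1_000_000, horizon=1.0):
    """Monte Carlo estimate of P(ruin) = P(W < 0) and of the probability of
    operational restriction P(W < 0.67 * W0) at the end of the horizon.

    eq_losses is an array of simulated annual earthquake losses for the
    insured portfolio (placeholder for the seismic loss model output).
    The net-worth expression below is an illustrative stand-in for Eq. (7);
    the surplus is checked only at the horizon, not along the whole path.
    """
    # Non-catastrophic business and investment: geometric Brownian motion.
    z = rng.standard_normal(n_sims)
    ordinary = W0 * np.exp((alpha - 0.5 * beta ** 2) * horizon
                           + beta * np.sqrt(horizon) * z)
    # Earthquake premium: pure premium (lambda * mu) times (1 + theta).
    premium = (1.0 + theta) * eq_losses.mean() * horizon
    # Catastrophic jump: resample one annual earthquake loss per simulation.
    quake = rng.choice(eq_losses, size=n_sims)
    W = ordinary + premium - quake
    return (W < 0.0).mean(), (W < 0.67 * W0).mean()


# Hypothetical heavy-tailed annual loss samples (million CAD) standing in for
# the insurer's simulated seismic losses under the base-case arrangement.
eq_losses = 0.3 * rng.pareto(a=1.8, size=200_000)
p_ruin, p_restr = ruin_and_restriction_prob(W0=50.0, alpha=0.06, beta=0.2,
                                            theta=3.0, eq_losses=eq_losses)
print(f"P(ruin) = {p_ruin:.4f}, P(operational restriction) = {p_restr:.4f}")
```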
Fig. 13 Probability of ruin for two cases of insurance portfolio parameters by considering three correlation models: (a) variation of W0, (b) variation of α, (c) variation of β, and (d) variation of θ

Firstly, the relationships between the insurer's ruin probability and the portfolio parameters W0, α, β,
and θ are investigated. To facilitate comparison and visual inspection, two specific cases are
considered: Case 1: [W0, α, β, θ] = [50 million CAD, 0.06, 0.2, 3.0], and Case 2: [W0, α, β, θ]
= [10 million CAD, 0.06, 0.1, 1.0], and only one of the four parameters is varied at a time.
The obtained results for W0, α, β, and θ are shown in Fig. 13: for each parameter, results
for Case 1 and Case 2, as well as results for the three correlation models, are shown in the figure.
Based on Fig. 13, the following observations can be made:
– The ruin probability decreases with the increase in W0 (Fig. 13a). The decrease in ruin
probability for W0 around 10–30 million CAD is significant, whereas the decrease
becomes more gradual as W0 increases. This is due to the heavy right tail of the loss
distribution. It is noteworthy that the ruin probability depends on the spatial correlation model.
For smaller initial asset values, the relatively frequent occurrence of moderate seismic loss
events, caused by weaker spatial correlation, leads to a greater ruin probability. By contrast,
for larger initial asset values, ruin is triggered mainly by extreme seismic loss events, which
arise more frequently under stronger spatial correlation. Therefore, the relative risk
order of the three correlation models depends on W0.
– The impact of α on ruin probability is insignificant, as evidenced by the flat curves in terms
of α (Fig. 13b). The impact of the spatial correlation models on ruin probability can be
seen from the figure (i.e. the relative position of the three curves differs between Case 1 and
Case 2).
– The impact of β on ruin probability is minor; a larger value of β results in a greater
ruin probability (Fig. 13c). Greater variability of non-catastrophic risk tends to increase the
chance of insolvency, as the insurer's surplus fluctuates more widely, and when the surplus
becomes lower, the occurrence of relatively frequent moderate-size earthquake loss
events is sufficient to cause a negative surplus.
– The impact of θ on ruin probability is insignificant, as indicated by the flat curves in terms of
θ (Fig. 13d). This result has an important implication: charging a high risk premium
may not be justified on the basis of reducing ruin probability.
Fig. 14 Probability of operational restriction for two cases of insurance portfolio parameters by considering three correlation models: (a) variation of W0, (b) variation of α, (c) variation of β, and (d) variation of θ

Secondly, the relationships between the probability of operational restriction and the portfolio
parameters W0, α, β, and θ are examined. For comparison, the same two cases mentioned above are
considered. The obtained results for W0, α, β, and θ are shown in Fig. 14. Based on Fig. 14,
the following observations are drawn:
– The relationships between the probability of operational restriction and W0 reveal interesting
trends (Fig. 14a): the probability of operational restriction for Case 1 increases with W0,
which is counter-intuitive, while that for Case 2 decreases with W0, as expected.
For Case 1 (with higher initial surplus and higher volatility), non-catastrophic risk is
the primary cause of operational instability, whereas for Case 2 (with lower initial surplus
and lower volatility), the effects of catastrophic earthquake risk dominate over
non-catastrophic risk. Several factors lead to these seemingly contradictory
results. Because non-catastrophic risk is modeled as a geometric Brownian motion, the
probability of the insurer's surplus at a future time t falling below a certain fraction of the current
surplus does not depend on the initial surplus. In other words, without catastrophic earthquake
risk in the overall portfolio, the probability of operational restriction is the same for
all W0 values. In the present study, a catastrophic earthquake risk process is incorporated
into the net worth process. The earthquake insurance premium adds a constant sum to
the initial surplus; thus the margin to the failure boundary (in this case, 0.67W0) differs
between Case 1 and Case 2. If the collected premium is 1.5 million CAD, the margin to the
failure boundary, measured as the ratio of the surplus at time zero to the boundary, is about
1.53 (= (50+1.5)/33.5) for Case 1 and 1.72 (= (10+1.5)/6.7) for Case 2; thus the distance
to the instability boundary is greater for the latter than for the former. Moreover, earthquake
loss pay-outs large enough to trigger operational instability occur more frequently for
Case 2 than for Case 1. Interestingly, the results shown in Fig. 14a can be interpreted
as follows: for certain combinations of portfolio parameters, there is a possibility that
an insurer can reduce its probability of operational instability by taking a more risky
financial position.
– The impact of α for Case 1, for which non-catastrophic risk dominates, can be significant:
increasing α appreciably reduces the probability of operational instability
(Fig. 14b). The results for Case 2 indicate that the impact of α is insignificant for
situations where catastrophic earthquake risk is more dominant.
– The value of β has a significant impact on the probability of operational restriction; for both
Case 1 and Case 2, the probability of operational instability increases rapidly as β becomes
greater than 0.3 (Fig. 14c). Thus it is of critical importance to maintain the volatility of a portfolio at a
reasonably low level (e.g. below 0.2).
– The value of θ has a moderate impact on the probability of operational restriction (Fig. 14d).
For Case 1, where non-catastrophic risk dominates over catastrophic earthquake risk,
charging a higher risk premium improves the insurer's operational stability. However, it
should be noted that this is not the case for the insurer's insolvency, for which only a slight
improvement is achieved by increasing the safety loading factor (Fig. 13d).
The results shown above have several important implications for earthquake insurance
portfolio management. The insolvency requirement depends significantly on the initial
asset of an insurer, W0. Therefore, in practice, W0 needs to be maintained at a reasonable level,
for instance by placing a maximum allowable limit on the catastrophic exposure ratio. For the
considered cases, W0 of 50 million CAD suppresses the ruin probability below 0.002 (i.e. a return
period of 500 years, which is a popular benchmark probability level for natural disaster
management). Quantitative risk management based on multiple criteria requires a good balance
between non-catastrophic risk and catastrophic earthquake risk. For operational instability,
the effects of the growth rate α and the volatility β can be significant, even though these parameters
were found to have only minor effects on the insurer's ruin probability. Manipulation of the safety
loading factor θ is not necessarily effective in curtailing the possibility of insolvency, because the
loss distribution has a heavy right tail and surcharging the insurance premium is not sufficient to
cover potentially large future earthquake losses. Alternative risk transfer mechanisms,
such as reinsurance and catastrophe bonds, may need to be considered.
4 Summary and conclusions
This study developed a stochastic net worth model of an insurer who undertakes both ordinary
risk and catastrophic earthquake risk. The model consists of a diffusion process, approximated
by a geometric Brownian motion model, and a catastrophic jump process, characterized
by an earthquake-engineering-based seismic loss model. The model is implemented using
Monte Carlo simulation; thus it offers a rigorous and versatile framework to address
key uncertainty and dependence in describing a complex earthquake insurance loss process.
A distinct feature of the developed model is that many physical characteristics of strong
ground motions and of seismic effects on buildings and infrastructure are incorporated
adequately. The limitations of the developed model include the use of a simple geometric Brownian
motion model for the diffusion process, and the simplifications and assumptions adopted in
the seismic loss model (e.g. structural models and damage assessment procedures). Subsequently,
the developed model was applied to hypothetical 4000 conventional wood-frame
houses in south-western British Columbia, Canada.
The seismic loss estimation results for the 4000 houses indicate that: (i) an adequate spatial
correlation model and local soil conditions have a significant impact on the probability distribution
of the aggregate seismic loss, and (ii) seismic loss dependence in the right tail can be of
critical importance when two building portfolios are geographically close. These
observations have important practical implications for earthquake insurance portfolio
management, such as premium calculation, rate-making, reinsurance, and risk transfer.
The effects of insurance arrangements (e.g. deductible, cap, and coinsurance factor) on the
insurer's earthquake risk exposure were then investigated. The results showed that a high
deductible and a low coinsurance factor curtail the incurred seismic loss pay-out effectively over
a wide range of probability levels, whereas a low cap greatly reduces the maximum earthquake risk
exposure, tapering the heavy right tail of the loss distribution. With some flexibility
in determining a suitable insurance arrangement, insurers can optimize their portfolios
more efficiently and offer more affordable insurance coverage to their clients.
Finally, extensive sensitivity analyses were conducted to investigate the impact of the initial
surplus, the growth rate and volatility of non-catastrophic risk, and the safety loading factor
for the earthquake insurance premium on the probabilities of ruin and operational instability. The
analysis results indicate that the initial surplus has a significant impact on the insurer's insolvency
potential, while the other three parameters have only relatively minor effects. By
contrast, from a business operability perspective, the growth rate and safety loading factor
have a moderate influence and the volatility, in particular, has a significant impact. Under certain
circumstances, risk-taking behavior might even be an attractive option from an operational viewpoint.
Therefore, risk managers must weigh the full implications of their decisions by striking
an adequate balance between business stability under normal conditions and solvency under
extreme conditions. To make informed and sound decisions regarding earthquake insurance
portfolio management, the developed insurance model can serve as a useful decision-support
tool.
Acknowledgments The authors are grateful to J. Ren for his generous guidance on stochastic modeling of
insurer’s net worth.
References
Adams J, Halchuk S (2003) Fourth generation seismic hazard maps of Canada: Values for over 650 Canadian
localities intended for the 2005 National Building Code of Canada (Open File 4459). Geological Survey
of Canada, Ottawa, Canada
Aoi S, Kunugi T, Fujiwara H (2004) Strong-motion Seismograph network operated by NIED: K-NET and
KiK-net. J Jpn Assoc Earthq Eng 4:65–74
Aon Benfield (2009) Insurance risk study: modeling the global market. Aon Benfield, Chicago
Atkinson GM, Goda K (2011) Effects of seismicity models and new ground motion prediction equations on
seismic hazard assessment for four Canadian cities. Bull Seismol Soc Am 101:176–189
Baker JW, Cornell CA (2006) Correlation of response spectral values for multicomponent ground motions.
Bull Seismol Soc Am 96:215–227
Baker JW (2011) The conditional mean spectrum: a tool for ground motion selection. J Struct Eng 137:
322–331
Bazzurro P, Park J (2007) The effects of portfolio manipulation on earthquake portfolio loss estimates. In:
Proceedings of the 10th international conference on applications of statistics and probability in civil
engineering, Tokyo, Japan
Black G, Davidson RA, Pei S, van de Lindt JW (2010) Empirical loss analysis to support definition of seismic
performance objectives for woodframe buildings. Struct Saf 32:209–219
Cassidy JF, Rogers GC (2004) Variation in ground shaking on the Fraser River Delta (Greater Vancouver, Canada) from analysis of moderate earthquakes. In: Proceedings of the 13th world conference on earthquake engineering, Vancouver, Canada, Paper 1010
Chang SE, Shinozuka M, Moore JE (2000) Probabilistic earthquake scenarios: extending risk analysis methodologies to spatially distributed systems. Earthq Spectr 16:173–219
Cornell CA (1968) Engineering seismic risk analysis. Bull Seismol Soc Am 58:1583–1606
Dong W, Grossi P (2005) Insurance portfolio management. Catastrophe modeling: a new approach to managing risk. Springer, New York, pp 119–133
Ellingwood BR, Galambos TV, MacGregor JG, Cornell CA (1980) Development of a probability based load
criterion for American national standard A58. National Bureau of Standards, Washington
Ellingwood BR, Rosowsky RV, Pang W (2008) Performance of light-frame wood residential construction
subjected to earthquakes in regions of moderate seismicity. J Struct Eng 134:1353–1363
Federal Emergency Management Agency (FEMA) and National Institute of Building Sciences
(NIBS) (2003) HAZUS-Earthquake—technical manual. FEMA and NIBS, Washington
Folz B, Filiatrault A (2004) Seismic analysis of woodframe structures I: model formulation. J Struct Eng
130:1353–1360
Goda K, Hong HP (2008) Spatial correlation of peak ground motions and response spectra. Bull Seismol Soc
Am 98:354–365
Goda K, Atkinson GM (2009) Probabilistic characterization of spatially-correlated response spectra for earthquakes in Japan. Bull Seismol Soc Am 99:3003–3020
Goda K, Hong HP (2009) Deaggregation of seismic loss of spatially distributed buildings. Bull Earthq Eng
7:255–272
Goda K, Hong HP, Atkinson GM (2010) Impact of using updated seismic information on seismic hazard in
western Canada. Can J Civ Eng 37:562–575
Goda K, Atkinson GM (2010) Intra-event spatial correlation of ground-motion parameters using SK-net data.
Bull Seismol Soc Am 100:3055–3067
Goda K, Ren J (2010) Assessment of seismic loss dependence using copula. Risk Anal 30:1067–1091
Goda K, Atkinson GM (2011) Seismic performance of wood-frame houses in south-western British Columbia.
Earthq Eng Struct Dyn 40:903–924
Goda K, Atkinson GM, Hong HP (2011) Seismic loss estimation of wood-frame houses in south-western
British Columbia. Struct Saf 33:122–135
Goulet CA, Haselton CB, Mitrani-Reiser J, Beck JL, Deierlein GG, Porter KA, Stewart JP (2007) Evaluation
of the seismic performance of a code-conforming reinforced-concrete frame building—from seismic
hazard to collapse safety and economic losses. Earthq Eng Struct Dyn 36:1973–1997
Grace MF, Klein RW, Kleindorfer PR, Murray MR (2003) Catastrophe insurance: consumer demand, markets
and regulation. Kluwer, Boston
Hong HP, Goda K (2007) Orientation-dependent ground-motion measure for seismic-hazard assessment. Bull
Seismol Soc Am 97:1525–1538
Hunter JA, Burns RA, Good RL, Pelletier CF (1998) A compilation of shear wave velocities and borehole geophysical logs in unconsolidated sediments of the Fraser River Delta (Open File 3622). Geological Survey of Canada, Ottawa, Canada
Isoda H, Hirano S, Miyake T, Furuya O, Minowa C (2007) Collapse mechanism of new wood house designed
according with minimum seismic provisions of current Japan building standard law. J Struct Constr Eng
618:167–173
Kleindorfer PR, Kunreuther HC (1999) Challenges facing the insurance industry in managing catastrophic risks. The financing of catastrophe risk. The University of Chicago Press, Chicago, pp 149–194
Kleindorfer PR, Grossi P, Kunreuther HC (2005) The impact of mitigation on homeowners and insurers: an analysis of model cities. Catastrophe modeling: a new approach to managing risk. Springer, New York, pp 167–188
Kuzak D, Larsen T (2005) Use of catastrophe models in insurance rate making. Catastrophe modeling: a new
approach to managing risk. Springer, New York, pp 97–118
Luco N, Bazzurro P (2007) Does amplitude scaling of ground motion records result in biased nonlinear structural drift responses? Earthq Eng Struct Dyn 36:1813–1835
McGuire RK (2004) Seismic hazard and risk analysis. Earthquake Engineering Research Institute, Oakland
Onur T, Ventura CE, Finn WDL (2006) A comparison of two regional seismic damage estimation methodologies. Can J Civ Eng 33:1401–1409
PEER Ground Motion Selection and Modification (GMSM) Working Group (2009) Evaluation of ground
motion selection and modification methods: predicting median interstory drift response of buildings.
University of California Berkeley, Berkeley
Porter KA, Kiremidjian AS, LeGrue JS (2001) Assembly-based vulnerability of buildings and its use in per-
formance evaluation. Earthq Spectr 17:291–312
Power M, Chiou B, Abrahamson N, Bozorgnia Y, Shantz T, Roblee C (2008) An overview of the NGA project.
Earthq Spectr 24:3–21
Powers MR, Ren J (2003) Catastrophe risk and insurer solvency: a diffusion-jump analysis. Insur Risk Manag
71:239–264
Ren J (2005) Diffusion models of insurer net worth: can one dimension suffice? J Risk Financ 6:98–117
Ross SM (2003) An elementary introduction to mathematical finance. Cambridge University Press, Cambridge
Swiss Re (2011) Natural catastrophes and man-made disasters in 2010. Sigma, Zurich, Switzerland
Tsubota M, Hashimoto T, Murachi Y, Yoshikawa H (2009) Application of seismic risk evaluation integrating
source-focused multi-scenario seismic model and capacity spectrum method to buildings. J Struct Constr
Eng (Transactions of AIJ) 55:621–629
Ventura CE, Finn WDL, Onur T, Blanquera A, Rezai M (2005) Regional seismic risk in British Columbia—
classification of buildings and development of damage probability functions. Can J Civ Eng 32:372–387
Ventura CE, Taylor G, Prion H, Kharrazi M, Pryor S (2002) Full-scale shake table studies of woodframe
residential construction. In: Proceedings of the 7th U.S. conference on earthquake engineering, Boston,
United States
White TW, Ventura CE (2006) Seismic performance of wood-frame residential construction in British Columbia (Reports 06-02 and 06-03). University of British Columbia, Vancouver, Canada
Yin YJ, Li Y (2010) Seismic collapse risk of light-frame wood construction considering aleatoric and epistemic
uncertainties. Struct Saf 32:250–261
Yucemen MS, Yilmaz C, Erdik M (2008) Probabilistic assessment of earthquake insurance rates for important
structures: application to Gumusova–Gerede motorway. Struct Saf 30:420–435