13th World Conference on Earthquake Engineering
Vancouver, B.C., Canada
August 1-6, 2004
Paper No. 1072
EARTHQUAKE RISK ESTIMATES FOR RESIDENTIAL
CONSTRUCTION IN THE WESTERN UNITED STATES
Don WINDELER1, Guy MORROW2, Chesley R. WILLIAMS3, Mohsen RAHNAMA4, Gilbert
MOLAS5, Adolfo PEÑA6, and Jason BRYNGELSON7
SUMMARY
This study presents relative seismic risk estimates for thirteen states: eleven coterminous west of the
Rockies, Alaska and Hawaii. We focus on residential construction, considering both economic and
insured exposure.
The loss estimation system uses a seismic source model based on the USGS 2002 National Seismic
Hazard Mapping project. Building damage is estimated via spectral response-based vulnerability
functions. This model incorporates variations in site conditions, construction design levels, building
inventory, and insured value.
The thirteen western states are ranked in terms of their relative earthquake risk, with risk per state
compared on the basis of both economic and insured loss cost. The consideration of insurance has a
distinct effect on relativities due to differences in penetration rates and prevailing policy structures. This
is particularly true for California; currently, the high deductibles and prices charged have driven down
the purchase of earthquake policies to an extent that the insured proportion of potential earthquake losses
is significantly less than it was at the time of the Northridge earthquake.
INTRODUCTION
Catastrophe modeling brings together a range of technical disciplines to estimate future losses from
natural disasters, rather than relying only on a potentially incomplete historic record. For the earthquake
peril these include seismology, civil and geotechnical engineering, economics, and actuarial science. Loss
1 Chief Geologist, Earthquake Hazards Practice, RMS, Inc, Newark, California, USA:
Don.Windeler@RMS.com
2 V.P. of Model Development, RMS, Inc, Newark, California, USA: Guy.Morrow@RMS.com
3 Lead Engineer/Geologist, RMS, Inc, Newark, California, USA: Chesley.Williams@RMS.com
4 V.P. Engineering & Model Dev., RMS, Inc, Newark, California, USA: Mohsen.Rahnama@RMS.com
5 Lead Engineer, RMS, Inc, Newark, California, USA: Gilbert.Molas@RMS.com
6 Technical Marketing Manager, RMS, Inc, Newark, California, USA: Adolfo.Pena@RMS.com
7 Engineer, RMS Inc., Newark, California, USA: Jason.Bryngelson@RMS.com
estimation tools influence public policy, mitigation decisions, local planning, insurance and reinsurance
purchasing and pricing.
In this study, we present relative risk estimates for the thirteen states west of the Rocky Mountains. This
analysis is noteworthy for incorporation of source modeling from the USGS 2002 National Seismic
Hazard Maps [1], a new spectral response-based approach to building vulnerability [2, 3], and NEHRP-
classified site condition data for all thirteen states. Risk estimates are presented in terms of average
annual loss cost for economic and insured exposure. We focus on residential construction in this study,
i.e. homeowners and renters only.
MODEL DESCRIPTION
Results provided in this analysis were generated from RiskLink, a proprietary insurance loss-estimation
tool. It applies an event-based approach in which a set of stochastic events with corresponding rates has
been defined, portfolio loss and variability are generated for each event, and exceedance probabilities of
portfolio losses are calculated for various economic or insurance perspectives. The inputs to the RiskLink
model are based primarily on publicly-available data and are summarized below.
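The event-based calculation can be sketched in a few lines. The event rates and losses below are hypothetical, and a real implementation would also carry a loss distribution (not just a point value) per event; assuming Poisson event occurrence, the annual exceedance probability at a loss threshold follows from the summed rates of events exceeding it:

```python
import math

# Hypothetical stochastic event set: (annual rate, portfolio loss in $M)
events = [(0.01, 5000.0), (0.05, 800.0), (0.20, 120.0), (1.00, 10.0)]

def exceedance_prob(threshold, events):
    """Annual probability of at least one event with loss above the
    threshold, assuming Poisson occurrence of events."""
    rate = sum(r for r, loss in events if loss > threshold)
    return 1.0 - math.exp(-rate)

for x in (100.0, 1000.0):
    print(f"P(loss > ${x:.0f}M in a year) = {exceedance_prob(x, events):.4f}")
```

Sweeping the threshold over all loss levels traces out the exceedance probability curve from which the loss metrics in this paper are derived.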
Exposure and analysis resolution
Three exposure data sets were analyzed for this study. The first two comprise residential value for the
western U.S. by ZIP Code, one total economic value and one insured value. The insured portfolio
incorporates the local penetration rate of earthquake insurance purchase, policy conditions on deductibles
and limits, and coverages for structures, contents, and additional living expenses. These were estimated
from a variety of public and private data sources, including insurance companies, state insurance regulators,
the California Earthquake Authority, the U.S. Census, gross domestic product, Dun & Bradstreet square
footage, Means construction costs, and other statistical factors.
The third data set, used to produce a relative risk map, comprised a portfolio distributed on a variable grid across the thirteen western states. Grid cells were of approximately 1-, 5-, or 10-km size, with finer
resolution used in areas of high exposure and/or hazard. Each cell contained a uniform value, split 65%
structure / 35% contents, with the default inventory of residential building stock for the state.
Seismic source model
The fundamental inputs for the seismic source model are the documentation and parameters developed for
the 2002 USGS National Seismic Hazard Maps. These are described for the lower 48 states, Hawaii, and
Alaska by Frankel [1], Klein [4], and Wesson [5] respectively. For the purpose of this study, the most
significant source modeling differences are the inclusion of “cascade” scenarios and/or time dependent
recurrence on selected faults.
Cascade events refer to the potential for an earthquake to “jump” between fault segments during the
rupture process. Recent examples include the 1992 Landers and 1999 Hector Mine events in the Mojave
Desert, each of which ruptured smaller faults that had previously been considered separate sources.
Incorporation of these events in the stochastic set generates earthquakes that are larger than would be
possible on any of the constituent faults. There is a balancing effect on the model, however, as allowing
the occurrence of cascades reduces the seismic moment available for smaller events and overall rates for
the fault system will decrease. The model used in this study includes the cascade scenarios detailed in the
2002 USGS maps, including those defined for the Bay Area (WGCEP [6, 7]), as well as events on seven
additional fault systems in California. Rate calculations for these follow the moment-balancing approach
of Field [8] with values weighting three different probabilities of multi-segment rupture for each system.
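The moment-balancing idea can be illustrated with a simple partition of a fault's moment budget (this is a sketch, not the Field [8] weighting scheme; the magnitudes and moment rates are hypothetical):

```python
def seismic_moment(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.05)

# Hypothetical two-segment fault: per-segment moment rates (N*m/yr),
# single-segment magnitudes, and a full-rupture cascade magnitude.
moment_rates = [6.0e17, 6.0e17]
mags_single = [7.0, 7.0]
mag_cascade = 7.3

def balanced_rates(p_cascade):
    """Give fraction p_cascade of the moment budget to cascade ruptures,
    the remainder to single-segment events."""
    rate_cascade = p_cascade * sum(moment_rates) / seismic_moment(mag_cascade)
    rates_single = [(1.0 - p_cascade) * m / seismic_moment(mw)
                    for m, mw in zip(moment_rates, mags_single)]
    return rate_cascade, rates_single

rc0, rs0 = balanced_rates(0.0)
rc, rs = balanced_rates(0.5)
print(f"total event rate: {rc0 + sum(rs0):.4f}/yr without cascades, "
      f"{rc + sum(rs):.4f}/yr with")
```

Because a cascade consumes more moment per event, conserving the moment budget while allowing cascades necessarily lowers the total number of events per year, which is the balancing effect noted above.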
The USGS hazard maps assume a Poisson (time-independent) process for estimating event probability
within a given time window. A different approach to representing probability of event occurrence is the
time-dependent model. Time-dependence explicitly recognizes the time interval since the last occurrence
of an event associated with a given source and incorporates that information in estimating the probability
of a future event on that same source. As the time since the last event increases, the probability of the
event occurring in the near future will generally increase depending on the distribution used (cf Matthews
[9]). Time-dependence is used for major California fault systems, including the San Andreas, Hayward-Rodgers Creek, San Jacinto, and Whittier-Elsinore faults. Key references include WGCEP [6, 7].
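A minimal illustration of time-dependent recurrence, using a lognormal renewal model as a stand-in for the distributions discussed by Matthews [9] (all parameter values here are hypothetical):

```python
import math

def lognorm_cdf(t, mean_ri, cov):
    """CDF of a lognormal recurrence-interval distribution, parameterized
    by mean recurrence interval (yrs) and aperiodicity (CoV)."""
    sigma2 = math.log(1.0 + cov ** 2)
    mu = math.log(mean_ri) - 0.5 * sigma2
    z = (math.log(t) - mu) / math.sqrt(2.0 * sigma2)
    return 0.5 * (1.0 + math.erf(z))

def conditional_prob(elapsed, window, mean_ri, cov=0.5):
    """P(event within `window` yrs | `elapsed` yrs since the last event)."""
    f_t = lognorm_cdf(elapsed, mean_ri, cov)
    f_tw = lognorm_cdf(elapsed + window, mean_ri, cov)
    return (f_tw - f_t) / (1.0 - f_t)

# Probability grows as the elapsed time increases; a Poisson model would
# give a constant 1 - exp(-30/200) ~ 0.14 regardless of elapsed time.
for elapsed in (50.0, 150.0, 250.0):
    print(elapsed, round(conditional_prob(elapsed, 30.0, 200.0), 3))
```

The contrast with the constant Poisson probability is what makes time-dependence matter for faults late in their cycle.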
Ground motion attenuations vary by source type. Abrahamson [10], Boore [11], Campbell [12], and
Sadigh [13] are considered for thrust and strike-slip crustal faults, with Spudich [14] included for
extensional events. Subduction interface events combine Youngs [15], Sadigh [13], and Atkinson [16],
while intraslab ground motions are modeled with the appropriate formulations of Youngs [15] and
Atkinson [16]. Ground motion is calculated in terms of spectral acceleration at periods from 0 to 4 seconds; the value experienced at a given location is a function of the building’s predominant period.
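Picking the ground motion off the computed spectrum at a building's predominant period might look like the following (the spectrum values are hypothetical):

```python
# Hypothetical response spectrum: (period in s, spectral acceleration in g)
spectrum = [(0.0, 0.40), (0.3, 0.95), (1.0, 0.50), (3.0, 0.12)]

def sa_at_period(period, spectrum):
    """Linearly interpolate spectral acceleration at a building's
    predominant period."""
    for (t0, s0), (t1, s1) in zip(spectrum, spectrum[1:]):
        if t0 <= period <= t1:
            return s0 + (s1 - s0) * (period - t0) / (t1 - t0)
    raise ValueError("period outside spectrum range")

print(sa_at_period(0.15, spectrum))  # e.g. short-period wood frame
print(sa_at_period(1.0, spectrum))   # e.g. taller, longer-period structure
```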
Geotechnical site conditions
Digital geologic maps were assigned NEHRP site classes on the basis of published or inferred 30-m shear
wave velocity. The approach follows the scheme and data of Wills [17]. Over 200 map coverages were
incorporated into the site conditions dataset for the thirteen states in the study area. Resolution varies with
data availability and exposure density. Small-scale geologic maps at 1:500,000 or 1:750,000 resolution
were used for regional coverage. California was an exception, with 1:250,000 scale data used as the
lowest resolution input. For primary urban areas, the input maps used were typically 1:100,000 scale or
better.
For the purpose of analysis these data were aggregated to the ZIP Code and grid resolutions described
above. Grid resolutions for the soil data are limited by the scale of the input map, so as not to exceed the
applicability intended by the map authors. In both cases, aggregate values have been exposure-weighted
with land use / land cover attribute data.
Liquefaction and landslide susceptibilities are also incorporated into the site inputs. Liquefaction
susceptibilities were either aggregated from published maps (e.g. Knudsen [18]) or estimated from
geology using the schema of Youd [19]. Landslide susceptibilities were developed following results of
seismic hazard zoning studies by the California Geological Survey (e.g. [20]). These studies used a
Newmark approach to define a matrix relating material properties and slope to susceptibility; these
matrices were used directly where available and generalized for geologic materials elsewhere in
California, Oregon, Washington, and Utah.
Vulnerability
The vulnerability module generates an estimate of the damage to exposure at a specific location as a
function of the ground shaking for an event. The damage is expressed in terms of a mean damage ratio and
a coefficient of variation around the mean. The RiskLink model uses separate vulnerability functions for
building, contents, and time element losses.
Development of the damage functions followed the framework recently developed at the Pacific
Earthquake Engineering Research Center (PEER), first described by Cornell [2]. The PEER approach
considers both the entire spectrum of earthquake ground motion characteristics at a site and a building’s
response to that motion, and is thus referred to as spectral response-based vulnerability. See also
Rahnama [3] for additional discussion of this implementation.
Contents modeling considers both damage from the intensity of ground shaking and from distress to the
building itself. The former is more important at low shaking levels and is relatively independent of
structure type, whereas at higher shaking the structural damage will contribute to the contents loss.
Users without data on the construction class for locations in their portfolio rely on inventory curves that
store relative proportions of building types for different lines of business. For this study, the inventory
data vary by state.
RESULTS
Model results are presented in terms of annualized loss cost, which is the modeled average annual loss
normalized by the structure replacement value. Loss costs are given in units of $/$1,000 (per mille), a
metric commonly used to quote insurance coverage premium, or have been normalized to the value for all
of California.
Average annual loss (AAL) is the expected value of an exceedance probability loss distribution. It can be
thought of as the product of the loss for a given event with its probability, summed over all events in the
stochastic set. Normalizing the AAL by exposure facilitates comparison of relative risk, as it reduces
potential differences in how the total building stock value is modeled in other studies. Population
estimates from the US Census Bureau [21] are used as a proxy here for ordering of states by residential
value.
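The AAL and loss cost definitions above amount to a short calculation (the event set and replacement value here are hypothetical):

```python
# Hypothetical event set for one portfolio: (annual rate, loss in $)
events = [(0.002, 2.0e9), (0.01, 4.0e8), (0.1, 2.5e7), (0.5, 1.0e6)]
replacement_value = 5.0e10  # total structure replacement value ($)

# AAL: event loss times event rate, summed over the stochastic set
aal = sum(rate * loss for rate, loss in events)
# Loss cost: AAL normalized by value, in $ per $1,000 (per mille)
loss_cost = aal / replacement_value * 1000.0
print(f"AAL = ${aal:,.0f}; loss cost = {loss_cost:.2f} per mille")
```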
Regional Loss Metrics
Figure 1 shows normalized residential loss costs for the thirteen western states considered in this study.
This is similar to a seismic hazard map, but incorporates differences in construction inventory and
damageability. It spans four orders of magnitude, providing a synoptic view of the relative seismic risk
and context for the state and county level loss costs.
The highest point values are along the creeping section of the San Andreas fault system in central
California. Average annual loss is strongly affected by event rates and the high relative rates of moderate
earthquakes along this segment of the fault drive up the loss costs. Other notable areas include the
southeast side of the island of Hawaii, with risk due to the active volcanic flank, and the sharp NW-SE
discontinuity in eastern California. The latter case highlights the boundary between the Central Valley
sediments to the west and Sierra Nevadan batholith to the east. Most of the risk in the Central Valley is
from distant San Andreas events, the effects of which are filtered out by the predominantly hard igneous
rocks of the Sierra Nevada.
Statewide loss costs are summarized in Figure 2, with the values normalized to the loss cost for California.
These incorporate the relative distribution of population and residential construction within each state.
California has by far the highest risk, with a statewide loss cost more than three times that of Washington, the second highest. When the relative exposure is considered, California has about twenty
times the annualized economic loss from earthquake as Washington and ten times that of the rest of the
western states combined.
Figure 1. Average annual loss costs for residential construction in the western United States.
Figure 2. Statewide risk relative to California and volatility estimates.
The impact of exposure location relative to the hazard is evident. The states of Idaho, Montana, Utah, and
Wyoming share the Intermountain Seismic Belt, the roughly N-S, arcuate band of hazard illustrated in the
center of Figure 1. While the relative risk within this belt is similar, the only major city to fall within it is Salt Lake City, Utah. Consequently, the statewide loss cost in Utah is 5-10 times greater than in the other three states.
Figure 2 also includes a measure of the volatility of the annual loss. This is the coefficient of variation on
the loss cost, reflecting the range in possible losses for any given year rather than uncertainty in the actual
loss estimate. A comparison between Washington and Utah provides an illustrative example. The loss costs are similar between the two, but the volatility in Utah is much higher because the main contributors
are rare but large losses from the Wasatch fault system. Washington, on the other hand, has a large
contribution from the more frequent intraslab earthquakes (e.g. 1949, 1965, 2001), which historically have
caused moderate damage. The high rates of occurrence on the island of Hawaii result in a relatively low
loss volatility for the state, but the overall loss cost is moderated because over 70% of the state’s populace lives on the seismically quiet island of Oahu.
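For a Poisson event set with no secondary (per-event) uncertainty, the first two moments of annual loss give a simple volatility sketch. The two hypothetical "states" below have identical AAL but very different volatility, mirroring the frequent-moderate versus rare-large contrast described above:

```python
import math

def annual_loss_stats(events):
    """Mean and CoV of annual loss for a Poisson event set,
    ignoring per-event (secondary) loss uncertainty."""
    mean = sum(r * loss for r, loss in events)
    var = sum(r * loss * loss for r, loss in events)
    return mean, math.sqrt(var) / mean

# Two hypothetical event sets with identical AAL ($10M/yr):
frequent_moderate = [(0.1, 1.0e8)]   # e.g. intraslab-dominated hazard
rare_large = [(0.001, 1.0e10)]       # e.g. rare major fault rupture

for name, ev in [("frequent", frequent_moderate), ("rare", rare_large)]:
    mean, cov = annual_loss_stats(ev)
    print(name, f"AAL=${mean:,.0f}", f"CoV={cov:.1f}")
```

Same expected loss, an order of magnitude apart in volatility: the rare-large profile is the more problematic one for a homeowner or insurer.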
Impact of Insurance
Insurance is a mechanism for risk transfer in which the property owner pays a premium to another party
for protection against some loss-causing event. The insured loss relative to economic is most significantly
affected by penetration rate and predominant policy structures. Penetration rate refers to the proportion of
property owners who choose to purchase insurance. The insurance policy is a contract defining the
conditions for payout to the insured. It typically includes deductibles (proportion of loss the insured must
absorb before the policy pays a claim) and limits (maximum amounts payable by the insurer). These may
be defined for individual coverage types, the policy as a whole, or both.
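The effect of deductibles and limits on a single policy can be sketched as follows (the 15% deductible mirrors CEA-style terms; the other values are hypothetical):

```python
def insured_payout(ground_up_loss, value, deductible_pct, limit):
    """Payout under a simple policy: a deductible expressed as a
    percentage of insured value, then a cap at the policy limit."""
    deductible = deductible_pct * value
    return min(max(ground_up_loss - deductible, 0.0), limit)

# 15% deductible on a $400k home, $400k limit:
print(insured_payout(50_000, 400_000, 0.15, 400_000))   # below deductible -> 0
print(insured_payout(200_000, 400_000, 0.15, 400_000))  # 200k - 60k = 140k
```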
The 1994 Northridge earthquake and 1992 Hurricane Andrew events provide contrasting examples of
insured : economic loss relativities. The Northridge earthquake caused an estimated $42 bn in economic loss, $15 bn of which was insured [22, 23]. In contrast, $16 bn of Hurricane Andrew’s estimated $30 bn
economic loss was insured ([24, 25] adjusted to 2001$ using [26]). The relative proportion was higher for
Andrew for reasons of both penetration and policies. Homeowners are usually required by lenders to have
fire insurance, and wind was covered as part of the standard policy at that time. Coverage for earthquake,
on the other hand, must be purchased separately, and thus a smaller fraction of structures were insured for Northridge. Deductibles are also much higher for earthquake, around 10% of the structure value at the time, while a typical wind deductible in Florida was $500.

Table 1. State population estimates used as a proxy for ordering states by residential value (2002 estimates, US Census Bureau [21]).

State         Abbrev   Population (2002 est.)
California    CA       35,001,986
Washington    WA        6,067,060
Utah          UT        2,318,789
Alaska        AK          641,482
Nevada        NV        2,167,455
Hawaii        HI        1,240,663
Oregon        OR        3,520,355
Montana       MT          910,372
New Mexico    NM        1,852,044
Idaho         ID        1,343,124
Wyoming       WY          498,830
Arizona       AZ        5,441,125
Colorado      CO        4,501,051
Figure 3 provides an estimate of the fraction of the total residential loss cost that would be covered by
insurance. (Note that this is the percentage of the average annual loss over all events, not the relativity
one should expect for a single event.) The ratios of insured to economic loss are calculated from absolute
average annual loss values and thus have a greater dependence on the exposure assumptions than the loss
costs; relativities between states are less uncertain than the actual percentages. The per-event correlation
in loss becomes more important once financial structures such as deductibles are considered. An
earthquake with severe localized loss might have the same economic loss as an event with lesser losses
spread out over a large area but, assuming constant deductibles, the latter event would generally have less
insured loss because more policy deductibles would have to be exceeded.
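The localized-versus-distributed effect can be demonstrated with two hypothetical events of equal economic loss under a constant per-policy deductible:

```python
def payout(loss, deductible):
    """Per-policy payout with a flat deductible and no limit."""
    return max(loss - deductible, 0.0)

deductible = 30_000.0  # constant per-policy deductible (hypothetical)

# Same $3M economic loss: concentrated on 10 homes vs spread over 100.
localized = [300_000.0] * 10
distributed = [30_000.0] * 100

insured_localized = sum(payout(l, deductible) for l in localized)
insured_distributed = sum(payout(l, deductible) for l in distributed)
print(insured_localized, insured_distributed)  # 2700000.0 0.0
```

In the distributed case no policy clears its deductible, so the identical economic loss produces no insured loss at all; real events fall between these extremes.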
Figure 3. Modeled percentage of residential average annual loss per state covered by insurance, plotted against loss cost normalized to California (both axes logarithmic). Abbreviations as per Figure 2; small ‘x’ is California commercial.
In general, the insured percentage increases with the loss cost, reflecting the expectation that earthquake insurance take-up rises in areas of higher risk. Deductibles tend to be higher as well, but the
relativities are more sensitive to assumptions in penetration. The outlier in this analysis is California,
where the current cost of homeowners’ insurance relative to coverage has greatly reduced the fraction of
earthquake losses borne by insurers. This is considered further in the Discussion below.
DISCUSSION
Factors influencing loss costs
Comparison of Figure 1 with USGS hazard maps of the area [1] supports an obvious observation: a
primary source of local variations in the relative loss costs is the source model modified by local site
conditions. At this scale, variations in the vulnerability data are largely overwhelmed by the hazard
changes. The vulnerability would show much greater differentiation if additional construction or
occupancy types were included for comparison, particularly for buildings of different heights. The
performance-based approach used for damage calculation considers the ground motion spectrum, period-
dependent site amplification, and structure period.
When considered in the aggregated context of a portfolio, the distribution of exposure relative to the
hazard becomes more crucial in determining the loss cost. Los Angeles County has both high hazard from
numerous active faults and absolute risk due to its large population, but its relative loss cost ranks lower
because the population is split into several different urban areas [27, 28].
Average annual loss or loss cost is a useful metric for comparing risk but is a collapsed version of the full
loss exceedance curve; in isolation it lacks detail on what kind of losses comprise the total. Environments
with very frequent small losses versus rare catastrophic losses could generate the same AAL, but the latter
presents a more problematic case for a homeowner or insurer.
The volatility measure shown in Figure 2 provides some idea of where a given state falls in the spectrum,
with the previous example of Washington and Utah illustrating this point. Both have similar loss costs,
with Utah showing a higher volatility. Consider three annual probability ranges, <0.2%, 0.2-1%, and
>1%, nominally equivalent to return periods of >500 years, 100-500 years, and <100 years. The average
annual loss for Washington splits approximately 25-35-40 across these ranges, reflecting the high
contribution by frequent intraslab events. Utah, in contrast, splits 50-35-15 due to rare but catastrophic
losses from Wasatch fault events.
A related point is that frequent events will contribute greatly to average annual loss, even if they do not
generate large losses. California is the extreme case for short return period losses, with over 80% derived
from the >1% probability portion of the loss curve.
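The probability-range split used above can be sketched by binning each event's AAL contribution according to the exceedance probability at its loss level (hypothetical event set; real bands are read off the full loss exceedance curve):

```python
import math

# Hypothetical event set: (annual rate, loss in $)
events = [(0.002, 5.0e9), (0.008, 1.0e9), (0.04, 2.0e8), (0.5, 2.0e7)]

def aal_by_band(events, bounds=(0.002, 0.01)):
    """Fraction of AAL falling in annual probability bands
    <0.2%, 0.2-1%, and >1% (i.e. return periods >500, 100-500, <100 yrs)."""
    shares = [0.0, 0.0, 0.0]
    for rate, loss in events:
        # exceedance rate at this event's loss level
        exc = sum(r for r, L in events if L >= loss)
        p = 1.0 - math.exp(-exc)
        band = 0 if p < bounds[0] else (1 if p < bounds[1] else 2)
        shares[band] += rate * loss
    total = sum(shares)
    return [s / total for s in shares]

print([round(s, 2) for s in aal_by_band(events)])
```

A Washington-like profile would weight the >1% band heavily; a Utah-like profile would concentrate in the <0.2% band.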
Residential insurance in California
Neglecting epistemic changes from modeling, the relative loss costs discussed above are fairly constant
risk metrics. The local seismic hazard may be impacted by time-dependent recurrence after a large event,
but overall is driven by long-term tectonics and seismic moment budgets. Building vulnerability is
gradually affected by code changes and new construction, but is another factor that changes slowly. What
does change are insured loss estimates, as prevailing market conditions will drive changes to the insured
proportion of value at risk. The 1994 Northridge earthquake was a seminal event in its influence on the
US property insurance industry; much of the commentary in this section follows a recent report on the
event [23].
Following the devastating losses incurred in the Northridge earthquake, insurers moved to better
understand their risk from earthquakes and develop pricing and policy terms that reflected this risk.
Insurance availability gradually returned to equilibrium on the commercial side, but residential lines
experienced a crisis in the years following. Personal lines insurers are required to offer the option of
earthquake insurance when selling homeowners’ policies, and many insurers stopped writing insurance
due to concerns that they would not be able to remain solvent if another similar event occurred. The
compromise was the CEA (California Earthquake Authority), a government entity funded by insurers.
The CEA offers a “mini-policy” with a high deductible (15%) and limited coverage for contents and
additional living expenses. Because it is perceived to be expensive relative to the coverage provided,
many homeowners elected not to purchase an earthquake rider on their policy. Only about 17% of
homeowners currently have earthquake insurance, down from 30-40% at the time of Northridge. The
combination of higher deductibles and lower limits with fewer policyholders has reduced the proportion
of losses that will be paid out by insurers. RMS model results suggest that insured residential losses from
a repeat of the Northridge earthquake today would be 70% lower than those incurred in 1994.
The commercial market in California is less regulated than for personal lines and has continued to grow
since 1994. An equivalent analysis of the insured annual loss for commercial lines yields almost twice the
percentage covered as for residential lines; this commercial result is shown on Figure 3 with an ‘x’. The
residential markets are gradually adapting to fill the demand for earthquake coverage, as evidenced by the
recent expansion of the CEA’s product line to include policies with lower deductibles and increased
coverage for contents and ALE.
CONCLUSIONS
We have presented relative seismic risk for thirteen western states on the basis of modeled results for
residential exposure, illustrated in terms of average annual loss cost. Natural breaks in per-state results
suggest four groups. California stands far above the rest in risk, a result borne out by historical experience
and common sense. Ranked in order of decreasing risk, Washington, Utah, Alaska and Nevada comprise
the next tier. Each has significant exposure close to active seismic sources. Hawaii and Oregon follow
closely behind; both have locally high hazard, but the loci of population and exposure are not in the
highest risk zones. There is a wide range in the last group, ordered from Montana, New Mexico, Idaho,
Wyoming, Arizona, to Colorado.
Relativities in these loss cost results are similar to the absolute economic risk, but have been normalized
to exposure and thus do not provide actual dollar losses. Washington is second in absolute loss to
California, followed by Utah, Oregon, and Nevada. The remaining eight states in rank order are Hawaii,
Alaska, Arizona, New Mexico, Montana, Idaho, Colorado, and Wyoming.
Insurance coverage of residential losses generally increases with the risk for the state, with California
currently a notable exception. The impact of the Northridge earthquake is still being felt, but conditions
are evolving in the insurance market that will eventually lessen the direct impact that would be borne by
homeowners in the next major earthquake.
REFERENCES
1. Frankel A, Petersen M, Mueller C, Haller K, Wheeler R, Leyendecker EV, Wesson R, Harmsen S,
Cramer C, Perkins D, Rukstales, K. “Documentation for the 2002 National Seismic Hazard Maps.”
US Geol Survey 2002; Open-File Rpt. 02-420.
2. Cornell CA, Krawinkler H. “Progress and challenges in seismic performance assessment.” PEER
Center News 2000; 3: 2.
3. Rahnama M, Seneviratna P, Morrow GC, Rodriguez A. “Seismic performance-based loss
assessment.” Proceedings of the 13th World Conference on Earthquake Engineering, Vancouver,
Canada. Paper no. 1050. Oxford: Pergamon, 2004.
4. Klein FW, Mueller CS, Frankel AD, Wesson RL, Okubo PG. “Seismic hazard in Hawaii: high rate
of large earthquakes and probabilistic ground motion maps.” Bull Seis Soc Amer 2001; 91: 479-
498.
5. Wesson RL, Frankel AD, Mueller CS, Harmsen SC. “Probabilistic seismic hazard maps of Alaska.”
US Geol Survey 1999; Open-File Rpt 99-36.
6. Working Group on California Earthquake Probabilities. “Earthquake probabilities in the San
Francisco Bay region: 2000-2030 – a summary of findings.” US Geol Survey 1999; OFR 99-517.
7. Working Group on California Earthquake Probabilities. “Earthquake probabilities in the San
Francisco Bay region: 2002-2031.” US Geol Survey 2003; Open-File Rpt. 03-214.
8. Field EH, Jackson DD, Dolan J. “A mutually consistent seismic hazard source model for Southern
California.” Bull. Seis. Soc. Amer. 1999; 89(3): 559-578.
9. Matthews MV, Ellsworth WL, Reasenberg PA. “A Brownian model for recurrent earthquakes.”
Bull. Seis. Soc. Amer. 2002; 92(6): 2233-2250.
10. Abrahamson NA, Silva WJ. “Empirical response spectral attenuation relations for shallow crustal earthquakes.” Seis Research Ltrs 1997; 68(1): 94-127.
11. Boore DM, Joyner WB, Fumal TE. “Equations for estimating horizontal response spectra and peak acceleration from western North American earthquakes: a summary of recent work.” Seis Research Ltrs 1997; 68(1): 128-153.
12. Campbell KW, Bozorgnia Y. “Updated near-source ground motion (attenuation) relations for the horizontal and vertical components of peak ground acceleration and acceleration response spectra.” Bull. Seis. Soc. Amer. 2003; 93(1): 314-331.
13. Sadigh K, Chang CY, Egan J, Makdisi F, Youngs R. “Attenuation relationships for shallow crustal earthquakes based on California strong motion data.” Seis Research Ltrs 1997; 68(1): 180-189.
14. Spudich P, Joyner WB, Lindh AG, Boore DM, Margaris BM, Fletcher JB. “SEA99: A revised
ground motion prediction relation for use in extensional tectonic regimes.” Bull. Seism. Soc. Am.
1999; 89: 1156-1170.
15. Youngs RR, Chiou S-J, Silva WJ, Humphrey JR. “Strong ground motion attenuation relationships
for subduction zone earthquakes.” Seismological Research Letters 1997; 68(1): 58-73.
16. Atkinson G, Boore D. “Preliminary empirical ground motion relations for subduction zone
earthquakes.” preprint 2001.
17. Wills CJ, Petersen M, Bryant WA, Reichle M, Saucedo GJ, Tan S, Taylor G, Treiman J. “A site
condition map for California based on geology and shear wave velocity.” Bull. Seis. Soc. Amer.
2000; 90: S187-S208.
18. Knudsen KL, Sowers JM, Witter RC, Wentworth CM, Helley EJ, Nicholson RS, Wright HM,
Brown KH. “Preliminary maps of Quaternary deposits and liquefaction susceptibility, nine-county
San Francisco Bay region, California: a digital database.” US Geol Surv 2000; OFR 00-444.
19. Youd TL, Perkins DM. “Mapping of liquefaction induced ground failure potential.” Proc. of the
ASCE, Journal of the Geotechnical Engineering Division 1978; 104 (GT4): 433-446.
20. Wilson RI, Wiegers MO, McCrink TP. “Earthquake-induced landslide zones in the City and
County of San Francisco, California.” Seismic Hazard Evaluation of the City and County of San
Francisco, California. California Division of Mines & Geology 2000; Open-File Rpt. 2000-009.
21. US Census Bureau, Population Div. “Annual Estimates of the Population for the United States and
States, and for Puerto Rico: April 1, 2000 to July 1, 2003”. Table NST-EST2003-01, 2003.
22. Petak WJ, Elahi S. “The Northridge earthquake, USA, and its economic and social impacts.”
EuroConference on Global Change and Catastrophe Risk Management, 2001.
http://www.iiasa.ac.at/Research/RMS/july2000/ Papers/Northridge_0401.pdf
23. Risk Management Solutions. “The Northridge Earthquake: RMS 10-yr Retrospective.” 2004.
http://www.rms.com/Publications/NorthridgeEQ_Retro.pdf
24. Pielke, Jr. RA, Landsea CW. “Normalized Hurricane Damages in the United States: 1925-1995.”
Weather and Forecasting 1998; 13: 621-631.
25. Property Claims Service division of Insurance Services Office
26. Sahr R. “Inflation Conversion Factors for Dollars 1665 to Estimated 2013.” Oregon State
University 2002; http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahr.htm
27. FEMA. “HAZUS 99 Estimated Annualized Earthquake Losses for the United States.” FEMA 366:
Washington, DC. 2001.
28. Rowshandel B, Reichle M, Wills C, Cao T, Petersen M, Branum D, and Davis J. “Estimation of
Future Earthquake Losses in California.” California Geol. Survey 2003;
ftp://ftp.consrv.ca.gov/pub/dmg/rgmp/CA-Loss-Paper.pdf
... Additionally, even if a single "worst-case" (in terms of consequences) event is determined, its probability of occurrence should ideally be considered. Hazards can also be defined using a stochastic catalog where the full spectrum of possible scenarios and their associated occurrence probabilities (or rates) are considered (Crowley & Bommer, 2006;Eugster, Rüttener, & Liechti, 1999;Liechti, Rüttener, & Zbinden, 2000;Tantala, Nordenson, Deodatis, & Jacob, 2008;Windeler et al., 2004;J. Wu, 2017). ...
Event‐based methods are commonly used to assess the risk to distributed infrastructure systems. Stochastic event‐based methods consider all hazard scenarios that could adversely impact the infrastructure and their associated rates of occurrence. However, in many cases, such a comprehensive consideration of the spectrum of possible events requires high computational effort. This study presents an active learning method for selecting a subset of hazard scenarios for infrastructure risk assessment. Active learning enables the efficient training of a Gaussian process predictive model by choosing the data from which it learns. The method is illustrated with a case study of the Napa water distribution system where a risk‐based assessment of the post‐earthquake functional loss and recovery is performed. A subset of earthquake scenarios is sequentially selected using a variance reduction stopping criterion. The full probability distribution and annual exceedance curves of the network performance metrics are shown to be reasonably estimated.
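The greedy, variance-driven scenario selection described above can be illustrated with a minimal zero-mean Gaussian process; the RBF kernel, its hyperparameters, and the (magnitude, normalized distance) scenario features below are assumptions for illustration, not the authors' Napa implementation:

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior_var(X_train, X_pool, noise=1e-6):
    """Posterior predictive variance of a zero-mean GP at the pool points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_pool, X_train)
    sol = np.linalg.solve(K, Ks.T)
    return rbf_kernel(X_pool, X_pool).diagonal() - np.einsum('ij,ji->i', Ks, sol)

def select_scenarios(X_pool, n_select, seed=0):
    """Greedy uncertainty sampling: repeatedly pick the scenario where the
    GP trained on the already-selected scenarios is least certain."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X_pool)))]     # arbitrary first pick
    while len(chosen) < n_select:
        var = gp_posterior_var(X_pool[chosen], X_pool)
        var[chosen] = -np.inf                     # never re-pick a scenario
        chosen.append(int(np.argmax(var)))
    return chosen

# Hypothetical pool of scenarios: (magnitude, normalized distance) features
pool = np.column_stack([np.linspace(5.0, 8.0, 50), np.linspace(0.0, 1.0, 50)])
idx = select_scenarios(pool, n_select=5)
```

A variance-reduction stopping criterion, as used in the paper, would replace the fixed `n_select` with a threshold on the largest remaining predictive variance.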
... In the second usage, a parameter is defined as the constants and independent variables which define a mathematical model. The Monte Carlo simulation method, also known as stochastic modeling, can be used to generate large numbers of synthetic earthquake catalogues or stochastic event sets. Although the use of stochastic catalogues is not widely documented in scientific journals, their use for earthquake risk assessment appears to be common in the commercial sector (e.g., Zolfaghari 2000; Eugster et al. 1999; Liechti et al. 2000; Windeler et al. 2004). However, Musson (1998, 1999a) has provided an insight for earthquake engineers into the mechanism of the Monte Carlo method for the generation of stochastic earthquake catalogues and their use in PSHA. ...
The region of the Naein seismic gap zone in central Iran includes several active faults with high seismicity potential. This shows the necessity of probabilistic seismic hazard analysis (PSHA) despite the paucity of earthquake records. The aim of this study is to conduct PSHA by generating a synthetic earthquake catalogue based on a small number of real earthquake records in the Naein zone. The catalogue was generated by means of the Monte Carlo method using the limited real records for the period 1900 to 2009 AD and their statistical parameters. Afterwards, using the aforementioned synthetic data we calculated Gutenberg–Richter relationships (for each active fault as a linear seismic source) and peak ground acceleration (PGA, m/s²) using appropriate attenuation relationships. Then the hazard curves for each of the seismic sources and the total hazard curve were presented. Moreover, the annual probability of exceedance and return period of earthquakes were calculated for the region. Finally, hazard maps were presented for return periods of 75 and 475 years, which show a high level of ground acceleration in the region under study.
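The Monte Carlo catalogue generation described above can be sketched as inverse-transform sampling from a doubly truncated Gutenberg–Richter distribution, paired with Poisson event timing; the annual rate, b-value, and magnitude bounds below are illustrative, not the study's Naein values:

```python
import math
import random

def sample_gr_magnitude(rng, b, m_min, m_max):
    """Inverse-transform draw from a doubly truncated Gutenberg-Richter
    (exponential) magnitude distribution."""
    beta = b * math.log(10.0)
    c = 1.0 - math.exp(-beta * (m_max - m_min))   # truncation normalizer
    return m_min - math.log(1.0 - rng.random() * c) / beta

def synthetic_catalog(rng, annual_rate, years, b=1.0, m_min=4.5, m_max=7.5):
    """Generate (time, magnitude) pairs: Poisson occurrence in time,
    Gutenberg-Richter distributed magnitudes."""
    catalog, t = [], 0.0
    while True:
        t += rng.expovariate(annual_rate)   # exponential inter-event times
        if t > years:
            break
        catalog.append((t, sample_gr_magnitude(rng, b, m_min, m_max)))
    return catalog

rng = random.Random(42)
cat = synthetic_catalog(rng, annual_rate=2.0, years=1000.0)  # ~2000 events
```

Statistical parameters of such a catalogue (b-value, activity rate) can then be recomputed per source to build the hazard curves.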
... It applies an event-based approach (using a set of stochastic events with corresponding physical parameters, location, and frequency of occurrence) to generate portfolio loss and to assist in risk management. The RiskLink earthquake model for the western United States is described in depth in Windeler [3] in this volume. The RiskLink earthquake model has four principal components or modules: 1. Stochastic Event Module: This module contains a database of stochastic earthquake events. ...
Over the last two decades, a number of blind thrust faults have been identified within the Los Angeles region in southern California. The activity levels as well as the location and the extent of these features have been much debated within the seismological community. A newly-delineated feature, the Puente Hills Thrust, has been put forward as a major source of seismic risk in the region with potential losses to the insurance industry on the order of $30 to $40 billion. This study investigates the effects of source characterization on the uncertainty/variability of financial losses to a portfolio of insurance industry exposures affected by an event on the Puente Hills Thrust. There are four parts to the financial risk assessment: the source characterization, the ground motion model, the vulnerability model and the industry exposure. The ground motion model considers the ground motion attenuation as well as the local site conditions. Site conditions are delineated through digitized surficial geology maps. The vulnerability model translates the ground motion into a damage ratio using a response spectrum approach. The industry exposure database is based on building inventory data and includes age, height, and building structure information. The two key components of source characterization examined in this study are the source geometry (location and 3-D extent) and potential segmentation/cascade models. The impacts of these parameters on event magnitudes, variations in the exposure affected as well as exposure losses are examined. An examination of potential variations in ground motion models is also included to better delineate the relative importance of the assumptions implemented during source characterization.
The prediction of possible future losses from earthquakes, which in many cases affect structures that are spatially distributed over a wide area, is of importance to national authorities, local governments, and the insurance and reinsurance industries. Generally, it is necessary to estimate the effects of many, or even all, potential earthquake scenarios that could impact upon these urban areas. In such cases, the purpose of the loss calculations is to estimate the annual frequency of exceedance (or the return period) of different levels of loss due to earthquakes: so-called loss exceedance curves. An attractive option for generating loss exceedance curves is to perform independent probabilistic seismic hazard assessment calculations at several locations simultaneously and to combine the losses at each site for each annual frequency of exceedance. An alternative method involves the use of multiple earthquake scenarios to generate ground motions at all sites of interest, defined through Monte Carlo simulations based on the seismicity model. The latter procedure is conceptually sounder but considerably more time-consuming. Both procedures are applied to a case study loss model and the loss exceedance curves and average annual losses are compared to ascertain the influence of using a more theoretically robust, though computationally intensive, procedure to represent the seismic hazard in loss modelling.
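Given per-event losses from a stochastic event set, building the loss exceedance curve and the average annual loss reduces to sorting event losses and accumulating their annual rates; the three (loss, rate) events below are hypothetical:

```python
def exceedance_curve(events):
    """Annual exceedance frequencies from a stochastic event set.
    `events` is a list of (loss, annual_rate) pairs; returns (loss, annual
    rate of at least that loss) pairs, ordered from smallest to largest loss."""
    curve, cum = [], 0.0
    for loss, rate in sorted(events, key=lambda e: e[0], reverse=True):
        cum += rate                      # any event at least this damaging
        curve.append((loss, cum))
    return list(reversed(curve))

def average_annual_loss(events):
    """AAL is the rate-weighted sum of event losses."""
    return sum(loss * rate for loss, rate in events)

# Hypothetical three-event set: (loss, annual rate)
events = [(10.0, 0.05), (50.0, 0.01), (200.0, 0.002)]
curve = exceedance_curve(events)
aal = average_annual_loss(events)        # 10*0.05 + 50*0.01 + 200*0.002 = 1.4
```

The return period of a loss level is simply the reciprocal of its exceedance rate on this curve.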
Geologic and seismologic information is used in concert with criteria developed herein to make regional maps of liquefaction-induced ground failure potential. Two maps, a ground failure opportunity map and a ground failure susceptibility map, are combined to form the potential map. Ground failure opportunity occurs when seismic shaking is strong enough to produce liquefaction and ground failure in susceptible materials. A correlation between earthquake magnitude and maximum distance from energy source to possible liquefiable sites is used with maps of regional seismicity to prepare an opportunity map. The opportunity map has a probabilistic basis. Criteria relating liquefaction susceptibility to sediment type and setting are used with Quaternary geologic maps to derive the susceptibility map. Liquefaction-induced ground failure potential maps are useful for planning, zoning and decision making purposes. Additional geotechnical studies are required for liquefaction potential determinations at specific sites within the map units.
In this study we used strong-motion data recorded from 1957 to 1995 to derive a mutually consistent set of near-source horizontal and vertical ground-motion (attenuation) relations for peak ground acceleration and 5%-damped pseudo-acceleration response spectra. The database consisted of up to 960 uncorrected accelerograms from 49 earthquakes and 443 processed accelerograms from 36 earthquakes of Mw 4.7-7.7. All of the events were from seismically and tectonically active, shallow crustal regions located throughout the world. Some major findings of the study are (1) reverse- and thrust-faulting events have systematically higher amplitudes at short periods, consistent with their higher dynamic stress drop; (2) very firm soil and soft rock sites have similar amplitudes, distinctly different from amplitudes on firm soil and firm rock sites; (3) the greatest differences in horizontal ground motion among the four site categories occur at long periods on firm rock sites, which have significantly lower amplitudes due to an absence of sediment amplification, and at short periods on firm soil sites, which have relatively low amplitudes at large magnitudes and short distances due to nonlinear site effects; (4) vertical ground motion exhibits similar behavior to horizontal motion for firm rock sites at long periods but has relatively higher short-period amplitudes at short distances on firm soil sites due to a lack of nonlinear site effects, less anelastic attenuation, and phase conversions within the upper sediments. We used a relationship similar to that of Abrahamson and Silva (1997) to model hanging-wall effects but found these effects to be important only for the firmer site categories. The ground-motion relations do not include recordings from the 1999 Mw > 7 earthquakes in Taiwan and Turkey because there is still no consensus among strong-motion seismologists as to why these events had such low ground motion. If these near-source amplitudes are later found to be atypical, their inclusion could lead to unconservative engineering estimates of ground motion. The study is intended to be a limited update of the ground-motion relations previously developed by us in 1994 and 1997, with the explicit purpose of providing engineers and seismologists with a mutually consistent set of near-source ground-motion relations to use in seismic hazard studies. The U.S. Geological Survey and the California Geological Survey have selected the updated relation as one of several that they are using in their 2002 revision of the U.S. and California seismic hazard maps. Being a limited update, the study does not explicitly address such topics as peak ground velocity, sediment depth, rupture directivity effects, or the use of the 30-m velocity or related National Earthquake Hazard Reduction Program site classes. These are topics of ongoing research and will be addressed in a future update.
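Attenuation relations of this family share a common functional skeleton: a magnitude-scaling term, a geometric-spreading term in log distance, and a site adjustment. A minimal sketch with hypothetical coefficients (not the published Campbell and Bozorgnia values):

```python
import math

def ln_median_sa(mag, r_rup, c1=-3.5, c2=1.0, c3=-1.2, c4=6.0, site=0.0):
    """Skeleton attenuation relation:
    ln(Sa) = c1 + c2*M + c3*ln(Rrup + c4) + site term.
    All coefficients here are hypothetical, for illustration only."""
    return c1 + c2 * mag + c3 * math.log(r_rup + c4) + site

def median_sa(mag, r_rup, **kw):
    """Median spectral acceleration (g) from the log-space relation."""
    return math.exp(ln_median_sa(mag, r_rup, **kw))

# Motions grow with magnitude and attenuate with distance
near = median_sa(7.0, 10.0)
far = median_sa(7.0, 50.0)
smaller = median_sa(6.0, 10.0)
```

Published relations add period-dependent coefficients, faulting-style and hanging-wall terms, and a lognormal aleatory variability about this median.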
The paper describes procedures adopted to develop and implement building vulnerability curves to relate damage ratio (defined as dollar loss / replacement value) to spectral acceleration for individual building and portfolio loss assessment. The Performance Based Engineering framework developed by the PEER researchers is implemented through the use of Incremental Dynamic Analyses to develop the building vulnerability functions. Representative structure models are subjected to a carefully selected suite of ground motions, which are scaled so that their elastic spectral accelerations at the fundamental period of the structure are equal to a target value. The maximum inter-story drift of each story from time-history analysis is computed and related to a damage state and an associated damage ratio for both structure and non-structural components. Details of the procedure and results are presented for low and mid-rise Steel Perimeter Moment Frame buildings. The final section of the paper highlights the impact of implementing performance based vulnerability functions for portfolio loss assessment. Losses computed using performance based loss assessment (PBLA) vulnerability functions are compared to losses using MMI based curves for the same hazard characteristics. The comparisons are done for both scenario events and for annualized and return period losses of interest to the insurance industry. The results of sensitivity analyses show that the spectral response based results are qualitatively more consistent with damage patterns observed during past events.
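A vulnerability curve of the kind described, relating damage ratio (dollar loss / replacement value) to spectral acceleration, is commonly represented with a lognormal CDF shape; the median capacity and dispersion below are hypothetical placeholders, not the paper's fitted values:

```python
import math

def damage_ratio(sa, median_sa=0.8, beta=0.6, max_ratio=1.0):
    """Mean damage ratio as a lognormal CDF of spectral acceleration (g).
    median_sa (g) and beta (log-space dispersion) are illustrative."""
    if sa <= 0.0:
        return 0.0
    z = (math.log(sa) - math.log(median_sa)) / beta
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    return max_ratio * cdf

# Monotone in shaking intensity; 50% damage at the median capacity
curve = [damage_ratio(sa) for sa in (0.2, 0.8, 2.0)]
```

In a portfolio model such curves are applied per building class and combined with the hazard to produce scenario and annualized losses.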
We present SEA99, a revised predictive relation for geometric mean horizontal peak ground acceleration and 5%-damped pseudovelocity response spectrum, appropriate for estimating earthquake ground motions in extensional tectonic regimes, which we demonstrate to have lower ground motions than other tectonic regimes. SEA99 replaces SEA96, a relation originally derived by Spudich et al. (1996, 1997). The data set used to develop SEA99 is larger than that for SEA96, and minor errors in the SEA96 data set have been corrected. In addition, a one-step regression method described by Joyner and Boore (1993, 1994) was used rather than the two-step method of Joyner and Boore (1981). SEA99 has motions that are as much as 20% higher than those of SEA96 at short distances (5-30 km), and SEA99's motions are about 20% lower than SEA96 at longer periods (1.0-2.0 sec) and larger distance (40-100 km). SEA99 dispersions are significantly less than those of SEA96. SEA99 rock motions are on the average 20% lower than motions predicted by Boore et al. (1994) except for short distances at periods around 1.0 sec, where SEA99 motions exceed those predicted by Boore et al. (1994) by as much as 10%. Comparison of ground motions from normal-faulting and strike-slip events in our data set indicates that normal-faulting horizontal ground motions are not significantly different from extensional regime strike-slip ground motions.
One simple way of accounting for site conditions in calculating seismic hazards is to use the shear-wave velocity in the shallow subsurface to classify materials. The average shear-wave velocity to 30 m (Vs30) has been used to develop site categories that can be used for modifying a calculated ground motion to account for site conditions. We have prepared a site-category map of California by first classifying the geologic units shown on 1:250,000 scale geologic maps. Our classification of geologic units is based on Vs30 measured in 556 profiles and geological similarities between units for which we have Vs data and the vast majority of units for which we have no data. We then digitized the geologic boundaries from those maps that separated units with different site classifications. Vs data for California show that several widespread geologic units have ranges of Vs30 values that cross the boundaries between NEHRP-UBC site categories. The Franciscan Complex has Vs30 values across UBC categories B and C with a mean value near the boundary between those two categories. Older alluvium and late Tertiary bedrock have Vs30 values that range from about 300 to about 450 m/sec, across the boundary between categories C and D. To accommodate these units we have created intermediate categories, which we informally call BC and CD. Geologic units that have, or are interpreted to have, Vs30 values near the boundary of the UBC categories are placed in these intermediate units. In testing our map against the available Vs30 measurements, we have found that 74% of the measured Vs30 values fall within the range assigned to the Vs30 category where they fall on the map. This ratio is quite good considering the inherent problems in plotting site-specific data on a regional map and the variability of physical properties in geologic units. We have also calculated the mean and distribution of Vs30 for each of our map units and prepared composite profiles, showing the variation of Vs in the upper 100 m from the available Vs data. These data show that the map categories that we have defined based on geologic units have different Vs properties that can be taken into account in calculating seismic hazards.
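A classifier built on these categories, including the informal intermediate BC and CD classes, might look like the following; the ~300-450 m/s CD band follows the abstract, while the exact widths of the other intermediate band edges are illustrative assumptions:

```python
def site_class(vs30):
    """Assign a NEHRP-UBC-style site category from Vs30 (m/s), including
    the intermediate BC and CD classes of Wills et al. Band edges for
    B/BC and C boundaries are illustrative, not the mapped values."""
    if vs30 > 1500.0:
        return "A"
    if vs30 > 900.0:
        return "B"
    if vs30 > 600.0:
        return "BC"   # straddles the 760 m/s B/C boundary (e.g. Franciscan)
    if vs30 > 450.0:
        return "C"
    if vs30 > 300.0:
        return "CD"   # older alluvium, late Tertiary bedrock (~300-450 m/s)
    if vs30 > 180.0:
        return "D"
    return "E"

classes = {v: site_class(v) for v in (2000, 760, 500, 400, 250, 150)}
```

In a hazard calculation the category (or its representative Vs30) selects the site term of the attenuation relation.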
A previous attempt to integrate geological, geodetic, and observed seismicity data into a probabilistic-hazard source model predicted a rate of magnitude 6 to 7 earthquakes significantly greater than that observed historically. One explanation was that the discrepancy, or apparent earthquake deficit, is an artifact of the upper magnitude limit built into the model. This was controversial, however, because removing the discrepancy required earthquakes larger than are seen in the geological record and larger than implied from empirical relationships between fault dimension and magnitude. Although several articles have addressed this issue, an alternative, integrated source model without an apparent deficit has not yet appeared. We present a simple geologically based approach for constructing such a model that agrees well with the historical record and does not invoke any unsubstantiated phenomena. The following factors are found to be influential: the b-value and minimum magnitude applied to Gutenberg-Richter seismicity; the percentage of moment released in characteristic earthquakes; a round-off error in the moment-magnitude definition; bias due to historical catalog incompleteness; careful adherence to the conservation of seismic moment rate; uncertainty in magnitude estimates obtained from empirical regressions; allowing multi-segment ruptures (cascades); and the time dependence of recurrence rates. The previous apparent deficit is shown to have resulted from a combination of these factors. None alone caused the problem nor solves it. The model presented here is relatively robust with respect to these factors.
Attenuation relationships are presented for peak acceleration and response spectral accelerations from shallow crustal earthquakes. The relationships are based on strong motion data primarily from California earthquakes. Relationships are presented for strike-slip and reverse-faulting earthquakes, rock and deep firm soil deposits, earthquakes of moment magnitude M 4 to 8+, and distances up to 100 km.
In this paper we summarize our recently-published work on estimating horizontal response spectra and peak acceleration for shallow earthquakes in western North America. Although none of the sets of coefficients given here for the equations are new, for the convenience of the reader and in keeping with the style of this special issue, we provide tables for estimating random horizontal-component peak acceleration and 5 percent damped pseudo-acceleration response spectra in terms of the natural, rather than common, logarithm of the ground-motion parameter. The equations give ground motion in terms of moment magnitude, distance, and site conditions for strike-slip, reverse-slip, or unspecified faulting mechanisms. Site conditions are represented by the shear velocity averaged over the upper 30 m, and recommended values of average shear velocity are given for typical rock and soil sites and for site categories used in the National Earthquake Hazards Reduction Program's recommended seismic code provisions. In addition, we stipulate more restrictive ranges of magnitude and distance for the use of our equations than in our previous publications. Finally, we provide tables of input parameters that include a few corrections to site classifications and earthquake magnitude (the corrections made a small enough difference in the ground-motion predictions that we chose not to change the coefficients of the prediction equations).
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/√2. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of “interaction” effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the “clock change” method and characteristically decay inversely with elapsed time after the perturbation.
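The Brownian passage-time distribution is the inverse Gaussian, so its density, CDF, and hazard rate can be evaluated in closed form; the mean recurrence interval (200 yr) and aperiodicity (0.5) below are illustrative values:

```python
import math

def _phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_pdf(t, mu, alpha):
    """Brownian passage-time (inverse Gaussian) density; mu is the mean
    recurrence interval, alpha the aperiodicity (coefficient of variation)."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha):
    """Inverse-Gaussian CDF with shape parameter lam = mu / alpha**2."""
    lam = mu / alpha**2
    a = math.sqrt(lam / t)
    return _phi(a * (t / mu - 1.0)) + \
        math.exp(2.0 * lam / mu) * _phi(-a * (t / mu + 1.0))

def bpt_hazard(t, mu, alpha):
    """Instantaneous rupture rate at elapsed time t, given no event so far."""
    return bpt_pdf(t, mu, alpha) / (1.0 - bpt_cdf(t, mu, alpha))

# Property (1): essentially zero hazard immediately after an event;
# the hazard then rises toward a maximum near the mean recurrence time.
h_early = bpt_hazard(5.0, 200.0, 0.5)
h_later = bpt_hazard(150.0, 200.0, 0.5)
```

Conditional rupture probabilities over a forecast window follow by differencing the CDF and normalizing by the survival probability at the start of the window.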
We present attenuation relationships for peak ground acceleration and response spectral acceleration for subduction zone interface and intraslab earthquakes of moment magnitude M 5 and greater and for distances of 10 to 500 km. The relationships were developed by regression analysis using a random effects regression model that addresses criticism of earlier regression analyses of subduction zone earthquake motions. We find that the rate of attenuation of peak motions from subduction zone earthquakes is lower than that for shallow crustal earthquakes in active tectonic areas. This difference is significant primarily for very large earthquakes. The peak motions increase with earthquake depth and intraslab earthquakes produce peak motions that are about 50 percent larger than interface earthquakes.