Standardized Measurement Approach for Operational risk:
Pros and Cons
Gareth W. Peters1, Pavel V. Shevchenko1,2, Bertrand Hassani3 and Ariane Chapelle1,4
1 Department of Statistical Sciences, University College London, UK, email: gareth.peters@ucl.ac.uk
2 CSIRO Australia, email: pavel.shevchenko@csiro.au
3Université Paris 1 Panthéon-Sorbonne, email: bertrand.hassani@univ-paris1.fr
4 Department of Computer Science, University College London UK, email: a.chapelle@ucl.ac.uk
3 June 2016
This response has been put together by academics and in total independence of any corporate or
individual interests. Our results are solely driven by scientific analysis and presented in the interest of
the financial and business community, both the regulated entities and the regulators alike. The response
addresses the Standardised Measurement Approach (SMA) proposed in the Basel Committee for
Banking Supervision consultative document “Standardised Measurement Approach for operational
risk” (issued in March 2016 for comments by 3 June 2016) [BCBSd355,2016]; and closely related
Operational risk Capital-at-Risk (OpCar) model proposed in the Committee consultative document
“Operational risk – revisions to the simpler approaches”, October 2014 [BCBSd291,2014].
The structure of this response involves a collection of summary results and comments for studies
performed on the proposed SMA model which include:
• Capital instability;
• Capital sensitivity;
• Reduction of risk responsivity and interpretability;
• Incentivized risk taking;
• Discarding key sources of Operational risk data;
• Possibility of super additive capital under SMA.
The detailed analysis of these points is developed in the manuscript [Peters et al, 2016].
The response then concludes with suggestions relating to maintaining the AMA internal model
framework with standardization recommendations that could be considered to unify internal
modelling of Operational risk.
SMA Introduces Capital Instability
Our analysis of the SMA and OpCar model shows that SMA fails to achieve the objective of capital
stability. Consider a simple representative model for a bank’s annual Operational risk loss process
comprised of the aggregation of two generic loss processes, one high frequency with low severity loss
amounts and the other corresponding to low frequency and high severity loss amounts given by
Poisson-Gamma and Poisson-Lognormal models respectively. We set the business indicator (BI)
constant at Euro 2 billion, halfway within the interval for bucket 2 of the SMA, kept the model
parameters static over time, and simulated a history of 1,000 years of loss data for three differently sized
banks (small, medium and large), using different parameter settings for the loss models to characterize
such banks. For a simple analysis we set the small bank to have an average annual loss of the order of
Euro tens of millions, the medium bank of the order of Euro hundreds of millions, and the large bank of
the order of Euro 1 billion. We then studied the variability that may arise in the capital under the SMA
formulation, under the optimal scenario that models did not change, model parameters were not
recalibrated and the business environment did not
change significantly, in the sense that the BI was kept constant. In this case we observe the core variation
that arises purely from the loss history experience of many banks of the three different sizes over time.
Our analysis shows that a given institution can experience the situation in which its capital more
than doubles from one year to the next, without any changes to the parameters, the model or the BI
structure. Annual variation can be as large as 2 times the long-term average capital (Figure 1).
It follows from the results in this section's analysis that two banks, with the same risk profile, can
produce SMA capital numbers differing by a factor of more than 2.
Details of the analysis are available in the complete paper posted on ssrn.com (see [Peters et al, 2016])
and the code used is available upon request. In summary, the simulation takes the case of BI fixed over
time, with the loss model for the institution fixed according to two loss processes given by Poisson(λ)-
Gamma(α,β) and Poisson(λ)-Lognormal(µ,σ). In this example, Gamma(α,β) is the distribution of the loss
severities, with mean αβ and variance αβ², and Lognormal(µ,σ) is the distribution of severities with the
mean of the log severity equal to µ and the variance of the log severity equal to σ².
The institution's losses are set to number around 1,000 events per year on average, with 1% coming from
the heavy tailed Poisson-Lognormal component. We perform two case studies, one in which the
shape parameter of the heavy tailed Lognormal component is σ = 2.5 and the other in which it is σ = 2.8.
We summarize the settings for the two cases below in Table 1 and Table 2.
The ideal situation that would indicate that SMA was not producing capital figures which were too
volatile would be if each of the sub-figures below in Figure 1 and Figure 2 were very closely constrained
around 1. However, as the analysis shows, the year-to-year variability in capital can be significant for
institutions of all sizes. In particular, medium and large institutions both demonstrated that in
any given year under the SMA, the capital required to be held could double.
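A minimal sketch of this simulation is given below, assuming the Loss Component weights and thresholds and the ILM and SMA formulas quoted later in this response; the BI component (BIC) is treated as a fixed placeholder input, since the BI-to-BIC bucket mapping of the consultative document is not reproduced here, and the parameters follow the medium bank of Case 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_annual_losses(n_years, lam_g, alpha, beta, lam_ln, mu, sigma):
    """Per-year individual losses (in euros) from a Poisson-Gamma plus a
    Poisson-Lognormal loss process."""
    years = []
    for _ in range(n_years):
        gamma_losses = rng.gamma(alpha, beta, size=rng.poisson(lam_g))        # high frequency, low severity
        lognormal_losses = rng.lognormal(mu, sigma, size=rng.poisson(lam_ln))  # low frequency, high severity
        years.append(np.concatenate([gamma_losses, lognormal_losses]))
    return years

def loss_component(window, L=10e6, H=100e6):
    """Loss Component over a window of years: 7x average total annual loss,
    7x average annual loss above EUR 10m, 5x average annual loss above EUR 100m
    (weights and thresholds as quoted later in this response)."""
    avg_total = np.mean([x.sum() for x in window])
    avg_above_L = np.mean([x[x > L].sum() for x in window])
    avg_above_H = np.mean([x[x > H].sum() for x in window])
    return 7 * avg_total + 7 * avg_above_L + 5 * avg_above_H

def sma_capital(bic, lc):
    """SMA capital for buckets 2-5: 110m + (BIC - 110m) * ILM, with ILM = ln(exp(1) - 1 + LC/BIC)."""
    return 110e6 + (bic - 110e6) * np.log(np.exp(1) - 1 + lc / bic)

# Medium bank of Case 1 (Table 1); the BIC value below is an illustrative placeholder only.
years = simulate_annual_losses(1000, lam_g=990, alpha=1.0, beta=100_000,
                               lam_ln=10, mu=12.0, sigma=2.5)
capital = np.array([sma_capital(bic=0.3e9, lc=loss_component(years[t - 10:t]))
                    for t in range(10, len(years))])
print("max year-on-year capital relative to long term average:", capital.max() / capital.mean())
```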
Table 1. Case 1: σ = 2.5.
Poisson-Lognormal loss component: (λ, σ, µ) = (10, 2.5, {10; 12; 14})
Poisson-Gamma loss component: (λ, α, β) = (990, 1, {10,000; 100,000; 500,000})
Mean annual loss (Euro million): 15 (small), 136 (medium), 769 (large)
Annual loss capital, 99.9% VaR (Euro million): 260 (small), 1,841 (medium), 14,610 (large)
Table 2. Case 2: σ = 2.8.
Poisson-Lognormal loss component: (λ, σ, µ) = (10, 2.8, {10; 12; 14})
Poisson-Gamma loss component: (λ, α, β) = (990, 1, {10,000; 100,000; 500,000})
Mean annual loss (Euro million): 21 (small)
Annual loss capital, 99.9% VaR (Euro million): 772 (small), 5,457 (medium), 41,975 (large)
Figure 1. Ratio of the SMA capital to the long term average (Case 1)
Figure 2. Ratio of the SMA capital to the long term average (Case 2)
These results demonstrate examples of typical variability in capital that can be experienced with the
new SMA formulation.
Understanding Variability in BI when SMA matches AMA
As a second study of the SMA capital instability we again consider a Poisson-Lognormal loss process
model Poisson(λ)-Lognormal(µ,σ), except that in this case, instead of fixing the BI to the midpoint of Bucket
2 of the SMA formulation, we numerically solve for the BI that would match the SMA capital to
the Value-at-Risk for a Poisson-LogNormal Loss Distributional Approach (LDA) model at the annual
99.9% quantile level.
In other words, for these simulations we obtain the BI such that the LDA capital will match SMA capital
in the long term. This is achieved by solving the following non-linear equation numerically via root
search for the BI. That is we solve for the BI such that SMA(BI) = VaR(0.999).
For this experiment we computed the 0.999 VaR under the Poisson-Lognormal model using the single loss
approximation

VaR(0.999) ≈ exp( µ + σ Φ^(-1)( 1 − (1 − 0.999)/λ ) ) + λ exp( µ + σ²/2 ),

where Φ(·) denotes the standard Normal cdf and Φ^(-1)(·) its inverse.
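A hedged sketch of this root search is given below; the long term Loss Component uses the closed-form expressions given later in this response, while the BI-to-BIC mapping is a simple placeholder, since the bucket-wise marginal coefficients of the consultative document are not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def sla_var_999(lam, mu, sigma):
    """Single loss approximation (with mean correction) of the 0.999 annual VaR
    for a Poisson(lam)-Lognormal(mu, sigma) LDA model, as in the formula above."""
    return (np.exp(mu + sigma * norm.ppf(1 - (1 - 0.999) / lam))
            + lam * np.exp(mu + 0.5 * sigma**2))

def long_term_lc(lam, mu, sigma, L=10e6, H=100e6):
    """Long term average Loss Component (7/7/5 weights, EUR 10m/100m thresholds),
    using the truncated expected loss expressions given later in this response."""
    el = lam * np.exp(mu + 0.5 * sigma**2)
    above = lambda T: el * norm.cdf((mu + sigma**2 - np.log(T)) / sigma)
    return 7 * el + 7 * above(L) + 5 * above(H)

def bi_to_bic(bi):
    # Placeholder: the bucket-wise marginal BI coefficients are not reproduced
    # here; a flat 11% rate is assumed purely for illustration.
    return 0.11 * bi

def sma_minus_var(bi, lam, mu, sigma):
    bic = bi_to_bic(bi)
    ilm = np.log(np.exp(1) - 1 + long_term_lc(lam, mu, sigma) / bic)
    return 110e6 + (bic - 110e6) * ilm - sla_var_999(lam, mu, sigma)

# Implied BI (in euros) for lam=10, mu=12, sigma=2.5; Table 3 uses the actual
# bucket mapping, so values under this placeholder mapping will differ.
implied_bi = brentq(sma_minus_var, 1e9, 1e13, args=(10, 12.0, 2.5))
print(f"implied BI under the placeholder mapping: {implied_bi / 1e9:.2f} bn")
```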
The results of this analysis are presented in Table 3 for λ = 10; the Lognormal σ and µ parameters are
varied across the table to produce different implied BI values.
Table 3. Implied BI in billions (λ = 10).
µ \ σ    1.5     1.75    2       2.25    2.5     2.75     3
10       0.06    0.14    0.36    0.89    2.41    5.73     13.24
12       0.44    1.05    2.61    6.12    14.24   32.81    72.21
14       2.52    5.75    13.96   33.50   76.63   189.22   479.80
In addition, we consider a second capital instability study where we use the BI obtained from matching
the long term average SMA capital with the long term LDA capital, as described above for an example
generated by Poisson(10)-Lognormal(µ = 12, σ = 2.5) with corresponding solved BI = 14.714 bln (bucket
4). In this case the SMA capital (in the idealized setting of averaging over many years rather than 10) is
1.87 bln, which is about the same as the LDA 0.999 quantile of 1.87 bln (calculated through Monte Carlo).
The year-on-year variability in the capital for this combination of implied BI and Poisson-Lognormal loss
model is shown in Figure 3. It shows that again we obtain capital instability, with capital doubling from
year to year relative to the long term average SMA capital.
Figure 3. Ratio of the SMA capital to the long term average.
SMA is Excessively Sensitive to Behaviour of Dominant Loss Process
In this section we consider an institution with a wide range of different types of Operational risk loss
processes present in each of its business units and risk types. As in our first study above, we split these
loss processes in a stylized manner into two categories: high frequency with low severity, and low
frequency with high severity. As in the previous illustration we consider a simplified
banking structure with two loss processes given by Poisson(λ)-Gamma(α,β) and Poisson(λ)-
Lognormal(µ,σ). In this study we consider the sensitivity of SMA capital to the dominant loss process.
More precisely, we study the sensitivity of SMA capital to the parameter that dictates how heavy the tail
of the most extreme loss process will be. This analysis is based on the simulation setup presented in
Table 4 with simulations performed over 1,000 years.
Table 4. Case 3: Varying σ.
Poisson-Lognormal loss component: (λ, σ, µ) = (10, {2; 2.25; 2.5; 2.75; 3}, 14)
Poisson-Gamma loss component: (λ, α, β) = (990, 1, 500,000)
Figure 4. Ratio of the SMA capital to the long term average (Case 3 SMA sensitivity)
These results in Figure 4 can be interpreted to mean that banks with more extreme loss experiences as
indicated by heavier tailed dominant loss processes (increasing σ) tend to have significantly greater
capital instability compared to banks with less extreme loss experiences. Importantly, these findings
demonstrate how non-linear this increase in SMA capital can be as the heaviness of the dominant loss
process tail increases. For instance, banks with relatively less heavy tailed dominant loss processes
(σ = 2) tend to have year-on-year capital variability of between 1.1 and 1.4 times the long term average
SMA capital. Already this is not a good outcome. However, banks with relatively heavy tailed dominant
loss processes (σ = 2.5, 2.75 or 3) tend to have excessively unstable year-on-year capital figures, with
variation in capital as large as 3 to 6 times the long term average SMA capital.
Furthermore, it is clear that when one considers each boxplot as representing a population of banks
with similar dominant loss process characteristics, then as the tail-heaviness of the dominant loss
process in each population increases, the population distribution of capital becomes increasingly
skewed and exhibits increasing kurtosis in the right tail; our findings show that this can result in
excessive year-on-year variability in capital for banks with heavy tailed dominant loss
processes.
Therefore, SMA fails to achieve the claimed objective of robust capital estimation. Capital
produced by the proposed SMA approach will be neither stable nor robust, with robustness worsening
as the severity of Operational risk increases. In other words, banks with higher severity Operational
risk exposures will be substantially worse off under the SMA approach with regard to capital sensitivity.
SMA Reduces Risk Responsivity and Interpretability
One can consider that the SMA capital is less responsive to risk drivers and to the variation in loss
experience observed in a bank at the granularity of the Basel II 56 Business Unit/Risk Type units of
measure.
This is due to the naive approach of modelling at the level of granularity assumed by the SMA, which
captures variability only at the institution level and not, explicitly, the intra-institution variability at
division or business unit levels. Choosing to model at institution level, rather than at the units of
measure or granularity of the 56 Basel categories, reduces model interpretability and reduces the risk
responsivity of the capital.
Conceptually, this stems from the replacement of the Advanced Measurement Approach (AMA) by
the SMA's top-down formulation, which reduces Operational risk modelling to a single unit of
measure, as if all operational losses followed a single generating mechanism. This is equivalent
to considering that earthquakes, cyber-attacks and human errors are all generated by the same drivers
and manifest in the loss model and loss history in the same manner as other losses that are much more
frequent and have lower consequence, such as credit card fraud, when viewed from the institution level
loss experience. It follows quite obviously that the radical simplification and aggregation of such
heterogeneous risks in such a simplified model cannot claim the benefit of risk-sensitivity, even
remotely.
Therefore, SMA fails to achieve the claimed objective of capital risk sensitivity. Capital produced
by the proposed SMA approach will be neither stable nor related to the risk profile of an institution.
SMA Incentivizes Enhanced Risk-Taking
Besides its extreme conceptual flaws, the SMA induces risk-taking behaviors, failing to achieve the
Basel Committee objectives of stability and soundness of financial institutions.
Moral hazard and other unintended consequences include:
- More risk-taking: without the possibility of capital reduction for better risk management, and in the
face of increased funding costs due to the rise in capital, it is predictable that financial
institutions will raise their risk-taking to a level sufficient to pay for the increased cost
of the new fixed capital. The risk appetite of a financial institution would mechanically increase.
This effect goes against the Basel Committee objective of a safe and secure financial system.
- Denying loss events: whilst incident data collection has been a constant effort for over a decade in
every institution, large or small, the SMA is the most formidable disincentive to report losses.
There are many opportunities to compress historical losses, such as ignoring them, slicing them, or
transferring them to other risk categories. The wish expressed in the consultation that “Banks should use
10 years of good-quality loss data” is actually meaningless if the collection can be gamed. Besides, what
about new banks or BIA banks which do not have any loss data collection process as of today?
- Hazard of reduced provisioning activity: provisions, which should be a substitution for
capital, are vastly discouraged by the SMA, as they are penalized twice, counted both in the BI
and in the losses, and not accounted for as a capital reduction. The SMA captures both the
expected loss and the unexpected loss, whereas regulatory capital should only reflect the
unexpected loss. We believe that this confusion might come from the use of the OpCar model as
a benchmark because the OpCar captures both equally.
The SMA states in the definition of “Gross loss, net loss, and recovery definitions” on page 10
Section 6.2 of [BCBSd355,2016] under item (c) that the loss data set gross loss and net loss should
include “Provisions or reserves accounted for in the P&L against the potential operational loss
impact”. This clearly indicates the double counting of this component, since such provisions
will enter both the BI, through the P&L, and the loss data component of the SMA capital.
- Ambiguity in provisioning and resulting capital variability: the new guidelines on
provisioning under the SMA framework follow a similar general concept to those that recently came
into effect in credit risk with the International Financial Reporting Standard IFRS 9, set forward
by the International Accounting Standards Board (IASB), which completed the final element of its
comprehensive response to the financial crisis with the publication of IFRS 9 Financial
Instruments in July 2014. The IFRS 9 guidelines explicitly outline in Phase 2 an impairment
framework which specifies in a standardized manner how to deal with delayed recognition of
(in this case) credit losses on loans (and other financial instruments).
IFRS9 achieves this through a new “…expected loss impairment model that will require more
timely recognition of expected credit losses. Specifically, the new Standard requires entities to
account for expected credit losses from when financial instruments are first recognised and it
lowers the threshold for recognition of full lifetime expected losses.”
However, the SMA version of such a provisioning concept for Operational risk
losses fails to provide such a standardized and rigorous approach. Instead, the SMA framework
simply states that loss databases should now include
“Losses stemming from operational risk events with a definitive financial impact, which are
temporarily booked in transitory and/or suspense accounts and are not yet reflected in the P&L
(“pending losses”). Material pending losses should be included in the SMA loss data set within a
time period commensurate with the size and age of the pending item.”
However, unlike the more specific IFRS9 accounting standards, under the SMA there is a level of
ambiguity. Furthermore, this ambiguity can now propagate directly into the SMA capital
calculation, causing potential capital variability and instability.
For instance, there are no specific guidance or regulatory requirements standardizing the manner
in which a financial institution decides what is to be considered a definitive financial impact,
what threshold should be used to decide on the existence of a “material pending
loss”, or what rules govern the time periods for inclusion of
such pending losses in an SMA loss data set and therefore in the capital. The current guidance
simply states “Material pending losses should be included in the SMA loss data set within a time
period commensurate with the size and age of the pending item”. This is too imprecise and may
lead to manipulation of provisions reporting and categorization that will directly reduce the SMA
capital over the averaged time periods in which the loss data component is considered.
Furthermore, if different financial institutions adopt differing provisioning rules, the capital
obtained for two banks with identical risk appetites and similar loss experience could differ
substantially just as a result of their provisioning practices.
- Imprecise Guidance on Timing Loss Provisioning:
The SMA guidelines also introduce an aspect of “Timing Loss Provisioning” in which they state:
“Negative economic impacts booked in a financial accounting period, due to operational risk
events impacting the cash flows or financial statements of previous financial accounting periods
(“timing losses”). Material “timing losses” should be included in the SMA loss data set when they
are due to operational risk events that span more than one financial accounting period and give
rise to legal risk.”
However, we would argue that for standardization of a framework there needs to be more
explicit guidance as to what constitutes a “Material timing loss”. Otherwise, different timing loss
provisioning approaches will result in different loss databases and consequently can result in
differing SMA capital just as a consequence of the provisioning practice adopted. In addition, the
ambiguity of this statement leaves it unclear whether such losses may be accounted for twice.
- Grouping of Losses: Under previous AMA internal modelling approaches the unit of
measurement or granularity of the loss modelling was reported according to the 56 Business
Unit and Risk Type categories specified in Basel II. However, under the SMA the unit of measure
is just at the institution level so the granularity of the loss processes modelling and
interpretation is lost. This has consequences when it is considered in light of the new SMA
requirement that
“Losses caused by a common operational risk event or by related operational risk events over
time must be grouped and entered into the SMA loss data set as a single loss.”
Previously, in internal modelling losses within a given Business Unit Risk Type would be
recorded as a random number (frequency model) of individual independent loss amounts
(severity model). Then, for instance under an LDA model such losses would be aggregated only
as a compound process and the individual losses would not be “grouped” except on the annual
basis and not on the per-event basis. However, there seems to be a marked difference in the SMA
loss data reporting on this point: under the SMA it is proposed to aggregate the individual losses
and report them in the loss database as a “single grouped” loss amount. This is not advisable
from a modelling, interpretation or practical risk management perspective.
Furthermore, the SMA guidance states
“The bank’s internal loss data policy should establish criteria for deciding the circumstances,
types of data and methodology for grouping data as appropriate for its business, risk
management and SMA regulatory capital calculation needs.”
One could argue that if the aim of the SMA was to “standardize” Operational risk loss modelling in
order to make capital less variable due to internal modelling decisions, then it is hard to see
how this will be achieved with imprecise guidance such as that provided above. One could
argue that the above generic statement on criteria establishment basically removes the internal
modelling framework of AMA and replaces it with internal heuristic (non-model based, non-
scientifically verifiable) rules to “group” data. This has the potential to result in even greater
variability in capital than was experienced with non-standardized AMA internal models. At least
under AMA internal modelling, in principle the statistical models could be scientifically
criticized.
- Ignoring the future: all forward-looking aspects of risk identification, assessment and
mitigation, such as scenarios and emerging risks, have disappeared in the new consultation. This
in effect risks setting banking institutions back in their progress towards a better understanding
of the threats they face: even where such threats are increasing in frequency and severity, and the
bank's exposure to them is increasing due to business practices, this cannot be reflected in the SMA
framework capital. In that sense, the SMA is only backward looking.
SMA Fails to Utilize Range of Data Sources and Fails to Provide Risk Management Insight
Both Basel II and Basel III regulations emphasize the significance of incorporating a variety of loss data
into Operational risk modelling and therefore ultimately into capital calculations. The four primary data
sources to be included are Internal Loss Data, External Loss Data, Scenario Analysis and Business
Environment and Internal Control Factors (BEICF); of these, only the first is used in the SMA.
To understand the importance of BEICF data in the form of Key Risk Indicators (KRIs), Key Performance
Indicators (KPIs) and Key Control Indicators (KCIs) we first briefly recall their properties.
A KRI is a metric of a risk factor. It provides information on the level of exposure to a given operational
risk of the organization at a particular point in time. KRIs are useful tools for business lines managers,
senior management and Boards to help monitor the level of risk taking in an activity or an organization,
with regard to their risk appetite.
Performance indicators, usually referred to as KPIs, measure performance or the achievement of
targets. Control effectiveness indicators, usually referred to as KCIs, are metrics that provide
information on the extent to which a given control is meeting its intended objectives. Failed tests on key
controls are natural examples of effective KCIs.
KPIs, KRIs and KCIs overlap in many instances, especially when they signal breaches of thresholds:
poor performance often becomes a source of risk; poor technological performance such as system
downtime, for instance, becomes a KRI for errors and data integrity. KPIs of failed performance provide
a good source of potential risk indicators. Failed KCIs are even more obvious candidates for preventive
KRIs: a key control failure always constitutes a source of risk.
Indicators can be used by organizations as a means of control to track changes in their exposure to
operational risk. When selected appropriately, indicators ought to flag any change in the likelihood or
the impact of a risk occurring.
For financial institutions that calculate and hold operational risk capital under more advanced
approaches such as the previous AMA internal model approaches, KPIs, KRIs and KCIs are advisable
metrics to capture BEICF. While the definition of BEICF differs from one jurisdiction to another and in
many cases is specific to individual organizations, these factors must:
• be risk sensitive (here the notion of risk goes beyond incidents and losses);
• provide management with information on the risk profile of the organization;
• represent meaningful drivers of exposure which can be quantified; and
• be used across the entire organization.
While some organizations include the outputs of their risk and control self-assessment programs under
their internal definition of BEICF’s, indicators are an appropriate mechanism to satisfy these
requirements, implying that there is an indirect regulatory requirement to implement and maintain an
active indicator program, see discussion in [Chapelle, 2013].
For instance, incorporating BEICF’s into Operational risk modelling is a reflection of the modelling
assumption that one can see Operational risk as a function of the control environment. If the control
environment is fair and under control, large operational losses are less likely to occur and Operational
risk can be seen as under control. Therefore, understanding the firm’s business processes, mapping the
risks on these processes and assessing how the controls implemented behave is the fundamental role
of the Operational risk manager. However, the SMA does not provide any real incentive mechanism
firstly for undertaking such a process and secondly for incorporating this valuable information into the
capital calculation.
In terms of using pieces of information such as BEICF’s and Scenario data, since under the SMA
framework the level of model granularity is only at institution level, it does not easily lend itself to
incorporation of these key Operational risk data sources.
To business lines managers, KRIs help to signal a change in the level of risk exposure associated with
specific processes and activities. For quantitative modellers, key risk indicators are a way of including
BEICF into operational risk capital. However, since BEICF data does not form a component of required
data for the SMA model, there is no longer a regulatory requirement or incentive under the proposed
SMA framework to make efforts to develop such BEICF data sources. Therefore, this not only reduces
the effectiveness of the risk models through the loss of a key source of information, but in addition the
utility of such data for risk management practitioners and managers is reduced as this data is no longer
collected with the same required scrutiny, including validation, data integrity and maintenance and
reporting that was previously required for AMA internal models using such data.
These key sources of Operational risk data are not included in the SMA and furthermore cannot easily
be incorporated into an SMA framework even if there were a desire to do so due to the level of
granularity implied by the SMA. This makes capital calculations less risk sensitive. Furthermore, the
lack of scenario based data incorporated into the SMA model makes it less forward looking and
anticipatory as an internal model based capital calculation framework.
SMA Can Be a Super-Additive Capital Calculation
The SMA seems to have the unfortunate feature that, in a range of jurisdictions, the capital it produces
at group level, compared to the sum over the constituent entities, can be super-additive. To
understand this, the following two examples should be helpful.
In each case, consider two banks with identical BI and identical Loss Component (LC). However, the
first bank has only one entity whereas the second has two entities. The two entities of the second bank
have the same BI and the same LC as each other, each equal to half the BI and half the LC of the first,
joint bank.
In case one (Table 5) we consider the situation of a bucket shift, where the SMA capital obtained for the
joint bank is 5,771 million while the sum of the SMA capital obtained for the two entities of the second
bank is only 5,387 million. In this example, the SMA does not capture a diversification benefit; on the
contrary, it assumes that the global impact of an incident is larger than the sum of the parts.
In the second case (Table 6) we consider no bucket shift between the joint bank and the two entity bank.
Again in this case we see that the joint bank has an SMA capital of 11,937 million, whereas the two entity
bank has an SMA capital of 10,674 million. Again there is a super-additive property.
Table 5: Super-additivity issue illustrated (in million), bucket shift case.
Bank 1 (single group entity), BI = 32,000: BIC = 6,920; LC = 4,000; SMA = 5,771.
Bank 2 (two identical entities), BI = 32,000: each entity has BIC = 3,120, LC = 2,000 and SMA = 2,694; Sum of SMA = 5,387.
Table 6: Super-additivity issue illustrated (in million), same bucket case.
Bank 1 (single group entity), BI = 70,000: BIC = 17,940; LC = 4,000; SMA = 11,937.
Bank 2 (two identical entities), BI = 70,000: each entity has BIC = 7,790, LC = 2,000 and SMA = 5,337; Sum of SMA = 10,674.
To conclude this section we state a mathematical expression that a bank could utilize in business
structure planning to decide in the long term if it will be advantageous under the new SMA framework
to split into two entities (or more) or remain in a given joint structure, according to the cost of funding
Tier I SMA capital.
For illustration we can assume the joint institution is simply modelled by a Poisson-Lognormal model
Poisson(λJ)-Lognormal(µJ,σJ), with parameters sub-indexed by J for the joint institution, and with the BI
component of the joint institution denoted by BICJ. Furthermore, we assume that if the institution had
split into two separate entities for Tier I capital reporting purposes then each would have its own stylized
annual loss modelled by an independent Poisson-Lognormal model: Entity 1 denoted Poisson(λ1)-
Lognormal(µ1,σ1) and Entity 2 denoted Poisson(λ2)-Lognormal(µ2,σ2), with BIC1 and BIC2,
respectively.
Consider the long term SMA capital behavior averaged over the long term history of the SMA capital for
each case, joint and disaggregated business models. Then, according to a long term analysis of these
stylized Poisson-Lognormal models, the following expressions can be used to determine the point at
which the SMA capital would be super additive. If it is super additive in the long term, this would indicate
that there is an advantage to splitting the institution, in the long run, into disaggregated separate
components. Furthermore, the expression provided allows one to maximize the long term SMA capital
reduction that can be obtained under such a partitioning of the institution.
Note: the following calculations are based on truncated Poisson expected loss expressions for
Lognormal models, see details in [Peters and Shevchenko, 2015].
Joint Institution (Long term behavior):
Long Term Average Loss Component (LTALC) is given by

LTALCJ(λJ, µJ, σJ) = 7 λJ exp(µJ + σJ²/2)
                     + 7 λJ exp(µJ + σJ²/2) Φ( (µJ + σJ² − ln L) / σJ )
                     + 5 λJ exp(µJ + σJ²/2) Φ( (µJ + σJ² − ln H) / σJ ),

where Φ(·) denotes the standard Normal CDF, L is Euro 10 million and H is Euro 100 million.
Therefore the Long Term ILM (LTILM) is given by

LTILMJ(λJ, µJ, σJ) = ln( exp(1) − 1 + LTALCJ(λJ, µJ, σJ) / BICJ ).

This is then used to calculate the Long Term SMA (LTSMA), which is an explicit function of the LDA model parameters (λJ, µJ, σJ):

LTSMAJ(λJ, µJ, σJ) = BICJ, if Bucket 1;
LTSMAJ(λJ, µJ, σJ) = 110 Mln + (BICJ − 110 Mln) × LTILMJ(λJ, µJ, σJ), if Buckets 2-5.
Disaggregated Institution {i=1 or i=2} (Long term behavior):
Long Term Average Loss Component (LTALC) is given by

LTALCi(λi, µi, σi) = 7 λi exp(µi + σi²/2)
                     + 7 λi exp(µi + σi²/2) Φ( (µi + σi² − ln L) / σi )
                     + 5 λi exp(µi + σi²/2) Φ( (µi + σi² − ln H) / σi ),

where Φ(·) denotes the standard Normal CDF, L is Euro 10 million and H is Euro 100 million.

Therefore the Long Term ILM (LTILM) is given by

LTILMi(λi, µi, σi) = ln( exp(1) − 1 + LTALCi(λi, µi, σi) / BICi ).

This is then used to calculate the Long Term SMA (LTSMA), which is an explicit function of the LDA model parameters (λi, µi, σi):

LTSMAi(λi, µi, σi) = BICi, if Bucket 1;
LTSMAi(λi, µi, σi) = 110 Mln + (BICi − 110 Mln) × LTILMi(λi, µi, σi), if Buckets 2-5.
Hence, the SMA Super Additive Capital Condition becomes:
LTSMAJ(λJ, µJ, σJ) − LTSMA1(λ1, µ1, σ1) − LTSMA2(λ2, µ2, σ2) > 0.
Using this stylized condition, banks may be able to determine for instance if in the long term it would
be economically efficient to split their institution into 2 or more separate entities. Furthermore, they
can use this expression to optimize the capital reduction for each of the individual entities, relative to
the combined entity's SMA capital. Hence, what we show here is the long term average behavior, which
gives the long run optimal conditions for a split or merge.
We also note that, due to the volatility of the loss experience and of the BI figures over time, a given
institution's SMA capital could switch between super- and sub-additivity over time. This would imply
that the SMA model could provide a time-varying incentive or disincentive to merge or split, depending
on the current environment for capital funding.
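A minimal numerical sketch of the super-additive capital condition is given below; all BIC values and loss parameters are hypothetical placeholders rather than figures taken from the consultative document or from Tables 5 and 6.

```python
import numpy as np
from scipy.stats import norm

def ltalc(lam, mu, sigma, L=10e6, H=100e6):
    """Long term average Loss Component for a Poisson-Lognormal loss process
    (truncated expected loss expressions quoted above)."""
    el = lam * np.exp(mu + 0.5 * sigma**2)
    tail = lambda T: el * norm.cdf((mu + sigma**2 - np.log(T)) / sigma)
    return 7 * el + 7 * tail(L) + 5 * tail(H)

def ltsma(bic, lam, mu, sigma):
    """Long term SMA capital for buckets 2-5 (for bucket 1 it would simply be BIC)."""
    ilm = np.log(np.exp(1) - 1 + ltalc(lam, mu, sigma) / bic)
    return 110e6 + (bic - 110e6) * ilm

# Hypothetical joint bank versus its two identical stand-alone entities.
joint = ltsma(bic=2.0e9, lam=20, mu=12.0, sigma=2.5)
split = 2 * ltsma(bic=1.0e9, lam=10, mu=12.0, sigma=2.5)
diff = joint - split
print("super-additive" if diff > 0 else "sub-additive", f"(difference = {diff / 1e6:.0f}m)")
```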
Proposition: a Standardization of AMA
The recommendations made in the following section are based on detailed discussions in
[Cruz, Peters, Shevchenko, 2015], [Peters and Shevchenko, 2015] and the preprint [Peters et al, 2016].
SMA cannot be considered as an alternative to AMA models. We suggest that AMA should not be
discarded, but instead could be improved by addressing its current weaknesses. It should be
standardized! Details of how a rigorous and statistically robust standardization can start to be
considered, with practical considerations, are provided below.
Rather than discarding all Operational risk modelling as allowed under the AMA, the regulator
could make a proposal to standardize the approaches to modelling based on the accumulated
knowledge to date of Operational risk modelling practice.
We propose one class of models that can act in this manner and allows one to incorporate the key
features offered by AMA LDA type models, which involve internal data, external data, BEICF's and
Scenarios, together with other important factor information, something the SMA and OpCar approaches
have tried but failed to achieve. As noted in this response, one issue with the SMA and OpCar approaches
is that they try to model all Operational risk processes at the institution or group level with a single LDA
model and a simplistic regression structure; this is bound to be problematic due to the very nature and
heterogeneity of Operational risk loss processes. In addition, it fails to allow for the incorporation of many
important Operational risk loss process explanatory information sources, such as BEICF's, which are
often no longer informative or appropriate to incorporate at institution level, compared to individual
Business Line/Event Type (BL/ET) level.
We propose a standardization of the AMA internal models to remove the wide range of heterogeneity
in model type. Our recommendation involves a bottom-up modelling approach where, for each BL/ET
Operational risk loss process, we model the severity and frequency components in an LDA structure
comprised of a hybrid LDA model with factor regression components, allowing the inclusion of the
factors driving operational risks in the financial industry at a sufficient level of granularity, while
utilizing the class of models known as Generalized Additive Models for Location, Scale and Shape
(GAMLSS) in the severity and frequency aspects of the LDA framework. The class of GAMLSS models
can be specified to make sure that the severity and frequency families are comparable across
institutions, allowing both risk-sensitivity and capital comparability. We recommend in this regard the
Poisson and Generalized Gamma classes for the frequency and severity model families, as these capture
the typical range of loss models used in practice over the last 15 years in Operational risk, including
Gamma, Weibull, Lognormal and Pareto type severities.
Standardizing Recommendation 1:
This leads us to the first standardizing recommendation relating to the level of granularity of modelling
in Operational risk. The level of granularity of the modelling procedure is important to consider when
incorporating different sources of Operational risk data such as BEICF's and scenarios; this debate
has been going on for the last 10 years, with discussion of bottom-up versus top-down Operational
risk modelling, see the overviews in [Cruz, Peters, Shevchenko, 2015] and [Peters and Shevchenko, 2015]. We
advocate that a bottom-up approach be recommended as the standard modelling structure, as it
will allow for greater understanding and more appropriate model development of the actual loss
processes under study. Therefore, we argue that keeping the 56 BL/ET structure of Basel II is in
our opinion best for a standardizing framework, with a standard aggregation procedure to institution
or group level. We argue that alternatives such as the SMA and OpCar approaches, which try to
model multiple loss processes with different features combined into one loss process at the institution
level, are bound to fail: they need to capture high frequency events as well as high severity events, which
in principle is very difficult, if not impossible, with a single LDA model at institution level, and
should be avoided. Furthermore, such a bottom-up approach allows for greater model interpretation
and incorporation of Operational risk loss data such as BEICF's.
Standardizing Recommendation 2:
This brings us to our second recommendation for standardization in Operational risk modelling.
Namely, we propose to standardize the modelling class to remove the wide range of heterogeneity in
model type. We propose a standardization that involves a bottom-up modelling approach where, for each
BL/ET level Operational risk loss process, we model the severity and frequency components in an
LDA structure comprised of a hybrid LDA model with factor regression components. The way
to achieve this is to utilise the class of GAMLSS regression models for the severity and frequency model
calibrations.
That is, two GAMLSS regression models are developed, one for the severity fitting and the other for the
frequency fitting. This family of models is flexible enough in our opinion to capture any type of
frequency or severity model that may be observed in practice in Operational risk data whilst
incorporating factors such as BEICF's (Key Risk Indicators, Key Performance Indicators, Key Control
Indicators) naturally into the regression structure. This produces a class of hybrid factor regression
models in an Operational risk LDA family of models that can easily be fit, simulated from and utilised in
Operational risk modelling to aggregate to the institution level. Furthermore, as more years of data
history become available, time-series structure in the severity and frequency aspects of each loss
process model can be naturally incorporated in a GAMLSS regression LDA framework.
Standardizing Recommendation 3:
The class of models considered for the conditional response in the GAMLSS severity model can be
standardized. There are several possible examples of such models that may be appropriate, see [Chavez-
Demoulin, 2015] and [Ganegoda and Evans, 2013]; however, we advocate that for the severity models the
class be restricted in regulation to one family, the Generalized Gamma family of models, see
details in [Peters et al, 2016], where these models are developed in an LDA hybrid factor GAMLSS model.
This work shows that such models are appropriate for Operational risk as they admit special members
which correspond to the LogNormal, Pareto, Weibull and Gamma. All of these models are popular
Operational risk severity models used in practice and represent the range of best practice by AMA banks
as observed in the recent survey [BCBS160b]. Since the Generalized Gamma family contains all these
models as special sub-cases, banks would only ever have to fit one class of severity model
to each BL/ET LDA severity profile; the most appropriate family member would be resolved in the
fitting through the estimation of the shape and scale parameters, so that if a Lognormal
model were appropriate it would be selected, whereas if a Gamma model were more appropriate it
would be selected instead, all from one single fitting procedure.
Furthermore, the frequency model could be standardized as a Poisson GAMLSS regression structure,
as the addition of explanatory covariates and a time-varying, possibly stochastic, intensity allows for a
flexible enough frequency model for all types of Operational risk loss processes.
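A hedged illustration of the flavour of such a calibration follows. The reference GAMLSS implementation is the R gamlss package of [Stasinopoulos and Rigby, 2007]; the Python sketch below is only an approximation, with a Poisson GLM standing in for the Poisson GAMLSS frequency regression, a maximum likelihood Generalized Gamma fit standing in for the severity regression, and a synthetic, purely illustrative KRI covariate.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data for a single BL/ET cell: annual event counts driven by an
# illustrative KRI covariate, and severities drawn from a generalized gamma.
n_years = 20
kri = rng.normal(size=n_years)                   # e.g. a standardized failed-control-test rate
counts = rng.poisson(np.exp(2.0 + 0.5 * kri))    # frequency depends on the KRI
severities = stats.gengamma.rvs(a=2.0, c=0.7, scale=50_000,
                                size=counts.sum(), random_state=rng)

# Frequency: Poisson regression of annual counts on the KRI (a GLM stand-in for
# the Poisson GAMLSS frequency regression; smooth terms are omitted here).
freq_fit = sm.GLM(counts, sm.add_constant(kri), family=sm.families.Poisson()).fit()
print(freq_fit.params)

# Severity: maximum likelihood fit of a Generalized Gamma, the family advocated
# above for covering Gamma, Weibull, Lognormal and Pareto-type severities.
a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(severities, floc=0)
print(a_hat, c_hat, scale_hat)
```

In a full GAMLSS treatment the Generalized Gamma location, scale and shape parameters would themselves be regressed on the BEICF covariates, with truncation, censoring and penalization handled as described in Recommendation 4 below.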
Standardizing Recommendation 4:
The fitting of these models should be performed in a regression based manner in the GAMLSS
framework, which incorporates truncation and censoring in a penalized maximum likelihood
framework, see [Stasinopoulos and Rigby, 2007]. We believe that standardizing the fitting procedure to
one that is statistically rigorous, well understood in terms of estimator properties, and robust when
incorporating a censored likelihood appropriately will remove the range of heuristic practices that have
arisen in fitting models in Operational risk. The penalized regression framework, based on an L1
parameter penalty, will also allow shrinkage methods to be used to select the most appropriate
explanatory variables in the GAMLSS severity and frequency regression structures.
Standardizing Recommendation 5:
The choice between Bayesian and Frequentist formulations should be left to the discretion of the
bank to decide which version is best for its practice. However, we note that under a Bayesian
formulation one can adequately incorporate multiple sources of information, including expert opinion
and scenario based data, see the discussions in [Cruz, Peters, Shevchenko, 2015], [Peters, Shevchenko
and Wuthrich, 2009] and [Shevchenko, 2011].
Standardizing Recommendation 6:
The sets of BEICF's and factors to be incorporated into each BL/ET LDA factor regression model for
severity and frequency should be specified by the regulator. There should be a core set of factors to be
incorporated by all banks, which includes BEICF's and other factors to be selected. The following types
of KRI categories can be considered in developing the core family of factors (see [Chapelle, 2013]):
Exposure Indicators: any significant change in the nature of the business environment and in its exposure
to critical stakeholders or critical resources. Flag any change in the risk exposure.
Stress Indicators: any significant rise in the use of resources by the business, whether human or material.
Flag any risk arising from overloaded humans or machines.
Causal Indicators: metrics capturing the drivers of key risks to the business. The core of preventive KRIs.
Failure Indicators: poor performance and failing controls are strong risk drivers. Failed KPIs and KCIs.
In this approach, a key difference is that instead of fixing the regression coefficients for all banks (as is
the case for SMA and OpCar), pretending that all banks have the same regression relationship as the
entire banking population, one should standardize the class of factors: specify explicitly how
they should be collected and at what frequency, and then specify that they should be incorporated in the
GAMLSS regression. This will allow each bank to calibrate the regression model to its own loss
experience through a rigorous penalized Maximum Likelihood procedure, with strict criteria for cross
validation based testing of the amount of penalization admitted in the regression when shrinking
factors out of the model. This approach has the advantage that banks will not only start to better
incorporate, in a structured and statistically rigorous manner, the BEICF information into Operational
risk models, but they will also be forced to better collect and consider such factors in a principled manner.
References:
Chapelle, 2013: “The Importance of Preventive KRIs”, Chapelle, A., Operational Risk & Regulation, April 2013.
BCBSd355,2016: “Standardised Measurement Approach for operational risk”, Basel Committee on
Banking Supervision, Consultative Document, http://www.bis.org/bcbs/publ/d355.pdf, March 2016.
BCBSd291,2014: “Operational risk – Revisions to the simpler approaches”, Basel Committee on Banking
Supervision, Consultative Document, http://www.bis.org/publ/bcbs291.pdf, October 2014.
BCBS160b: “Observed range of practice in key elements of Advanced Measurement Approaches (AMA)”,
Basel Committee on Banking Supervision, July 2009. http://www.bis.org/publ/bcbs160b.pdf
Chapelle et al., 2008: “Practical methods for measuring and managing operational risk in the financial
sector: A clinical study.” Chapelle A., Crama Y., Hübner G., Peters J.P., Journal of Banking & Finance,
32(6):1049-1061, 2008.
Cruz, Peters, Shevchenko, 2015: “Fundamental Aspects of Operational Risk and Insurance Analytics: A
Handbook of Operational Risk.” Cruz M., Peters G.W. and Shevchenko P.V., John Wiley & Sons; 2015.
Chavez-Demoulin, 2015: “An extreme value approach for modeling operational risk losses depending
on covariates.” Chavez-Demoulin, V., Embrechts, P., & Hofert, M., Journal of Risk and Insurance, DOI:
10.1111/jori.12059, 2015.
Ganegoda and Evans, 2013: “A scaling model for severity of operational losses using generalized
additive models for location scale and shape (GAMLSS).” Ganegoda, A. and Evans, J., Annals of Actuarial
Science, 7(1):61-100, 2013.
Guégan and Hassani, 2013: “Using a time series approach to correct serial correlation in operational
risk capital calculation.” Guégan D., Hassani B., The Journal of Operational Risk, 8(3):31-56, 2013.
Peters et al, 2016: “Should AMA be Replaced with SMA for Operational Risk?” Peters G.W., Shevchenko
P.V., Hassani B. and Chapelle A., Available at SSRN: http://ssrn.com/abstract=2788920, 2016.
Peters and Shevchenko, 2015: “Advances in Heavy Tailed Risk Modeling: A Handbook of Operational
Risk.” Peters G.W. and Shevchenko P.V., John Wiley & Sons; 2015.
Peters, Shevchenko and Wuthrich, 2009: “Dynamic operational risk: modeling dependence and
combining different sources of information.” Peters G.W., Shevchenko P.V., Wuthrich M.V., The Journal
of Operational Risk, 4(2):69-104, 2009.
Shevchenko, 2011: “Modelling Operational Risk Using Bayesian Inference.” Shevchenko P.V., Springer, 2011.
Stasinopoulos and Rigby, 2007: “Generalized additive models for location scale and shape (GAMLSS) in
R.” Stasinopoulos D.M., Rigby R.A., Journal of Statistical Software, 23(7):1-46, 2007.