Article

Deviation Measures in Risk Analysis and Optimization

Authors: R. Tyrrell Rockafellar, Stanislav Uryasev, Michael Zabarankin

Abstract

General deviation measures, which include standard deviation as a special case but need not be symmetric with respect to ups and downs, are defined and shown to correspond to risk measures in the sense of Artzner, Delbaen, Eber and Heath when those are applied to the difference between a random variable and its expectation, instead of to the random variable itself. A property called expectation-boundedness of the risk measure is uncovered as essential for this correspondence. It is shown to be satisfied by conditional value-at-risk and by worst-case risk, as well as various mixtures, although not by ordinary value-at-risk. Interpretations are developed in which inequalities that are "acceptably sure", relative to a designated acceptance set, replace inequalities that are "almost sure" in the usual sense of being violated only with probability zero. Acceptably sure inequalities fix the standard for a particular choice of a deviation measure. This is explored in examples that rely on duality with an associated risk envelope, comprised of alternative probability densities. The role of deviation measures and risk measures in optimization is analyzed, and the possible influence of "acceptably free lunches" is thereby brought out. Optimality conditions based on concepts of convex analysis, but relying on the special features of risk envelopes, are derived in support of a variety of potential applications, such as portfolio optimization and variants of linear regression in statistics.

Keywords: deviation measures, value-at-risk, conditional value-at-risk, portfolio optimization, convex analysis
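
As a rough numerical illustration of the correspondence described in the abstract (my own sketch, not code from the paper), the snippet below builds a deviation measure D(X) = R(X − E[X]) by applying a CVaR-type risk measure R to the demeaned variable and contrasts it with standard deviation on a symmetric and a skewed sample; the function names, confidence level, and data are illustrative assumptions.

```python
# Minimal sketch: a deviation measure D(X) = R(X - E[X]) obtained from an
# expectation-bounded risk measure R (here a sample CVaR of shortfalls),
# compared with the symmetric standard deviation.
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample conditional value-at-risk: mean of the worst (1 - alpha) tail."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def cvar_deviation(x, alpha=0.95):
    """Apply CVaR to the shortfalls of X below its mean, i.e. to -(X - E[X])."""
    centered = x - x.mean()
    return cvar(-centered, alpha)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)            # symmetric returns
y = 1.0 - rng.exponential(1.0, 100_000)     # left-skewed returns

for name, sample in [("symmetric", x), ("left-skewed", y)]:
    print(name, "std:", round(sample.std(), 3),
          "CVaR-deviation:", round(cvar_deviation(sample), 3))
```

The asymmetric CVaR-deviation reacts to the heavier downside of the skewed sample, while the standard deviation treats ups and downs identically.
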

... Further insight stems from [57], which expresses a superquantile of a continuous random variable as the worst-case expectation over a family of probability distributions. The situation for general distributions is hinted at in [57], but brought out more clearly in [252]. In summary, the flurry of activity around the turn of the century produced the following equivalent formulas for an α-superquantile. ...
... Theorem 7 in [243] establishes the argmin formula (3.5). A proof of the last expression (3.6) appears in [252]; see also our discussion in Section 6. ...
... In addition to producing new measures of risk, second-order superquantiles also underpin the argmin-formula for superquantiles (3.5); see [246,143]. Properties of mixed superquantile risk measures are traced back to [2,252,253]; here we summarize key facts as given in [246]. ...
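
The equivalence of the superquantile formulas mentioned in these excerpts can be checked numerically. The sketch below (my own illustration under stated assumptions, not code from the cited works) compares the tail-average definition of CVaR with the Rockafellar-Uryasev minimization formula on a simulated loss sample; distribution, sample size, and level are arbitrary choices.

```python
# Check on a sample that two standard formulas for the alpha-superquantile (CVaR)
# of a loss X agree:
#   (i) tail average: mean of X given X >= alpha-quantile,
#  (ii) minimization: min_c  c + E[(X - c)_+] / (1 - alpha).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.8, size=200_000)   # heavy-tailed loss sample
alpha = 0.9

q = np.quantile(x, alpha)
sq_tail = x[x >= q].mean()                             # (i) tail-average estimator

obj = lambda c: c + np.mean(np.maximum(x - c, 0.0)) / (1.0 - alpha)
res = minimize_scalar(obj, bounds=(x.min(), x.max()), method="bounded")  # (ii)

print("tail average:", round(sq_tail, 4))
print("min formula :", round(res.fun, 4), " argmin vs quantile:", round(res.x, 4), round(q, 4))
```

Up to sampling error, the two values coincide, and the minimizer of (ii) recovers the α-quantile, which is the content of the argmin formula referenced above.
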
Preprint
Full-text available
Uncertainty is prevalent in engineering design, statistical learning, and decision making broadly. Due to inherent risk-averseness and ambiguity about assumptions, it is common to address uncertainty by formulating and solving conservative optimization models expressed using measures of risk and related concepts. We survey the rapid development of risk measures over the last quarter century. From their beginning in financial engineering, we recount their spread to nearly all areas of engineering and applied mathematics. Solidly rooted in convex analysis, risk measures furnish a general framework for handling uncertainty with significant computational and theoretical advantages. We describe the key facts, list several concrete algorithms, and provide an extensive list of references for further reading. The survey recalls connections with utility theory and distributionally robust optimization, points to emerging application areas such as fair machine learning, and defines measures of reliability.
... In 2002, Rockafellar, Uryasev and Zabarankin [12,16] introduced general deviation measures – a broad class of functionals that contains standard deviation, standard semi-deviation, MAD and CVaR-deviation as special cases – and studied the portfolio optimization problem in this general setting [13]. This provides investors with flexibility when choosing which deviation measure best models their individual risk preferences. ...
... As shown in [12] and [16], there is a one-to-one correspondence between deviation measures and risk envelopes given by the formulas ...
... (v*, y*, u*, π*) is a feasible solution to (16). Hence, if (v, y, u, π) is an optimal solution to (16), then v ≥ v*. ...
Article
Full-text available
We derive a linear program for the minimization, subject to a linear constraint, of an arbitrary positively homogeneous convex functional whose dual set is given by linear inequalities, possibly involving auxiliary variables. This allows us to reduce to linear programming individual and cooperative portfolio optimization problems with arbitrary deviation measures whose risk envelopes are given by a finite number of linear constraints. Earlier, such linear programs were known only for individual portfolio optimization problems with special examples of deviation measures, such as mean absolute deviation or CVaR deviation.
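
To make the reduction concrete, here is a hedged illustration (my own construction, not the paper's formulation) of a deviation-minimizing portfolio problem becoming a linear program when the risk envelope is polyhedral: mean absolute deviation (MAD) of the portfolio return is minimized subject to a target expected return. The scenario data, target, and long-only constraint are illustrative assumptions.

```python
# MAD-deviation portfolio optimization as an LP, solved with scipy's HiGHS backend.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
T, n = 250, 4                          # scenarios x assets (toy data)
R = 0.01 * rng.standard_normal((T, n)) + 0.001
mu = R.mean(axis=0)
Rc = R - mu                            # demeaned scenario returns
target = mu.mean()

# variables z = [w (n), u (T), v (T)];  Rc @ w = u - v  with  u, v >= 0
c = np.concatenate([np.zeros(n), np.ones(T) / T, np.ones(T) / T])   # mean |deviation|
A_eq = np.block([[Rc, -np.eye(T), np.eye(T)],                       # deviation split
                 [np.ones((1, n)), np.zeros((1, 2 * T))]])          # budget constraint
b_eq = np.concatenate([np.zeros(T), [1.0]])
A_ub = np.concatenate([-mu, np.zeros(2 * T)]).reshape(1, -1)        # -mu'w <= -target
b_ub = [-target]
bounds = [(0, None)] * (n + 2 * T)                                  # long-only, u, v >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("optimal weights:", np.round(res.x[:n], 3), " MAD:", round(res.fun, 5))
```

The same template applies to any deviation measure whose envelope is described by finitely many linear inequalities; only the auxiliary variables and constraints change.
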
... One direction of research associated with an axiomatic approach was initiated in Artzner, Delbaen, Eber and Heath [1], where the concept of coherent risk measures was introduced. Subsequently, this approach was developed by Föllmer and Schied [4], Rockafellar, Uryasev and Zabarankin [11], and Ruszczyński and Shapiro [13]. In the discussion below we follow the general setting and terminology of [13]. ...
... If we introduce a space X of measurable functions on Ω, we can talk of a risk function as a mapping ρ : X → R (we can also consider risk functions with values in the extended real line). In our earlier work [13] we have refined and extended the analysis of [1,4,11] and we have derived, from a handful of axioms, fairly general properties of risk functions and of optimization problems involving such functions. ...
... We view ω → A(ω) as a multifunction from Ω into the set P_{Y_2} of probability measures on (Ω, F_2) which are included in Y_2. Formula (3.8) extends to conditional risk mappings the risk envelope representation derived in [1,11,13]. ...
Book
We introduce an axiomatic definition of a conditional convex risk mapping and we derive its properties. In particular, we prove a representation theorem for conditional risk mappings in terms of conditional expectations. We also develop dynamic programming relations for multistage optimization problems involving conditional risk mappings.
... Deviation risk measures form a separate class of functionals, applied to the difference between a random variable and its mean value, to achieve this task. Deviation risk measures satisfy four properties, analogous to those of coherent risk measures, established for this type of risk measure in [20]. The purpose of applying a deviation risk measure is to penalize the extent by which the random variable R(w) drops below its mean value E(R(w)). ...
... The standard deviation σ(R(w)) = √(E[(R(w) − E(R(w)))²]) and the mean absolute deviation E|R(w) − E(R(w))| are examples of such deviation risk measures. A deviation risk measure of a portfolio return R(w) can be obtained by replacing R(w) with R(w) − E(R(w)) in an expectation-bounded risk measure ℜ(R(w)), i.e., D(R(w)) = ℜ(R(w) − E(R(w))) [20]. ...
... Standard deviation and mean absolute deviation are symmetric deviation risk measures, whereas CVaRΔ_α(−R(w)) is an asymmetric deviation risk measure. For more insight into the properties of deviation risk measures, one can refer to [20,21]. ...
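
The symmetry distinction drawn in this excerpt is easy to verify numerically. The sketch below is my own illustration (sample distribution and confidence level are arbitrary choices): it checks D(X) = D(−X) for standard deviation and MAD, and shows that the CVaR-type deviation breaks this symmetry on a skewed sample.

```python
# Symmetric vs asymmetric deviation measures: D(X) compared with D(-X).
import numpy as np

def std_dev(x):
    return x.std()

def mad(x):
    return np.abs(x - x.mean()).mean()

def cvar_dev(x, alpha=0.9):
    d = -(x - x.mean())                      # shortfalls below the mean, as losses
    q = np.quantile(d, alpha)
    return d[d >= q].mean()

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # a skewed sample

for name, D in [("std", std_dev), ("MAD", mad), ("CVaR-dev", cvar_dev)]:
    print(f"{name:8s} D(X) = {D(x):.3f}   D(-X) = {D(-x):.3f}")
```
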
Article
We study an extension of the value-at-risk (VaR) measure, named Mixed VaR, a weighted sum of multiple VaRs quantified at different confidence levels. Classical VaR, or single VaR computed at a fixed confidence level, corresponds to a single percentile of the distribution and is therefore unable to reveal much information about the risk involved. As a remedy, we propose to investigate the role of Mixed VaR and its deviation version in the risk management of extreme events in the portfolio selection problem. We analyze the computational performance of portfolios from optimization models minimizing single VaR and Mixed VaR (and their deviation variants) for different combinations of confidence levels, over historical as well as simulated data, in various financial performance parameters including mean value, risk measures (VaR and CVaR values quantified at multiple confidence levels), and risk-reward measures (Sharpe ratio, Sortino ratio, Sharpe with VaR, and Sharpe with CVaR). We also study the numerical comparison between Mixed VaR and its most crucial counterpart, the Mixed conditional value-at-risk (Mixed CVaR). We find that the performance of portfolios from the Mixed VaR model lies between the performance of its single VaR counterparts and rarely yields the worst value in any of the considered performance parameters. A similar observation holds for the deviation versions of the models. Further, we find that Mixed VaR outperforms Mixed CVaR with respect to the risk-reward measures considered in the study on both data sets, historical as well as simulated.
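
As a hedged sketch of the Mixed VaR idea described in this abstract (my own minimal implementation, with illustrative weights and confidence levels), the following computes a weighted sum of VaRs at several levels together with its deviation version obtained by demeaning first.

```python
# Mixed VaR: weighted sum of VaRs at several confidence levels, plus its deviation form.
import numpy as np

def var(losses, alpha):
    return np.quantile(losses, alpha)

def mixed_var(losses, alphas, weights):
    return sum(w * var(losses, a) for a, w in zip(alphas, weights))

def mixed_var_deviation(returns, alphas, weights):
    losses = -(returns - returns.mean())          # deviation version: demean, then take losses
    return mixed_var(losses, alphas, weights)

rng = np.random.default_rng(4)
returns = 0.001 + 0.02 * rng.standard_t(df=4, size=100_000)   # fat-tailed toy returns
alphas, weights = [0.90, 0.95, 0.99], [0.5, 0.3, 0.2]          # weights sum to 1

print("Mixed VaR          :", round(mixed_var(-returns, alphas, weights), 4))
print("Mixed VaR deviation:", round(mixed_var_deviation(returns, alphas, weights), 4))
```
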
... The proposed formulation includes the deviation measure in the SSD criterion to construct an optimal portfolio. Deviation measures form a separate class of functionals satisfying a specific set of axioms and having a one-to-one correspondence with the class of expectation-bounded risk measures (Rockafellar et al., 2002). A distinguishing feature of our approach is to replace the random return R_x of the portfolio x and the return I of the benchmark portfolio by their deviations from the respective mean values, thereby using R_x − E(R_x) and I − E(I) in the SSD criterion. ...
... Deviation measures form a class of functionals satisfying four properties, analogous to those of coherent risk measures, established for deviation-type risk measures. We recall the definition of a deviation measure from Rockafellar et al. (2002) as follows: ...
... defined on a subspace L of all random variables X, is called a deviation risk measure if it satisfies the following four properties (Rockafellar et al., 2002). The deviation associated with VaR_δ(X) (the value-at-risk at confidence level δ) is denoted by VaRΔ_δ(X) and defined by VaR_δ(X − E(X)). ...
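
The DSSD idea referenced in these excerpts, comparing demeaned returns under second-order stochastic dominance, can be sketched as follows. This is my own illustration, not the authors' code; the SSD test uses lower partial moments E[(t − X)_+] on a threshold grid, and the simulated samples are arbitrary.

```python
# Empirical SSD check on demeaned portfolio and benchmark returns.
import numpy as np

def ssd_dominates(x, y, n_grid=200):
    """True if sample x SSD-dominates sample y: E[(t - x)_+] <= E[(t - y)_+] for all t."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), n_grid)
    lpm = lambda s, t: np.maximum(t - s[:, None], 0.0).mean(axis=0)
    return bool(np.all(lpm(x, grid) <= lpm(y, grid) + 1e-12))

rng = np.random.default_rng(5)
benchmark = 0.02 * rng.standard_normal(50_000)
portfolio = 0.015 * rng.standard_normal(50_000)        # less dispersed candidate

# DSSD-style comparison on deviations from the mean
print(ssd_dominates(portfolio - portfolio.mean(), benchmark - benchmark.mean()))
```
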
Article
Deviation measures form a separate class of functionals applied to the difference between a random variable and its mean value. In this paper, we aim to introduce a deviation measure in the second-order stochastic dominance (SSD) criterion to select an optimal portfolio having a higher utility of deviation from its mean value than that of the benchmark portfolio. A new strategy, called deviation SSD (DSSD), is proposed for portfolio optimization. Performance of the proposed model in an application to enhanced indexing is evaluated and compared to two other standard portfolio optimization models that employ the SSD criterion on lower partial moments of order one (LSSD) and a tail risk measure (TSSD). We use historical data of 16 global indices to assess the performance of the proposed model. In addition, we solve the models on simulated data, capturing the joint dependence between indices using historical data of two data sets, each comprising six indices. The simulated data are generated by first fitting the autoregressive moving average–Glosten–Jagannathan–Runkle–generalized autoregressive conditional heteroscedastic model on historical data to determine the marginal distributions, and thereafter capturing the dependence structure using the best-fitted regular vine copula. The portfolios from the DSSD model achieve higher excess mean return than the TSSD model and lower variance, downside deviation, and conditional value-at-risk than the LSSD model. Also, portfolios from the DSSD model demonstrate higher values of the information ratio, stable tail-adjusted return ratio, and Sortino ratio than the other two models. Lastly, the DSSD model is observed to produce well-diversified portfolios compared to the LSSD model.
... Recently, when applying optimization models to practical scenarios, we have encountered problems constructed on ordered cones with empty interiors, for instance, economic models [20][21][22]. Let us present a brief overview of results studying the properties of solutions to optimization models involving nonsolid cones. ...
... Fortunately, in many practical situations of economic models, the semicontinuity and continuity properties of solution maps are sufficient. This explains why the semicontinuity conditions for optimization models have attracted the attention of many researchers recently, see, e.g., [20][21][22][33][34][35][36][37]. For instance, in [35], the authors explored sufficient conditions for the Hausdorff continuity of approximate solutions for two models related to weakly set-valued equilibrium problems, employing linear scalarization techniques for sets and leveraging the concavity of the objective maps. ...
Article
Full-text available
This paper aims to study the stability in the sense of Hausdorff continuity of solution maps to equilibrium problems without assuming the solid condition of ordered cones. We first propose a generalized concavity of set-valued maps and discuss its relation with the existing concepts. Then, by using the above property and the continuity of the objective function, sufficient conditions for the Hausdorff continuity of solution maps to scalar equilibrium problems are established. Finally, we utilize the oriented distance function to obtain the Hausdorff continuity of solution maps to set-valued equilibrium problems via the corresponding results of the scalar equilibrium problems.
... Coherent risk measures have some interesting properties. In particular, a risk measure ρ is coherent if and only if there exists a risk envelope U such that ρ(X) = sup_{d ∈ U} E[dX] [48,58,59]. A risk envelope is a nonempty convex subset of P that is closed, where P : ...
... Moreover, the risk envelope associated with CVaR_a can be written as U = {d ∈ P | E_p[d] = 1, 0 ≤ d ≤ 1/a} [59,63]. This rigorous definition may not be intuitive, but roughly speaking, CVaR_a is the expected value of X given the upper a-tail of its conditional distribution, representing the (1 − a) worst-case scenarios. ...
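
The envelope representation quoted here can be checked numerically. The sketch below is my own construction with an explicit convention (β denotes the tail probability, expectations are taken under the empirical measure); it solves the small LP over the envelope and compares the value with the direct tail average.

```python
# Dual/envelope representation of CVaR on a finite sample:
#   CVaR_beta(X) = max { E[d X] : E[d] = 1, 0 <= d <= 1/beta }.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
N, beta = 2_000, 0.05
x = rng.standard_normal(N)                    # equally likely scenario losses

res = linprog(c=-x / N,                       # linprog minimizes, so negate the objective
              A_eq=np.ones((1, N)) / N, b_eq=[1.0],
              bounds=[(0.0, 1.0 / beta)] * N)
cvar_envelope = -res.fun

cvar_tail = np.sort(x)[-int(beta * N):].mean()   # mean of the worst beta*N losses

print(round(cvar_envelope, 4), round(cvar_tail, 4))   # the two values agree
```

The optimal density puts the maximal weight 1/β on the worst β fraction of scenarios, which is exactly the tail-average definition of CVaR.
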
Article
Full-text available
Offline reinforcement learning (RL) has emerged as a promising paradigm for real-world applications since it aims to train policies directly from datasets of past interactions with the environment. In the past few years, algorithms have been introduced to learn from high-dimensional observational states in offline settings. The general idea of these methods is to encode the environment into a latent space and train policies on top of this smaller representation. In this paper, we extend this general method to stochastic environments (i.e., where the reward function is stochastic) and consider a risk measure instead of the classical expected return. First, we show that, under some assumptions, minimizing a risk measure in the latent space is equivalent to minimizing it in the natural space. Based on this result, we present Latent Offline Distributional Actor-Critic (LODAC), an algorithm which is able to train policies in high-dimensional stochastic and offline settings to minimize a given risk measure. Empirically, we show that using LODAC to minimize Conditional Value-at-Risk (CVaR) outperforms previous methods in terms of CVaR and return on stochastic environments.
... where R_b, b ∈ {f, c_1, ..., c_M}, are arbitrary statistics of the QoI, e.g. the aforementioned expected value or standard deviation. We will discuss the measures R_b that we use in this work in more detail in Section 2.1, but we also refer to the rich literature on robustness, reliability, risk and deviation measures [2,[33][34][35][36][37][38][39][40][41]. The method SNOWPAC that we use to solve Eq. (2) is afterwards presented in Section 2.2. ...
... The following sections introduce the sampling estimators for the robustness and reliability measures that we consider in this work. They are, e.g., given in [2] and we refer to [39,42] for a detailed discussion of risk assessment strategies and an introduction to a wider class of measures. ...
Preprint
Full-text available
Optimization is a key tool for scientific and engineering applications; however, in the presence of models affected by uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Optimization under uncertainty (OUU) deals with this endeavor and requires uncertainty quantification analyses at several design locations. The cost of OUU is proportional to the cost of performing a forward uncertainty analysis at each design location visited, which makes the computational burden too high for high-fidelity simulations with significant computational cost. From a high-level standpoint, an OUU workflow typically has two main components: an inner-loop strategy for the computation of statistics of the quantity of interest, and an outer-loop optimization strategy tasked with finding the optimal design, given a merit function based on the inner-loop statistics. In this work, we propose to alleviate the cost of the inner-loop uncertainty analysis by leveraging the so-called Multilevel Monte Carlo (MLMC) method. MLMC has the potential of drastically reducing the computational cost by allocating resources over multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by minimizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and released in the Dakota software toolkit. We discuss several numerical test cases to showcase the features and performance of our novel approach with respect to the single-fidelity counterpart, based on standard Monte Carlo evaluation of statistics.
... While widely used as a coherent risk measure, the expectation fails to effectively account for events that lie at the tail of a given distribution. To circumvent this issue, Conditional Value at Risk (CVaR) [12], [24] is a popular means for improved risk assessment. For computational efficiency, the dual representation of a coherent risk measure [12,Eq. ...
... Markowitz, 1952; Sharpe, 1964; Modigliani and Modigliani, 1997), which expresses the amount of risk via the extent of possible fluctuations around the expected value. The standard deviation is a coherent risk measure according to Artzner et al. (1999; see also Rockafellar et al., 2002), and it meets the characteristics of homogeneity and position invariance, which makes it a suitable basis for a risk-adequate assessment using the method of imperfect replication (see Dorfleitner and Gleißner, 2018; Dorfleitner, 2022). Furthermore, we use this risk measure in the case study because it corresponds to the usual understanding of risk in a company and to investment valuation due to the m,σ-principle on which the CAPM is based. ...
Article
Full-text available
Purpose: From the buying club's perspective, the transfer of a player can be interpreted as an investment from which the club expects uncertain future benefits. This paper aims to develop a decision-oriented approach for the valuation of football players that could theoretically help clubs determine the subjective value of investing in a player to assess its potential economic advantage.
Design/methodology/approach: We build on a semi-investment-theoretical risk-value model and elaborate an approach that can be applied in imperfect markets under uncertainty. Furthermore, we illustrate the valuation process with a numerical example based on fictitious data. Due to this explicitly intended decision support, our approach differs fundamentally from a large part of the literature, which is empirically based and attempts to explain observable figures through various influencing factors.
Findings: We propose a semi-investment-theoretical valuation approach that is based on a two-step model, namely, a first valuation at the club level and a final calculation to determine the decision value for an individual player. In contrast to the previous literature, we do not rely on an econometric framework that attempts to explain observable past variables but rather present a general, forward-looking decision model that can support managers in their investment decisions.
Originality/value: This approach is the first to show managers how to make an economically rational investment decision by determining the maximum payable price. Nevertheless, there is no normative requirement for the decision-maker. The club will obviously have to supplement the calculus with nonfinancial objectives. Overall, our paper can constitute a first step toward decision-oriented player valuation and for theoretical comparison with practical investment decisions in football clubs, which obviously take into account other specific sports team decisions.
... We adopt conditional value-at-risk (CVaR), which was recently introduced to contingency planning in [10], as a tail risk assessment measure in our framework. Given a discrete random variable with distinct value set K, associated probabilities {p_k | k ∈ K} and risk costs {ξ_k | k ∈ K}, the CVaR can be defined as follows [30]: ...
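
The formula referenced in this excerpt is truncated; as a hedged sketch, one standard way to evaluate CVaR for such a discrete risk-cost distribution is the Rockafellar-Uryasev form, CVaR_α = min_c { c + (1/(1 − α)) Σ_k p_k (ξ_k − c)_+ }, whose minimizer can be taken as the α-quantile of the costs. The numbers below are purely illustrative.

```python
# CVaR of a discrete risk-cost distribution {(xi_k, p_k)} via the Rockafellar-Uryasev formula.
import numpy as np

def cvar_discrete(costs, probs, alpha):
    costs, probs = np.asarray(costs, float), np.asarray(probs, float)
    order = np.argsort(costs)
    costs, probs = costs[order], probs[order]
    cum = np.cumsum(probs)
    c = costs[np.searchsorted(cum, alpha)]            # alpha-quantile serves as the minimizer
    return c + np.sum(probs * np.maximum(costs - c, 0.0)) / (1.0 - alpha)

# toy contingency costs and probabilities: the worst 10% of mass sits at 5 and 20,
# so CVaR at alpha = 0.9 equals (0.08*5 + 0.02*20)/0.1 = 8.0
print(cvar_discrete(costs=[0.0, 1.0, 5.0, 20.0], probs=[0.7, 0.2, 0.08, 0.02], alpha=0.9))
```
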
Preprint
Generating safe and non-conservative behaviors in dense, dynamic environments remains challenging for automated vehicles due to the stochastic nature of traffic participants' behaviors and their implicit interaction with the ego vehicle. This paper presents a novel planning framework, Multipolicy And Risk-aware Contingency planning (MARC), that systematically addresses these challenges by enhancing the multipolicy-based pipelines from both behavior and motion planning aspects. Specifically, MARC realizes a critical scenario set that reflects multiple possible futures conditioned on each semantic-level ego policy. Then, the generated policy-conditioned scenarios are further formulated into a tree-structured representation with a dynamic branchpoint based on the scene-level divergence. Moreover, to generate diverse driving maneuvers, we introduce risk-aware contingency planning, a bi-level optimization algorithm that simultaneously considers multiple future scenarios and user-defined risk tolerance levels. Owing to the more unified combination of behavior and motion planning layers, our framework achieves efficient decision-making and human-like driving maneuvers. Comprehensive experimental results demonstrate superior performance to other strong baselines in various environments.
... Examples of widely used risk measures are the Value-at-Risk (see Christoffersen [42], Duffie [54], and Pritsker [161]) and the Conditional Value-at-Risk (or Expected Shortfall, see Acerbi, Tasche [3], [4], Pflug [158], and Rockafellar, Uryasev [166], [167]). Spectral risk measures, as special coherent risk measures, provide a connection with deviation measures such as the standard deviation, which is often used for risk measurement in the financial industry (see Acerbi [1] and Rockafellar et al. [168], [169]). Hence, modern publications generally concentrate on measures other than the variance. ...
Thesis
Full-text available
Risk measures and acceptance sets are important subjects of research, based on the axiomatic framework introduced by Artzner et al. in 1999. In the current environment of increasing regulatory demands on financial institutions with respect to their capital and risk positions, optimal financial decisions are crucial. This thesis makes an important contribution to the determination of cost-minimal investment decisions for fulfilling regulatory restrictions. The focus lies on the analysis of properties of the corresponding risk measure, from which cost-minimal solutions of the institutional vector optimization problem are subsequently derived. Moreover, we characterize (weakly) efficient points of the acceptance set and derive relationships to the solutions of the mentioned optimization problem. By assuming a very general financial market model, we achieve wide applicability in theory and practice.
... Rockafellar et al. in [9] use the notions of sureness valuations, expectation-bounded risk measures and general deviation measures instead of acceptability, risk capital and deviation risk functional, respectively. Given a probability distribution of the future wealth of a contributor, the value-at-risk (expected shortfall) at confidence level α of the future wealth random variable is the maximum wealth level exceeded with probability 1 − α. ...
... Another drawback is that the Black-Scholes model does not allow for large, sudden movements of the asset prices, and furthermore, induces a financial market which is complete in the sense that all risks can be perfectly hedged, which again might not be realistic. Since we only assume ρ to be law-invariant, translation or shift-invariant, and positively homogeneous, our results encompass most standard or alternative risk measures like Value at Risk, Average Value at Risk, variance, standard deviation, or generalized deviation measures, see Jorion [31], Föllmer and Schied [25], or Rockafellar et al. [41]. We emphasize that ρ does not need to be additive, sub-additive, convex, or continuous. ...
Preprint
Focusing on gains instead of terminal wealth, we consider an asset allocation problem to maximize time-consistently a mean-risk reward function with a general risk measure which is i) law-invariant, ii) cash- or shift-invariant, and iii) positively homogeneous, and possibly plugged into a general function. We model the market via a generalized version of the multi-dimensional Black-Scholes model using $\alpha$-stable L\'evy processes and give supplementary results for the classical Black-Scholes model. The optimal solution to this problem is a Nash subgame equilibrium given by the solution of an extended Hamilton-Jacobi-Bellman equation. Moreover, we show that the optimal solution is deterministic and unique under appropriate assumptions.
... Since ρ(Z + c) = ρ(Z) for all Z ∈ L^p(Ω, R) and any constant c, the latter risk measures ρ are called deviation measures; see [35]. They are compatible with our framework for g(x) = x^p. ...
Preprint
Full-text available
Modern portfolio theory has provided for decades the main framework for optimizing portfolios. Because of its sensitivity to small changes in input parameters, especially expected returns, the mean-variance framework proposed by Markowitz (1952) has however been challenged by new construction methods that are purely based on risk. Among risk-based methods, the most popular ones are Minimum Variance, Maximum Diversification, and Risk Budgeting (especially Equal Risk Contribution) portfolios. Despite some drawbacks, Risk Budgeting is particularly attractive because of its versatility: based on Euler's homogeneous function theorem, it can indeed be used with a wide range of risk measures. This paper presents sound mathematical results regarding the existence and the uniqueness of Risk Budgeting portfolios for a very wide spectrum of risk measures and shows that, for many of them, computing the weights of Risk Budgeting portfolios only requires a standard stochastic algorithm.
... The homogeneity condition means that risk must be proportional to portfolio exposure. R(w) is assumed to be positive if w ≠ 0. Deviation risk measures [22] are good candidates if they satisfy the differentiability criterion. Volatility is a deviation measure. ...
Article
Full-text available
Risk parity is an approach to investing that aims to balance risk evenly across assets within a given universe. The aim of this study is to unify the most commonly-used approaches to risk parity within a single framework. Links between these approaches have been identified in the published literature. A key point in risk parity is being able to identify and control the contribution of each asset to the risk of the portfolio. With alpha risk parity, risk contributions are given by a closed-form formula. There is a form of antisymmetry—or self-duality—in alpha risk portfolios that lie between risk budgeting and minimum-risk portfolios. Techniques from information geometry play a key role in establishing these properties.
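
To make the central quantity of risk parity concrete, here is a minimal sketch of per-asset risk contributions under a positively homogeneous risk measure (my own illustration with a toy covariance matrix, not code from the paper). With volatility R(w) = sqrt(w' Σ w), Euler's theorem gives R(w) = Σ_i w_i ∂R/∂w_i, with contributions RC_i = w_i (Σw)_i / R(w).

```python
# Euler risk contributions for a volatility risk measure.
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])        # toy covariance matrix
w = np.array([0.5, 0.3, 0.2])                 # a candidate portfolio

vol = np.sqrt(w @ Sigma @ w)
rc = w * (Sigma @ w) / vol                    # per-asset risk contributions

print("volatility:", round(vol, 4))
print("risk contributions:", np.round(rc, 4), " sum:", round(rc.sum(), 4))  # sum equals vol
```

A risk budgeting (e.g. equal risk contribution) portfolio is then the weight vector for which these contributions match prescribed budgets.
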
... where Φ(θ) ⊂ P_Ω is the risk envelope, and we write Φ : Θ ⇒ P_Ω as a correspondence to emphasize the dependence on θ. According to [31], the risk envelope is explicitly: ...
Preprint
Full-text available
We consider risk-sensitive Markov decision processes (MDPs), where the MDP model is influenced by a parameter which takes values in a compact metric space. We identify sufficient conditions under which small perturbations in the model parameters lead to small changes in the optimal value function and optimal policy. We further establish the robustness of the risk-sensitive optimal policies to modeling errors. Implications of the results for data-driven decision-making, decision-making with preference uncertainty, and systems with changing noise distributions are discussed.
... for some non-zero w ∈ R^d and ρ > 0. This risk measure has also found application in portfolio theory [63]. ...
Preprint
We study first-order optimality conditions for constrained optimization in the Wasserstein space, whereby one seeks to minimize a real-valued function over the space of probability measures endowed with the Wasserstein distance. Our analysis combines recent insights on the geometry and the differential structure of the Wasserstein space with more classical calculus of variations. We show that simple rationales such as "setting the derivative to zero" and "gradients are aligned at optimality" carry over to the Wasserstein space. We deploy our tools to study and solve optimization problems in the setting of distributionally robust optimization and statistical inference. The generality of our methodology allows us to naturally deal with functionals, such as mean-variance, Kullback-Leibler divergence, and Wasserstein distance, which are traditionally difficult to study in a unified framework.
... The limit ‖·‖_∞ is in itself a risk measure, belonging to the family of deviation risk measures [6], which we call Extreme Deviation (XD). It is the expected maximum absolute return of the de-meaned returns distribution, i.e. the maximum absolute deviation around the mean. ...
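
The limit described in this excerpt can be illustrated numerically. The sketch below is my own (the fat-tailed sample and the sequence of exponents are arbitrary choices): it shows the L_p norm of the demeaned returns approaching the maximum absolute deviation around the mean as p grows.

```python
# L_p norms of demeaned returns converging toward the maximum absolute deviation.
import numpy as np

rng = np.random.default_rng(7)
r = 0.01 * rng.standard_t(df=3, size=10_000)      # fat-tailed return sample
d = r - r.mean()

for p in [2, 4, 8, 16, 64]:
    lp = (np.mean(np.abs(d) ** p)) ** (1.0 / p)
    print(f"p={p:3d}  ||d||_p = {lp:.4f}")
print("max |d|        =", round(np.abs(d).max(), 4))
```
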
Preprint
Full-text available
We look at optimal liability-driven portfolios in a family of fat-tailed and extremal risk measures, especially in the context of pension fund and insurance fixed cashflow liability profiles, but also those arising in derivatives books such as delta one books or options books in the presence of stochastic volatilities. In the extremal limit, we recover a new tail risk measure, Extreme Deviation (XD), an extremal risk measure significantly more sensitive to extremal returns than CVaR. Resulting optimal portfolios optimize the return per unit of XD, with portfolio weights consisting of a liability hedging contribution, and a risk contribution seeking to generate positive risk-adjusted return. The resulting allocations are analyzed qualitatively and quantitatively in a number of different limits.
... It is shown in [25], [26] that any coherent risk measure has a dual representation as the maximization of expectation over a probability ambiguity set, which often turns out to be more convenient in computation, i.e., ...
Preprint
Full-text available
Motion planning for autonomous robots and vehicles in the presence of uncontrolled agents remains a challenging problem, as the reactive behaviors of the uncontrolled agents must be considered. Since the uncontrolled agents usually demonstrate multimodal reactive behavior, the motion planner needs to solve a continuous motion planning problem under multimodal behaviors of the uncontrolled agents, which contains a discrete element. We propose a branch Model Predictive Control (MPC) framework that plans over feedback policies to leverage the reactive behavior of the uncontrolled agent. In particular, a scenario tree is constructed from a finite set of policies of the uncontrolled agent, and the branch MPC solves for a feedback policy in the form of a trajectory tree, which shares the same topology as the scenario tree. Moreover, coherent risk measures such as the Conditional Value at Risk (CVaR) are used as a tuning knob to adjust the tradeoff between performance and robustness. The proposed branch MPC framework is tested on an overtake-and-lane-change task and a merging task for autonomous vehicles in simulation, and on the motion planning of an autonomous quadruped robot alongside an uncontrolled quadruped in experiments. The results demonstrate interesting human-like behaviors, achieving a balance between safety and performance.
... Depending on how risk is regarded, a parameter can be considered as a measure of deviation or of loss (Albrecht, 2003). In the simplest case the axiomatic system of Pedersen and Satchell (1998), with the extension by Rockafellar et al. (2002), can be used, so that a transformation into the coherent system of Artzner et al. (1999) is ensured. In recent years, additional demands on measures of risk have been established, in particular comonotonic additivity, robustness and elicitability, for instance by Emmer et al. (2015). ...
Article
Purpose: Scoring is a widely used, long-established, and universally applicable method of measuring risks, especially those that are difficult to quantify. Unfortunately, the scoring method is often misused in real estate practice and underestimated in academia. The purpose of this paper is to supplement the literature with general rules under which scoring systems should be designed and validated, so that they can become reliable risk instruments.
Design/methodology/approach: The paper combines the rules, or axioms, for coherent risk measures known from the literature with those for scoring instruments. The result is a system of rules that a risk scoring system should fulfil. The approach is theoretical, based on a literature survey and reasoning.
Findings: At first, the paper clarifies that a risk score should express the variation of a property's yield and not of its quality, as is often done in practice. Then the axioms for a coherent risk scoring are derived, e.g. the independence of the risk factors. Finally, the paper proposes procedures for valid and reliable risk scoring systems, e.g. the out-of-time validation.
Practical implications: Although it is a theoretical work, the paper also focuses on practical applicability. The findings are illustrated with examples of scoring systems.
Originality/value: Rules for risk measures and for scoring systems have been established long ago, but the combination is a first. In this way, the paper contributes to real estate risk research and risk management practice.
... Introduced as indexes to evaluate risks [48], max and mean denote the maximum deviation and standard deviation of the EVA's profit R_a obtained from test scenarios, respectively. Generally, the average profit of an EVA decreases slightly by reducing while the EVA enjoys a less fluctuating profit and alleviates risks. ...
Article
Full-text available
The flexible charging and discharging behaviours of electric vehicles (EVs) can provide a promising profit opportunity for EV owners or aggregators in a deregulated market. However, a major barrier to overcome is the uncertain market prices. In this context, this study aims to establish a novel and robust model for EV aggregators to optimally participate in multiple types of electricity markets while considering the uncertainty of the prices in various markets. First, the electricity market framework and the obligation of an EV aggregator are introduced, and each EV aggregator is entitled to participate in day-ahead, real-time, and regulation service markets. To cope with forecasting errors of market prices and other uncertainties, a robust optimisation model is developed to schedule the charging/discharging of EVs in different markets. A stepwise bidding strategy is then proposed to formulate robust market bids, offering more flexibility in handling volatile market prices in actual electricity markets. After the clearing of multiple types of electricity markets, a real-time dispatch model for EV aggregators is proposed for allocating the settlements and regulation orders to each EV. Finally, scenario-based simulation is employed to demonstrate the effectiveness of the proposed bidding strategy using actual market data and regulation signals.
... that is the smallest α such that there exists a particular k_p ∈ K for which J(k_p, U) ≤ α_p with probability p, which is a measure of risk usually applied in the financial sector (see [37]). In this spirit, when choosing k_p, the relative error of the function J will be less than α_p with probability p. ...
Article
Classical methods of calibration usually imply the minimisation of an objective function with respect to some control parameters. This function measures the error between some observations and the results obtained by a numerical model. In the presence of uncontrollable additional parameters that we model as random inputs, the objective function becomes a random variable, and notions of robustness have to be introduced for such an optimisation problem. In this paper, we present how to take these uncertainties into account by defining the relative-regret. This quantity allows us to compare the value of the objective function to the best performance achievable given a realisation of the random additional parameters. By controlling this relative-regret using a probabilistic constraint, we can then define a new family of estimators whose robustness with respect to the random inputs can be adjusted.
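
The sketch below is a hedged reading of this criterion (my own toy construction, not the authors' formulation): for each candidate control parameter it computes the relative regret against the best achievable value for each realisation of the random input, and selects the candidate with the smallest p-quantile of that regret, i.e. a VaR-type measure of relative regret.

```python
# Relative-regret selection of a control parameter under random inputs.
import numpy as np

rng = np.random.default_rng(8)
K = np.linspace(-1.0, 1.0, 21)                  # candidate control parameters
U = rng.normal(0.0, 0.5, size=1_000)            # samples of the random input

# toy objective J(k, u): a misfit with a random shift (purely illustrative)
J = (K[:, None] - U[None, :]) ** 2 + 0.1

best = J.min(axis=0)                            # best achievable value per realisation
rel_regret = (J - best) / best                  # relative regret of each k per realisation

p = 0.9
alpha_p = np.quantile(rel_regret, p, axis=1)    # p-quantile of the regret for each candidate
k_star = K[np.argmin(alpha_p)]
print("selected k:", round(k_star, 3), " alpha_p:", round(alpha_p.min(), 3))
```
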
... However, Cherny's result only applies to strong ρ-arbitrage, whereas the main focus of our paper is ρ-arbitrage, which is the more important of the two concepts. The working paper [45] also recognised the occurrence of ρ-arbitrage for a wide class of coherent risk measures, noting that minimising the risk subject to an inequality constraint on the expected return may fail to have a solution. They called this phenomenon an "acceptably free lunch". ...
Preprint
We revisit mean-risk portfolio selection in a one-period financial market where risk is quantified by a positively homogeneous risk measure $\rho$ on $L^1$. We first show that under mild assumptions, the set of optimal portfolios for a fixed return is nonempty and compact. However, unlike in classical mean-variance portfolio selection, it can happen that no efficient portfolios exist. We call this situation regulatory arbitrage, and prove that it cannot be excluded - unless $\rho$ is as conservative as the worst-case risk measure. After providing a primal characterisation, we focus our attention on coherent risk measures, and give a necessary and sufficient characterisation for regulatory arbitrage. We show that the presence or absence of regulatory arbitrage for $\rho$ is intimately linked to the interplay between the set of equivalent martingale measures (EMMs) for the discounted risky assets and the set of absolutely continuous measures in the dual representation of $\rho$. A special case of our result shows that the market does not admit regulatory arbitrage for Expected Shortfall at level $\alpha$ if and only if there exists an EMM $\mathbb{Q} \approx \mathbb{P}$ such that $\Vert \frac{\text{d}\mathbb{Q}}{\text{d}\mathbb{P}} \Vert_{\infty} < \frac{1}{\alpha}$.
... The reader is referred to [74,93,94] for work on this topic. The literature also includes various methodologies to address different types of drawdown-based metrics; see [95][96][97]. To illustrate the ideas above, we consider an example with a hypothetical account value V (k) ...
Thesis
Kelly Betting is a prescription for optimal resource allocation among a sequence of gambles which are typically repeated in an independent and identically distributed manner. Within this setting, the theory is aimed at maximizing the expected value of the logarithmic growth of wealth. Many papers in the existing literature indicate that such a maximization leads to a number of desirable properties. These include superior long-term growth of wealth, competitive optimality and a certain myopic property. This betting scheme has also been criticized as being too aggressive with respect to various risk metrics. To address this, many papers suggest ad-hoc ways for scaling down the bet size. In our first collection of results, we provide a new perspective on this aggressiveness issue. That is, we show that in some cases, the Kelly optimum may actually lead to bets which are too conservative rather than too aggressive. To make this more precise, we provide a result which we call the Restricted Betting Theorem. Subsequently, we point out some additional negatives of the Kelly-based theory by quantifying what difficulties are encountered with various approximations which are used in some of the literature. Throughout this dissertation, we emphasize the feedback control system point of view and the ramification of our results in the context of stock trading. Following the initial results above, we report on our research aimed at improving the existing Kelly-based theory. Our second collection of results, which we call Drawdown-Modulated Betting, is focused on mitigating the potentially large drawdown for a rather general class of betting schemes including the classical Kelly Betting scheme as special case. Motivated by the fact that this issue is of paramount concern from a risk management perspective, we prove a result, called the Drawdown Modulation Lemma, which characterizes investment strategies guaranteeing that the percentage drawdown is no greater than a prespecified level for all sequences of admissible returns. With the aid of this lemma, we show that investment functions can be expressed as a linear time-varying feedback control parameterized by a feedback gain and leading to satisfaction of the drawdown specification. Subsequently, a generalization of the lemma to the portfolio setting is also provided. In addition, with the risk-reward pair being drawdown and expected return, we prove that the drawdown-modulated feedback strategy “dominates” the classical linear time-invariant (LTI) feedback strategy. In the parlance of finance, the LTI strategy is said to be inefficient. The third collection of results in this dissertation, called Frequency-Based Betting, is focused on investigating how optimization and expected logarithmic growth performance vary with respect to betting frequency and on how our formulation and results apply to the stock market. Going beyond existing literature, in this part of the work, the frequency, or equivalently the number of stages n between trades, is included as an additional optimization parameter in our analysis. For a single stock, in the absence of transaction costs, we show that high-frequency trading is unbeatable in the sense of expected logarithmic growth. Moreover, we prove that if a stock satisfies a certain“sufficient attractiveness” condition, then the buy-and-hold strategy with n > 1 can match the performance of the high-frequency strategy with n = 1. 
Subsequently, when we generalize the notion of sufficient attractiveness from the single-stock case to a portfolio with multiple risky assets, a similar result is obtained. One highlight in this part of the dissertation involves the notion of a “dominant asset” which we define. When such an asset is present in the portfolio, we prove that the optimal performance requires putting “all eggs in one basket.” As a consequence, we see that the performance of the high-frequency trader is matched by that of the buy and holder. The final collection of results in this dissertation is motivated by the fact that a trader’s interactions with the market are not instantaneous. This leads us to extend our frequency-based framework to include delay in trade execution. For the case when a single unit of delay is present, in contrast to existing literature on Kelly Betting, it turns out that bankruptcy is a distinct possibility. This leads to a problem formulation in which the no-bankruptcy issue is cast as a state positivity problem. Subsequently, we prove two theorems. The first theorem gives sufficient conditions for avoidance of bankruptcy and the second gives necessary conditions. Some other technical results regarding state positivity are given as enrichments to the theory; e.g., we provide an example which suggests that when delay is present, the buy-and-hold strategy can achieve strictly higher performance than high-frequency trading.
... The objective φ evaluates the mean-semideviation risk measure ρ(·) ≡ E{·} + c‖R((·) − E{·})‖_{L_p} at F(·, W), i.e., φ(·) ≡ ρ(F(·, W)) [22]. The functional ρ generalizes the well-known mean-upper-semideviation [38], which is recovered by choosing R(·) ≡ (·)_+ ≡ max{·, 0}, and is one of the most popular risk measures in theory and practice [2,9,13,24,30,31,33,34]. For c ∈ [0,1], ρ is a convex risk measure [22] ([38], Section 6) on Z_p; thus, φ in (1) is convex on R^N as well. ...
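
For concreteness, here is a minimal sample-based sketch of the mean-semideviation risk measure described above, ρ(Z) = E[Z] + c‖(Z − E[Z])_+‖_{L_p}, in my own notation (the distribution, c, and p are illustrative; this is not the cited paper's code).

```python
# Sample estimate of the mean-upper-semideviation risk measure of order p.
import numpy as np

def mean_semideviation(z, c=0.5, p=2):
    z = np.asarray(z, float)
    upper = np.maximum(z - z.mean(), 0.0)              # (Z - E[Z])_+
    return z.mean() + c * (np.mean(upper ** p)) ** (1.0 / p)

rng = np.random.default_rng(9)
losses = rng.lognormal(0.0, 0.5, size=100_000)         # toy loss sample Z = F(x, W)
print(round(mean_semideviation(losses, c=0.5, p=2), 4))
```
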
Preprint
Full-text available
We present Free-MESSAGEp, the first zeroth-order algorithm for convex mean-semideviation-based risk-aware learning, which is also the first three-level zeroth-order compositional stochastic optimization algorithm, whatsoever. Using a non-trivial extension of Nesterov's classical results on Gaussian smoothing, we develop the Free-MESSAGEp algorithm from first principles, and show that it essentially solves a smoothed surrogate to the original problem, the former being a uniform approximation of the latter, in a useful, convenient sense. We then present a complete analysis of the Free-MESSAGEp algorithm, which establishes convergence in a user-tunable neighborhood of the optimal solutions of the original problem, as well as explicit convergence rates for both convex and strongly convex costs. Orderwise, and for fixed problem parameters, our results demonstrate no sacrifice in convergence speed compared to existing first-order methods, while striking a certain balance among the condition of the problem, its dimensionality, as well as the accuracy of the obtained results, naturally extending previous results in zeroth-order risk-neutral learning.
... Selection of a particular risk measure is application-specific, depends on the particular problem context, and involves consideration of the complexity in interpretation of the risk measure and the tractability of the resulting risk minimization problem. We have chosen variance for our analysis despite its known limitations (Rockafellar et al., 2002) in addressing fat tails and its equal penalty to ups and downs of a distribution. While the latter seems a smaller problem in our setting, which is intended to address the concerns of both buyer and seller (ups for a seller are downs for a buyer and vice versa), the fat-tail limitation is accepted, assuming the interpretation of variance as a risk proxy employed for benchmarking rather than a risk measure used in an application-specific formulation of an optimization problem. ...
Article
Full-text available
Global environmental goals and the Paris agreement declared the need to avoid dangerous climate change by reducing emissions of greenhouse gases with an ultimate goal to transform today's policies and reach climate neutrality before the end of the century. In the medium to long term, climate policies imply a rising CO2 price and consequent financial risk for carbon-intensive producers. In this context, there is a need for tools to buffer CO2 prices within the period of transition to greener technologies when the emission offsetting markets expose high volatility. Contracts for optional future purchase of carbon credits could provide emitters with a cost-efficient solution to address existing regulatory risks. At the same time, this would help to create much needed financing for the projects generating carbon credits in the future. This work presents the concept of a flobsion—a flexible option with benefit sharing—and demonstrates its advantages in terms of risk reduction for both seller and buyer as compared to both a "do nothing" strategy (offsetting at future market price) and a traditional option with a fixed strike price. The results are supported analytically and numerically, employing as a benchmark the dataset on historical CO2 prices from the European Emission Trading Scheme. Flobsion has the potential to extend the traditional option in financial applications beyond compliance markets.
... Furthermore, computing the variance or the standard deviation is mainly justified by its nice analytical, computational and statistical properties, but it is an ad hoc procedure and it is not clear whether better methods could be used. To overcome these shortfalls, Rockafellar et al. (2002) developed a general axiomatic framework for static deviation measures; see also Rockafellar et al. (2006a, 2006b, 2006c, 2007, 2008), Righi and Ceretta (2016) or Righi (2017). This work was inspired by the axiomatic construction of coherent and convex risk measures given in Artzner et al. (1999, 2000), Föllmer and Schied (2002) and Frittelli and Rosazza Gianin (2002). ...
Preprint
In this paper we analyze a dynamic recursive extension (as developed in Pistorius and Stadje (2017)) of the (static) notion of a deviation measure and its properties. We study distribution invariant deviation measures and show that the only dynamic deviation measure which is law invariant and recursive is the variance. We also solve the problem of optimal risk-sharing generalizing classical risk-sharing results for variance through a dynamic inf-convolution problem involving a transformation of the original dynamic deviation measures.
... Fábián et al. implemented this cutting-plane approach and found it highly effective. Applying (23) to the present scaled-tail model (4) results in the following cutting-plane representation: ...
Book
We formulate a portfolio planning model which is based on Second-order Stochastic Dominance as the choice criterion. This model is an enhanced version of the multi-objective model proposed by Roman, Darby-Dowman, and Mitra (2006); the model compares the scaled values of the different objectives, representing tails at different confidence levels of the resulting distribution. The proposed model can be formulated as a risk minimisation model where the objective function is a convex risk measure; we characterise this risk measure and the resulting optimisation problem. Moreover, our formulation offers a natural generalisation of the SSD-constrained model of Dentcheva and Ruszczynski (2006). A cutting-plane-based solution method for the proposed model is outlined. We present a computational study showing: (a) the effectiveness of the solution methods and (b) the improved modelling capabilities: the resulting portfolios have superior return distributions.
Article
Generating safe and non-conservative behaviors in dense, dynamic environments remains challenging for automated vehicles due to the stochastic nature of traffic participants' behaviors and their implicit interaction with the ego vehicle. This paper presents a novel planning framework, Multipolicy And Risk-aware Contingency planning (MARC), that systematically addresses these challenges by enhancing the multipolicy-based pipelines from both behavior and motion planning aspects. Specifically, MARC realizes a critical scenario set that reflects multiple possible futures conditioned on each semantic-level ego policy. Then, the generated policy-conditioned scenarios are further formulated into a tree-structured representation with a dynamic branchpoint based on the scene-level divergence. Moreover, to generate diverse driving maneuvers, we introduce risk-aware contingency planning, a bi-level optimization algorithm that simultaneously considers multiple future scenarios and user-defined risk tolerance levels. Owing to the more unified combination of behavior and motion planning layers, our framework achieves efficient decision-making and human-like driving maneuvers. Comprehensive experimental results demonstrate superior performance to other strong baselines in various environments.
Preprint
Full-text available
In this paper, we build on the class of f-divergence-induced coherent risk measures for portfolio optimization and derive the necessary optimality conditions formulated in CAPM format. We derive a new f-Beta, similar to the Standard Betas and previous work on Drawdown Betas. The f-Beta evaluates portfolio performance under an optimally perturbed market probability measure, and this family of Beta metrics gives various degrees of flexibility and interpretability. We conducted numerical experiments using DOW 30 stocks against a chosen market portfolio as the optimal portfolio to demonstrate the new perspectives provided by Hellinger-Beta as compared with Standard Beta and Drawdown Betas, based on choosing the squared Hellinger distance as the particular choice of f-divergence function in the general f-divergence-induced risk measures and f-Betas. We calculated Hellinger-Beta metrics based on deviation measures and further extended this approach to calculate Hellinger-Betas based on drawdown measures, resulting in another new metric, which we termed Hellinger-Drawdown Beta. We compared the resulting Hellinger-Beta values under various choices of the risk aversion parameter to study their sensitivity to increasing stress levels.
Article
Motion planning for autonomous robots and vehicles in the presence of uncontrolled agents remains a challenging problem, as the reactive behaviors of the uncontrolled agents must be considered. Since the uncontrolled agents usually demonstrate multimodal reactive behavior, the motion planner needs to solve a continuous motion planning problem under these behaviors, which contains a discrete element. We propose a branch Model Predictive Control (MPC) framework that plans over feedback policies to leverage the reactive behavior of the uncontrolled agent. In particular, a scenario tree is constructed from a finite set of policies of the uncontrolled agent, and the branch MPC solves for a feedback policy in the form of a trajectory tree, which shares the same topology as the scenario tree. Moreover, coherent risk measures such as the Conditional Value at Risk (CVaR) are used as a tuning knob to adjust the tradeoff between performance and robustness. The proposed branch MPC framework is tested on an autonomous vehicle planning problem in simulation, and on an autonomous quadruped robot alongside an uncontrolled quadruped in experiments. The results demonstrate interesting human-like behaviors, achieving a balance between safety and performance.
Article
This paper introduces a new dynamic portfolio performance risk measure called Expected Regret of Drawdown (ERoD), which is an average of the drawdowns exceeding a specified threshold (e.g. 20%). ERoD is similar to Conditional Drawdown-at-Risk (CDaR), which is the average of some percentage of the largest drawdowns. CDaR and ERoD portfolio optimization problems are equivalent and result in the same set of optimal portfolios. Necessary optimality conditions for ERoD portfolio optimization lead to Capital Asset Pricing Model (CAPM) equations. ERoD Beta, similar to the Standard Beta, relates returns of the securities to those of a market. ERoD Beta is equal to [average losses of a security over time intervals when the market is in drawdown exceeding the threshold] divided by [average losses of the market in drawdowns exceeding the threshold]. Therefore, a negative ERoD Beta identifies a security which has positive returns when the market has drawdowns exceeding the threshold. ERoD Beta accounts only for time intervals when the market is in drawdown and conceptually differs from Standard Beta, which does not distinguish up and down movements of the market. Moreover, ERoD Beta provides quite different results compared to the Downside Beta based on Lower Semi-deviation. ERoD Beta is conceptually close to CDaR Beta, which is based on a percentage of worst-case market drawdowns. However, ERoD Beta has some advantage compared to CDaR Beta because the magnitude of the drawdowns is known (e.g. exceeding a 20% threshold), while CDaR Beta is based on a percentage of the largest drawdowns with unknown magnitude. We have built a website reporting CDaR and ERoD Betas for stocks and the S&P 500 index as an optimal market portfolio. The case study showed that CDaR and ERoD Betas exhibit persistence over time and can be used in risk management and portfolio construction.
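
To fix ideas, here is a simplified illustration (my own sketch on simulated data, not the paper's implementation) of the two drawdown-based quantities compared in this abstract: from a cumulative-value series, the drawdown path is computed, then ERoD averages the drawdown values exceeding a threshold while CDaR averages the largest (1 − α) fraction of drawdown values.

```python
# ERoD and CDaR computed from a simulated cumulative-value path.
import numpy as np

rng = np.random.default_rng(10)
returns = 0.0003 + 0.01 * rng.standard_normal(2_000)
value = np.cumprod(1.0 + returns)

running_max = np.maximum.accumulate(value)
drawdown = 1.0 - value / running_max            # drawdown path, values in [0, 1)

def erod(dd, threshold=0.2):
    exceed = dd[dd > threshold]                  # drawdowns exceeding the threshold
    return exceed.mean() if exceed.size else 0.0

def cdar(dd, alpha=0.9):
    k = max(1, int(round((1.0 - alpha) * dd.size)))
    return np.sort(dd)[-k:].mean()               # average of the (1 - alpha) largest drawdowns

print("max drawdown:", round(drawdown.max(), 3),
      " ERoD(20%):", round(erod(drawdown), 3),
      " CDaR(90%):", round(cdar(drawdown), 3))
```
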
Article
Full-text available
We revisit mean-risk portfolio selection in a one-period financial market where risk is quantified by a positively homogeneous risk measure ρ. We first show that under mild assumptions, the set of optimal portfolios for a fixed return is nonempty and compact. However, unlike in classical mean-variance portfolio selection, it can happen that no efficient portfolios exist. We call this situation ρ-arbitrage, and prove that it cannot be excluded unless ρ is as conservative as the worst-case risk measure. After providing a primal characterization of ρ-arbitrage, we focus our attention on coherent risk measures that admit a dual representation and give a necessary and sufficient dual characterization of ρ-arbitrage. We show that the absence of ρ-arbitrage is intimately linked to the interplay between the set of equivalent martingale measures (EMMs) for the discounted risky assets and the set of absolutely continuous measures in the dual representation of ρ. A special case of our result shows that the market does not admit ρ-arbitrage for Expected Shortfall at level α if and only if there exists an EMM whose density satisfies a suitable bound.
Article
We give a dynamic extension of the (static) notion of a deviation measure. We also study distribution-invariant deviation measures and show that the only dynamic deviation measure that is law invariant and recursive is the variance.
Article
In this paper we propose a new measure of information called failure extropy and its dynamic version, the dynamic failure extropy. Some properties of the proposed measure, including its applications, are studied. We also discuss the stochastic ordering of failure extropy and dynamic failure extropy and present certain results based on it. Moreover, we develop some characterization results in terms of the failure extropy of the nth order statistic. Nonparametric estimators for the proposed measure are also obtained. A Monte Carlo simulation study and an illustration using a real data set are carried out to verify the performance of the estimators.
Article
We propose a two-attribute procedure for selecting the most cost-effective transmission expansion alternative under risk. The procedure randomly generates sets of realizations of the random parameters of the problem, and for each set an optimization model that considers potential expansion options is solved. The mean and variance of the objective function associated with each optimal solution are the selection attributes. The optimal solution that has the smallest mean and variance is deemed the best expansion alternative. The procedure guarantees that the selected alternative is indeed the best with a probability that satisfies or exceeds a user-specified probability of correct selection. For the numerical examples of this paper, the levels of load and generation and the availability of generators and lines are considered random variables. Numerical examples are used to demonstrate the advantages of the proposed selection criterion over the traditional expected value selection criterion.
Thesis
Full-text available
The growing penetration of renewable energy sources in electricity systems requires adapting operation models to face the inherent variability and uncertainty of wind or solar generation. In addition, the volatility of fuel prices (such as natural gas) and the uncertainty of natural hydraulic inflows require taking all these sources of uncertainty into account within the operation planning of the generation system. Thus, stochastic optimization techniques have been widely used in this context. From the point of view of system operation, the introduction of wind and solar generation in the mix has forced conventional generators to follow more technically demanding schedules, for example increasing the number of start-up and shutdown decisions during the week or facing more pronounced ramps. From the point of view of the market, all these technical issues are transferred to market prices, which become more volatile. This thesis focuses on the problem of risk management using the Conditional Value at Risk (CVaR) as a coherent risk measure. The thesis presents a novel iterative method that can be used by a market agent to optimize its operating decisions in the short term when the uncertainty is characterized by a set of random variable scenarios. The thesis analyses how the problem of risk management can be decomposed by means of Lagrangian relaxation techniques and Benders decomposition, and shows that the proposed iterative algorithm (Iterative-CVaR) converges to the same solution as the direct optimization setting. The algorithm is applied to two typical problems faced by agents: 1) optimization of the operation of a combined cycle power plant (CCGT) that has to cope with the volatility of the spot market price in order to build the supply curve for the futures market, and 2) a strategic unit-commitment model. In the second part of the thesis, the problem of market equilibrium is studied to model the interaction between several generating companies with mixed generation portfolios (thermal, hydraulic and renewable). The thesis analyses how the Nash equilibrium solution is modified at different risk-aversion levels of the agents. In particular, it studies how the management of hydroelectric reservoirs is modified over the annual horizon when agents are risk-averse, and compares this with the risk-neutral solution, which coincides with centralized planning when the objective is the minimization of expected operational cost.
Thesis
Full-text available
Selecting a production strategy for oil field development is complex because multiple uncertainties affect decisions. Project value is maximized when uncertainty is managed by: (1) acquiring information to reduce reservoir uncertainty; (2) defining a flexible production strategy, allowing system modifications as uncertainty unfolds over time; or (3) defining a robust production strategy, ensuring good performance without system modifications after production has started. However, decision-making is often subjective and based on intuition or professional experience because of the lack of objective criteria in the literature. In this study, we aimed to provide easy-to-apply decision criteria, while maintaining the complexity of the problem, reducing the subjectivity of: (1) the construction and assessment of the risk curve; (2) the selection of the production strategy; and (3) the selection of actions to manage uncertainty. For the construction of the risk curve, we compared two techniques: the well-established Monte Carlo with joint proxy models and the recently proposed discretized Latin Hypercube with geostatistics, presenting their strengths and limitations. For the selection of the production strategy, we proposed a new function that combines the well-known expected value with lower and upper semi-deviations from a benchmark return, quantifying the downside risk and upside potential of production strategies. We applied this function to select production strategies and to estimate the expected values of information, flexibility, and robustness. We selected actions to manage uncertainty using predefined candidate production strategies, optimized for representative models of the uncertain system. We proposed probabilistic decision structures to assess the potential for information, flexibility, and robustness, incorporating (1) characteristics of the field and the types of uncertainties; (2) available resources and costs; and (3) the decision maker's attitude and objectives. Finally, we proposed an integrated approach looking at project sensitivity to uncertainty and at the effects of uncertainties on production strategy selection. Thus, we identify the best course of action to manage uncertainty, either reducing it with information or protecting the system with robustness and flexibility.
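The abstract does not spell out the combination function, so the following is a hypothetical sketch using first-order semi-deviations from a benchmark and illustrative weights; the function names, `w_down`, and `w_up` are assumptions, not the thesis's calibration.

```python
import numpy as np

def semi_deviations(npv_samples, benchmark):
    """First-order lower and upper semi-deviations of sampled project values
    (e.g. NPVs across scenarios) relative to a benchmark value."""
    x = np.asarray(npv_samples, dtype=float)
    lower = np.mean(np.maximum(benchmark - x, 0.0))   # downside risk
    upper = np.mean(np.maximum(x - benchmark, 0.0))   # upside potential
    return lower, upper

def risk_adjusted_value(npv_samples, benchmark, w_down=0.5, w_up=0.25):
    """Hypothetical objective of the kind described: expected value penalized
    by the downside semi-deviation and rewarded for the upside semi-deviation."""
    x = np.asarray(npv_samples, dtype=float)
    lower, upper = semi_deviations(x, benchmark)
    return x.mean() - w_down * lower + w_up * upper
```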
Article
Full-text available
The volatility of returns is probably the most widely used risk measure for real estate. This is rather surprising since a number of studies have cast doubts on the view that volatility can capture the manifold risks attached to properties and corresponds to the risk attitude of investors. A central issue in this discussion is the statistical properties of real estate returns—in contrast to neoclassical capital market theory they are mostly non-normal and often unknown, which render many statistical measures useless. Based on a literature review and an analysis of data from Germany we provide evidence that volatility alone is inappropriate for measuring the risk of direct real estate.
Article
We propose a risk-aware motion planning and decision-making method that systematically adjusts the safety and conservativeness in an environment with randomly moving obstacles. The key component of this method is the conditional value-at-risk (CVaR) used to measure the safety risk that a robot faces. Unlike chance constraints, CVaR constraints are coherent, convex, and distinguish between tail events. We propose a two-stage method for safe motion planning and control: A reference trajectory is generated by using RRT* in the first stage, and then a receding horizon controller is employed to limit the safety risk by using CVaR constraints in the second stage. However, the second stage problem is nontrivial to solve, as it is a triple-level stochastic program. We develop a computationally tractable approach through 1) a reformulation of the CVaR constraints; 2) a sample average approximation; and 3) a linearly constrained mixed integer convex program formulation. The performance and utility of this risk-aware method are demonstrated via simulation using a 12-dimensional model of quadrotors.
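The CVaR-constraint reformulation mentioned in step 1) typically relies on the standard Rockafellar–Uryasev auxiliary-variable representation; a sketch of that step and its sample average approximation is given below (the paper's exact formulation may differ):
\[
\mathrm{CVaR}_\alpha(Z) \le \delta \iff \exists\, t \in \mathbb{R}: \; t + \frac{1}{1-\alpha}\,\mathbb{E}\big[(Z - t)_+\big] \le \delta,
\]
and with samples \(z_1,\dots,z_N\) the sample average approximation reads
\[
t + \frac{1}{(1-\alpha)N} \sum_{j=1}^{N} (z_j - t)_+ \le \delta,
\]
which becomes linear after introducing auxiliary variables \(u_j \ge z_j - t,\; u_j \ge 0\).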
Article
Full-text available
Market imperfections call into question the suitability of the CAPM for deriving the cost of capital. Valuation by incomplete replication is a valuation concept that takes capital market imperfections into account and derives the risk-adjusted cost of capital (or risk discounts) on the basis of corporate or investment planning and risk analysis. The risk measure is derived consistently (using risk analysis and Monte Carlo simulation) from the cash flows to be valued, that is, from the earnings risk. Historical stock returns of the valuation object are therefore not necessary. It can be shown that the valuation result of the CAPM can be derived, as a special case for perfect capital markets, using the approach of incomplete replication.
Book
Full-text available
An instructive story about dealing with the opportunities and dangers (risks) of corporate management, and the many practical problems this raises, told as a fictional dialogue between the executive board and a risk manager.
Article
In this paper, we consider cash management from a multidimensional perspective in which cost and risk are desired goals to minimize. Cash managers interested in minimizing risk need to select the most appropriate risk measure according to their particular needs. In order to assess the quality of alternative risk measures, we empirically compare eight different risk measures in terms of the combined cost–risk performance of a cash management model. To this end, we rely on goal programming to derive optimal solutions for cash management models. Our results show that risk measures based on cost deviations better capture risk in comparison to those based on a reference cash balance. The methodology proposed in this paper allows cash managers to propose and evaluate new risk measures.
Article
A method to hedge variable annuities in the presence of basis risk is developed. A regime-switching model is considered for the dynamics of market assets. The approach is based on a local optimization of risk and is therefore very tractable and flexible. The local optimization criterion is itself optimized to minimize capital requirements associated with the variable annuity policy, the latter being quantified by the Conditional Value-at-Risk (CVaR) risk metric. In comparison to benchmarks, our method is successful in simultaneously reducing capital requirements and increasing profitability. Indeed the proposed local hedging scheme benefits from a higher exposure to equity risk and from time diversification of risk to earn excess return and facilitate the accumulation of capital. A robust version of the hedging strategies addressing model risk and parameter uncertainty is also provided.
Article
Full-text available
We extend the definition of coherent risk measures, as introduced by Artzner, Delbaen, Eber and Heath, to general probability spaces, and we show how to define such measures on the space of all random variables. We also give examples that relate the theory of coherent risk measures to game theory and to distorted probability measures. The mathematics is based on the characterisation of closed convex sets P_σ of probability measures that satisfy the property that every random variable is integrable for at least one probability measure in the set P_σ.
Article
Full-text available
In this paper we study both market risks and nonmarket risks, without the assumption of complete markets, and discuss methods of measuring these risks. We present and justify a set of four desirable properties for measures of risk, and call the measures satisfying these properties "coherent." We examine the measures of risk provided, and the related actions required, by SPAN, by the SEC/NASD rules, and by quantile-based methods. We demonstrate the universality of scenario-based methods for providing coherent measures. We offer suggestions concerning the SEC method. We also suggest a method to repair the failure of subadditivity of quantile-based methods.
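For reference, the four defining properties of a coherent risk measure \(\rho\) are usually stated as follows, in the common convention where \(X\) denotes a future net worth and the riskless rate is suppressed (Artzner et al. include it in the translation axiom):
\[
\begin{aligned}
&\text{Monotonicity:} && X \le Y \;\Rightarrow\; \rho(X) \ge \rho(Y),\\
&\text{Translation invariance:} && \rho(X + c) = \rho(X) - c \quad \text{for deterministic } c,\\
&\text{Positive homogeneity:} && \rho(\lambda X) = \lambda\,\rho(X) \quad \text{for } \lambda \ge 0,\\
&\text{Subadditivity:} && \rho(X + Y) \le \rho(X) + \rho(Y).
\end{aligned}
\]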
Article
Full-text available
Expected shortfall (ES) in several variants has been proposed as a remedy for the deficiencies of value-at-risk (VaR), which in general is not a coherent risk measure. In fact, most definitions of ES lead to the same results when applied to continuous loss distributions. Differences may appear when the underlying loss distributions have discontinuities. In this case even the coherence property of ES can get lost unless one takes care of the details in its definition. We compare some of the definitions of ES, pointing out that there is one which is robust in the sense of yielding a coherent risk measure regardless of the underlying distributions. Moreover, this ES can be estimated effectively even in cases where the usual estimators for VaR fail.
Article
Full-text available
The value-at-risk (VaR) and the conditional value-at-risk (CVaR) are two commonly used risk measures. We state some of their properties and make a comparison. Moreover, the structure of the portfolio optimization problem using the VaR and CVaR objective is studied. Keywords: risk measures, value-at-risk, conditional value-at-risk, portfolio optimization. Introduction: Let $Y$ be a random cost variable and let $F_Y$ be its distribution function, i.e. $F_Y(u) = P\{Y \le u\}$. Let $F_Y^{-1}(v)$ be its right-continuous inverse, i.e. $F_Y^{-1}(v) = \inf\{u : F_Y(u) > v\}$. When no confusion may occur, we write simply $F$ instead of $F_Y$. For a fixed level $\alpha$, we define (as usual) the value-at-risk $\mathrm{VaR}_\alpha$ as the $\alpha$-quantile, i.e. $\mathrm{VaR}_\alpha(Y) = F^{-1}(\alpha)$ (1.1). The conditional value-at-risk $\mathrm{CVaR}_\alpha$ is defined as the solution of an optimization problem $\mathrm{CVaR}_\alpha(Y) := \inf\{a + \tfrac{1}{1-\alpha}\, E[Y - a]_+ : a \in \mathbb{R}\}$ (1.2). Here $[z]_+ = \max(z, 0)$. Uryasev and Rockafellar (1999) have shown ...
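As a minimal numerical illustration of (1.1) and (1.2), the sketch below estimates VaR and CVaR from a sample of the cost variable, using the fact that the infimum in (1.2) is attained at an α-quantile (for discrete samples the exact minimizer can differ slightly, as the Expected Shortfall literature discusses). Function and variable names are illustrative, not from the cited paper.

```python
import numpy as np

def var_cvar(costs, alpha=0.95):
    """Sample VaR_alpha and CVaR_alpha of a cost variable Y, following (1.1)-(1.2).

    The objective a + (1/(1-alpha)) * E[(Y - a)_+] is minimized over a; its
    minimizer is an alpha-quantile of Y, so we evaluate it at the empirical quantile."""
    y = np.asarray(costs, dtype=float)
    a = np.quantile(y, alpha)                                # empirical VaR_alpha
    cvar = a + np.mean(np.maximum(y - a, 0.0)) / (1.0 - alpha)
    return a, cvar

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=100_000)        # illustrative cost sample
print(var_cvar(sample, alpha=0.95))  # roughly (1.645, 2.063) for a standard normal
```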
Article
Several books have recently been published describing applications of the theory of conjugate convex functions to duality in problems of optimization. The finite-dimensional case has been treated by Stoer and Witzgall [25] and Rockafellar [13] and the infinite-dimensional case by Ekeland and Temam [3] and Laurent [9]. However, these are long works concerned also with many other issues. The purpose of these lecture notes is to provide a relatively brief introduction to conjugate duality in both finite- and infinite-dimensional problems. The material is essentially to be regarded as a supplement to the book Convex Analysis [13]; the same approach and results are presented, but in a broader formulation and more selectively. However, the order of presentation differs from [13]. I have emphasized more from the very beginning the fundamental importance of the concepts of Lagrangian function, saddle-point and saddle-value. Instead of first outlining everything relevant about conjugate convex functions and then deriving its consequences for optimization, I have tried to introduce areas of basic theory only as they became needed and their significance for the study of dual problems more apparent. In particular, general results on the calculation of conjugate functions have been postponed nearly to the end, making it possible to deduce more complete versions of the formulas by means of the duality theory itself. I have also attempted to show just where it is that convexity is needed, and what remains true if certain convexity or lower-semicontinuity assumptions are dropped. The notation and terminology of [13] have been changed somewhat to make an easier introduction to the subject. Thus the concepts of "bifunction" and "convex process" have been omitted, even though they are needed in the larger picture to see how the results on optimization problems fit in with other aspects of duality that have long been a mainstay in mathematical analysis. The duality theorem for linear programming problems, for instance, turns out to be an analogue of an algebraic identity relating a linear transformation and its adjoint. For more on this point of view and its possible fertility for applications such as to mathematical economics, see [24].
Article
Fundamental properties of conditional value-at-risk (CVaR), as a measure of risk with significant advantages over value-at-risk (VaR), are derived for loss distributions in finance that can involve discreteness. Such distributions are of particular importance in applications because of the prevalence of models based on scenarios and finite sampling. CVaR is able to quantify dangers beyond VaR and, moreover, it is coherent. It provides optimization short-cuts which, through linear programming techniques, make practical many large-scale calculations that could otherwise be out of reach. The numerical efficiency and stability of such calculations, shown in several case studies, are illustrated further with an example of index tracking.
Article
A new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR, also called Mean Excess Loss, Mean Shortfall, or Tail VaR, is in any case considered to be a more consistent measure of risk than VaR. Central to the new approach is a technique for portfolio optimization which calculates VaR and optimizes CVaR simultaneously. This technique is suitable for use by investment companies, brokerage firms, mutual funds, and any business that evaluates risks. It can be combined with analytical or scenario-based methods to optimize portfolios with large numbers of instruments, in which case the calculations often come down to linear programming or nonsmooth programming. The methodology can be applied also to the optimization of percentiles in contexts outside of finance.
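The abstract notes that scenario-based CVaR optimization often reduces to linear programming. The sketch below shows one standard form of that linear program, assuming long-only weights, full investment, and equiprobable scenarios; the data, constraint set, and parameter values are illustrative and not taken from the cited paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, n, alpha = 500, 4, 0.95                  # scenarios, assets, confidence level
R = rng.normal(0.001, 0.02, size=(N, n))    # illustrative scenario returns

# Variables z = (x_1..x_n, a, u_1..u_N): weights, VaR estimate, shortfall slacks.
# Objective: minimize a + 1/((1-alpha)*N) * sum_j u_j.
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1.0 - alpha) * N))])

# u_j >= -R_j x - a  (scenario loss above a), written as  -R_j x - a - u_j <= 0.
A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)

# Full investment: sum_i x_i = 1.
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)
b_eq = np.array([1.0])

# Long-only weights, free VaR variable, nonnegative slacks.
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
weights, var_level, cvar_value = res.x[:n], res.x[n], res.fun
print(weights, var_level, cvar_value)
```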
Article
We study a space of coherent risk measures M_φ obtained as certain expansions of coherent elementary basis measures. In this space, the concept of a "risk aversion function" φ naturally arises as the spectral representation of each risk measure in a space of functions of confidence level probabilities. We give necessary and sufficient conditions on φ for M_φ to be a coherent measure. We find in this way a simple interpretation of the concept of coherence and a way to map any rational investor's subjective risk aversion onto a coherent measure and vice versa. We also provide for these measures their discrete versions M_φ^{(N)} acting on finite sets of N independent realizations of a random variable, which are shown not only to be coherent measures for any fixed N, but also to be consistent estimators of M_φ for large N.
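For reference, the spectral representation described here is usually written as follows, for a profit-and-loss variable \(X\) with quantile function \(q_X\); sign and level conventions vary across papers:
\[
M_\phi(X) = -\int_0^1 \phi(p)\, q_X(p)\, dp,
\]
and \(M_\phi\) is coherent exactly when the risk spectrum \(\phi\) is nonnegative, nonincreasing, and normalized so that \(\int_0^1 \phi(p)\,dp = 1\). Expected shortfall at level \(\alpha\) corresponds to the flat spectrum \(\phi(p) = \tfrac{1}{\alpha}\mathbf{1}_{\{p \le \alpha\}}\).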
Article
This paper examines numerical functionals defined on function spaces by means of integrals having certain convexity properties. The functionals are themselves convex, so they can be analysed in the light of the theory of conjugate convex functions, which has recently undergone extensive development. The results obtained are applicable to Orlicz space theory and in the study of various extremum problems in control theory and the calculus of variations.
Article
Formulas are derived in this paper for the conjugates of convex integral functionals on Banach spaces of measurable or continuous vector-valued functions. These formulas imply the weak compactness of certain convex sets of summable functions, and they thus have applications in the existence theory and duality theory for various optimization problems.They also yield formulas for the subdifferentials of integral functionals, as well as characterizations of supporting hyperplanes and normal cones.
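The central conjugacy formula of this line of work can be stated informally as follows; the precise measurability ("normal integrand"), decomposability, and finiteness conditions are those given in the paper:
\[
I_f(x) = \int_T f\big(t, x(t)\big)\, d\mu(t)
\quad\Longrightarrow\quad
I_f^*(y) = \int_T f^*\big(t, y(t)\big)\, d\mu(t),
\]
where \(f^*(t,\cdot)\) denotes the convex conjugate of \(f(t,\cdot)\) and \(y\) ranges over a suitably paired space of measurable functions.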
Article
Value-at-Risk measures the potential loss on a portfolio, where the potential loss is linked directly to the probability of large, adverse movements in market prices. This paper considers four classes of Value-at-Risk model: variance-covariance models; historical-simulation models; Monte-Carlo simulation models; and extreme-value estimation models. Using portfolio data from all Australian banks over the past ten years, we compare the performance of specific implementations of each of the four Value-at-Risk model classes. Performance assessment is based on a range of measures that address the conservatism, accuracy and efficiency of each model.
Article
This paper studies the rank-dependent model of choice under uncertainty proposed by J. Quiggin in 1982 and elaborated by M. E. Yaari in 1984. First, a rigorous axiomatic foundation for the model is provided. A very close analogy with expected utility theory is drawn, permitting a considerably simplified treatment. Risk aversion and its measurement are then studied; two characterizations, one weaker and one stronger, are presented in addition to the one considered by Yaari. Lastly, risk aversion and other properties of the model are related to empirically observed departures from expected utility maximizing behavior. Copyright 1987 by Royal Economic Society.
Article
This paper investigates the consequences of the following modification of Expected Utility theory: instead of requiring independence with respect to probability mixtures of risky prospects, require independence with respect to direct mixing of payments of risky prospects. A new theory of choice under risk, a so-called Dual theory, is obtained. Within this new theory, the following questions are considered: (1) numerical representation of preferences; (2) properties of the utility function; (3) the possibility of resolving the "paradoxes" of Expected Utility theory; (4) the characterization of risk aversion; and (5) comparative statics. The paper ends with a discussion of other non-Expected Utility theories proposed recently. Copyright 1987 by The Econometric Society.
G. C. Pflug, Some Remarks on the Value-at-Risk and the Conditional Value-at-Risk, in Probabilistic Constrained Optimization: Methodology and Applications (S. P. Uryasev, ed.), Kluwer, 2000, 278–287.
R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer-Verlag, Berlin, 1998.
F. Delbaen, Coherent Risk Measures (draft lecture notes), Pisa, 2000.