Article

Taking Risks behind the Veil of Ignorance


Abstract

A natural view in distributive ethics is that everyone’s interests matter, but the interests of the relatively worse off matter more than the interests of the relatively better off. I provide a new argument for this view. The argument takes as its starting point the proposal, due to Harsanyi and Rawls, that facts about distributive ethics are discerned from individual preferences in the “original position.” I draw on recent work in decision theory, along with an intuitive principle about risk taking, to derive the view.


... For instance, a slight variant of RDEU, prospect theory, is defended only as a descriptive theory [Tversky and Kahneman, 1992]. Furthermore, Buchak [2017] has also considered the theory in the setting of social choice, which is relevant to fair machine learning. The possible application of rank dependence in this context has been hinted at by other authors [Schmeidler, 1989, Quiggin, 2012], but not elaborated. ...
... The structural analogy is that a state w in a rational choice problem corresponds to an individual (subgroup) in social choice [Buchak, 2017], and a gamble then describes a social arrangement ("who gets what"). Expected utility theories ask the question: how should an individual value a gamble? ...
... Due to the structural analogy, it is not surprising that similar answers have been given here. Most prominently, expected utility theory in rational choice has average utilitarianism as its social counterpart [Buchak, 2017]. The analogy also yields an interesting interpretation of probability: an individual considers the possible "future selves" that would result from each outcome, which makes the question of how to value the gamble equivalent to the problem of finding a fair distribution among those future selves. ...
Preprint
Full-text available
Machine learning typically presupposes classical probability theory which implies that aggregation is built upon expectation. There are now multiple reasons to motivate looking at richer alternatives to classical probability theory as a mathematical foundation for machine learning. We systematically examine a powerful and rich class of such alternatives, known variously as spectral risk measures, Choquet integrals or Lorentz norms. We present a range of characterization results, and demonstrate what makes this spectral family so special. In doing so we demonstrate a natural stratification of all coherent risk measures in terms of the upper probabilities that they induce by exploiting results from the theory of rearrangement invariant Banach spaces. We empirically demonstrate how this new approach to uncertainty helps tackle practical machine learning problems.
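As a minimal illustration of the spectral family this abstract describes: a spectral risk measure is a weighted average of the sorted outcomes, with non-increasing weights placed on the worse ones. The function name and example weight vectors below are illustrative assumptions, not taken from the paper; this is a sketch of the idea, not the authors' implementation.

```python
def spectral_risk(losses, phi):
    """Spectral risk measure: a weighted average of losses sorted worst
    first, where the weight vector phi is non-negative, non-increasing,
    and sums to 1. Uniform weights recover the plain mean; piling the
    weight onto the worst outcomes yields expected-shortfall-style
    (CVaR-like) measures.
    """
    xs = sorted(losses, reverse=True)  # worst (largest loss) first
    assert len(xs) == len(phi), "one weight per outcome"
    return sum(w * x for w, x in zip(phi, xs))

losses = [1, 2, 3, 4]
spectral_risk(losses, [0.25, 0.25, 0.25, 0.25])  # uniform weights: the mean, 2.5
spectral_risk(losses, [0.5, 0.5, 0.0, 0.0])      # tail-weighted: 3.5
```

Concentrating weight on the worst-ranked outcomes is exactly the rank dependence discussed in the snippets above: the measure is a Choquet integral with respect to a distorted (non-additive) probability.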
... Rather, Bob is risk-averse: for Bob, worse outcomes play a larger role in determining the value of a gamble than better ones. We can model the disagreement between Alice and Bob in the framework of REU theory. According to REU theory, two agents with the same probability and utility function can disagree about the value of a gamble if they have different attitudes towards risk. ...
... However, as it stands, Ramsey's method does not work for REU maximizers. Note that we assumed that Bob's utility function is a linear function of money. We could try to model risk-averse agents within EU theory by concave utility functions. ...
... See [8] for an accessible introduction. REU theory builds on earlier work on rank-dependent utility theory, in particular [22, 18, 14]. We stipulate that u(o_0) = 0. ...
Article
Full-text available
Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage this method to measure the subjective probabilities of a risk-weighted expected utility maximizer.
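The REU valuation the snippets above describe can be sketched in a few lines. The formula follows Buchak-style risk-weighted expected utility: start from the worst outcome's utility and add each utility increment weighted by the risk-transformed probability of doing at least that well. The function name and the sample risk function r(p) = p^2 are illustrative assumptions, not taken from the paper.

```python
def reu(outcomes, probs, u=lambda x: x, r=lambda p: p):
    """Risk-weighted expected utility in the style of Buchak (2013).

    Outcomes are sorted from worst to best by utility; each utility
    increment above the guaranteed minimum is weighted by r(probability
    of doing at least that well). With r(p) = p this reduces to ordinary
    expected utility; a convex r (e.g. r(p) = p**2) models risk aversion.
    """
    pairs = sorted(zip(outcomes, probs), key=lambda t: u(t[0]))
    utils = [u(x) for x, _ in pairs]
    ps = [p for _, p in pairs]
    value = utils[0]
    for i in range(1, len(utils)):
        tail = sum(ps[i:])  # probability of getting at least utils[i]
        value += r(tail) * (utils[i] - utils[i - 1])
    return value

# A 50-50 gamble between $0 and $100 with linear utility:
reu([0, 100], [0.5, 0.5])                      # 50.0 (reduces to expected utility)
reu([0, 100], [0.5, 0.5], r=lambda p: p ** 2)  # 25.0 (risk-averse valuation)
```

This makes the Alice/Bob disagreement in the snippet concrete: both agents share the probability and (linear) utility function, yet assign the gamble different values because their r functions differ.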
... More recently, Buchak (2016, 2017, 2019) has argued that other-regarding contexts require that we exhibit a high degree of risk-avoidance as a default. This default, she claims, should especially guide those of our decisions whose largest impacts are on future individuals, such as decisions about climate change. ...
... How does she arrive at this conclusion? According to Buchak (2016, 2017, 2019), we are morally required to choose in accordance with a risk attitude that is sensitive to the risk attitudes of the agents potentially affected by our decision. We ought not simply impose our own risk attitude on others. ...
Article
Full-text available
When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk‐avoidant risk function. This, in turn, has been claimed to require the use of a risk‐avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human survival. I raise objections to the claim that respect for others' risk attitudes requires risk‐avoidance when choosing for future generations. In particular, I argue that there is no known principle of interpersonal aggregation that yields acceptable results in variable population contexts and is consistent with a plausible ideal of respect for others' risk attitudes in fixed population cases.
... Instead, this would require isolating the 'pure risk preferences' of each citizen from his von Neumann-Morgenstern function. Addressing the same problem, Buchak (2017) suggests that "[…] assigning individuals the default risk-attitude - abstracting away from their actual risk-attitudes - is necessary to ensure that individual preferences reflect moral judgments". In her view, the default risk-attitude we should adopt when making choices for others is the most risk-avoidant of the reasonable risk-attitudes (2017). ...
... In her view, the default risk-attitude we should adopt when making choices for others is the most risk-avoidant of the reasonable risk-attitudes (2017). Hence, Buchak (2017) not only suggests isolating the risk preferences of individuals (as Pivato [2013] does) but also replacing them with the default risk-attitude. ...
Article
Full-text available
In 1953, and in extensions over the following two decades, John Harsanyi published a theorem suggesting that Bayesian rationality postulates together with interpersonal utility comparisons entail an average utilitarian theory. This article summarizes criticism of key assumptions of his account. First, irrational and antisocial preferences entail undesirable consequences. Second, the von Neumann-Morgenstern utility function is a cardinal theory of utility. Third, rational, self-interested, and impartial parties choose acceptable moral principles. Fourth, the observer assigns an equal probability to all positions in society. Fifth, different observers have uniform extended preferences and no personal preferences. This summary is followed by a discussion of model extensions that aim at making welfare interpersonally comparable. These accounts are based either on Harsanyi's original process of 'imaginative empathy' or on a process of 'deep imaginative empathy', including a conceptualization based on life years in perfect utility.
... Some scholars have criticized this 'single-individual-decision-theoretic' approach to the original position on the grounds that principled disagreements (what Ryan Muldoon calls 'disagreement in perspective') may still persist even behind the veil of ignorance, and hence 'the device of the 'veil of ignorance' in moral and political philosophy does not guarantee that all agents can be effectively reduced to a single agent selected at random' (Muldoon et al. 2014: 379). See Harsanyi (1953, 1955, 1977), Rawls (1971 [1999]), Buchak (2017), Moehler (2018) and Stefansson (2021). ...
... Of course, since the redistributive tax is imposed only on MAG's and not on LAG's earned income, the particular way in which increasing the redistributive tax rate t induces each group to work less differs: for the members of MAG, a higher redistributive tax rate t disincentivizes work because it means that they will earn less after-tax income for the same working time, which incentivizes them to allot more time to leisure instead of work; for the members of LAG, a higher redistributive tax rate t disincentivizes work because, with a higher rate, a greater portion of MAG's earned income will be redistributed to LAG, which makes it possible for LAG to achieve the same level of disposable income through the redistributive subsidy while working less. Many people conflate well-being/welfare and primary social goods when commenting on the difference principle (Moreno-Ternero and Roemer 2008; Buchak 2017; Gustafsson 2018). However, this is a key factor that must be kept distinct, as completely different distributional prescriptions will follow depending on which conception of advantage one uses to apply different principles of distributional justice. ...
Article
Full-text available
The original position together with the veil of ignorance have served as one of the main methodological devices to justify principles of distributive justice. Most approaches to this topic have primarily focused on the single person decision-theoretic aspect of the original position. This paper, in contrast, will directly model the basic structure and the economic agents therein to project the economic consequences and social outcomes generated either by utilitarianism or Rawls’s two principles of justice. It will be shown that when the differences in people’s productive abilities are sufficiently great, utilitarianism dominates Rawls’s two principles of justice by providing a higher level of overall well-being to every member of society. Whenever this is the case, the parties can rely on the Principle of Dominance (which is a direct implication of instrumental rationality) to choose utilitarianism over Rawls’s two principles of justice. Furthermore, when this is so, utilitarianism is free from one of its most fundamental criticisms that it ‘does not take seriously the distinction between persons’ (Rawls 1971 [1999]: 24).
... Finally, and most importantly, the generalized Gini family seems especially relevant to the present discussion in light of recent work by Lara Buchak. Buchak (2017) defends the generalized Gini family by appealing to her structurally analogous theory of decision under uncertainty. Buchak (2013: sec. 2.3) claims that her decision theory avoids the problem posed by Rabin's calibration theorem, i.e. that it can accommodate reasonable aversion to small-stakes risks without implying absurd aversion to large ...
... We can, however, introduce a different calibration dilemma for weak egalitarianism, which seems to us even more forceful than our prioritarian dilemma. This result has a somewhat different setup, which will require us to consider several distributions. Buchak (2017), however, defends the generalized Gini family on what she calls prioritarian grounds. Since the issue of nomenclature is irrelevant to our argument, we have no interest in defending our characterization of prioritarianism as committed to additive separability (thereby excluding non-utilitarian members of the generalized Gini family). ...
Article
Full-text available
This paper presents a new kind of problem in the ethics of distribution. The problem takes the form of several ‘calibration dilemmas’, in which intuitively reasonable aversion to small-stakes inequalities requires leading theories of distribution to recommend intuitively unreasonable aversion to large-stakes inequalities. We first lay out a series of such dilemmas for prioritarian theories. We then consider a widely endorsed family of egalitarian views and show that they are subject to even more forceful calibration dilemmas than prioritarian theories. Finally, we show that our results challenge common utilitarian accounts of the badness of inequalities in resources.
... This general conclusion can be translated to specific objections to specific positions in the distributive justice literature that combine (sometimes implicitly) consequentialist and non-consequentialist elements. I will demonstrate this with respect to a recent argument by Lara Buchak (2017). ...
... One such example is Buchak's (2017) own attempt to argue for what she calls ''relative prioritarianism'' (roughly, the position according to which, when making distributional decisions, the interests of those who are relatively worse off should get higher weights than the interests of those who are relatively better off). Buchak shows that, given some plausible assumptions, veil of ignorance reasoning combined with the decision theory she proposed leads to relative prioritarianism. ...
Article
Full-text available
I discuss the trilemma that consists of the following three principles being inconsistent:

The Common Principle: if one distribution, A, necessarily brings a higher total sum of personal value that is distributed in a more egalitarian way than another distribution, B, A is more valuable than B.

(Weak) ex-ante Pareto: if one uncertain distribution, A, is more valuable than another uncertain distribution, B, for each patient, A is more valuable than B.

Pluralism about attitudes to risk (Pluralism): the personal value of a prospect is a weighted sum of the values of the prospect's outcomes, but the weight each outcome gets might be different from the probability the prospect assigns to the outcome.
... Our article, which delves into the behavior of agents in the presence of risk and uncertainty, aligns to some extent with the literature elucidating the decisions of agents behind the veil of ignorance. Recently, Buchak (2017) brings back the veil of ignorance argument, employing it to advocate for an intermediary viewpoint dubbed relative prioritarianism, which stands between Harsanyi's and Rawls's perspectives. Afterwards, Stefánsson (2021) considers the aversion to ambiguity and offers backing for a version of egalitarianism; it is emphasized that the veil of ignorance argument does not endorse utilitarianism or prioritarianism unless we assume that individuals are indifferent to ambiguity. ...
Article
Full-text available
The axiomatic foundations of Bentham and Rawls solutions are discussed within the broader domain of cardinal preferences. It is unveiled that both solution concepts share all four of the following axioms: Nonemptiness, Anonymity, Unanimity, and Continuity. In order to fully characterize the Bentham and Rawls solutions, three variations of a consistency criterion are introduced and their compatibility with the other axioms is assessed. Each expression of consistency can be interpreted as a property of decision-making in risky or uncertain environments.
... Risk finds widespread use in industrial contexts, such as engineering, banking, security and waste-management - indeed, in any industry that deals with potential harm or loss of valuable resources. Within philosophy, ethicists debate the effects of risk on right action (Thomson, 1983; Buchak, 2017; Thoma, 2019; Lee-Stronach, 2018), while epistemologists develop theories of knowledge on which knowledge is incompatible with high levels of risk in the epistemic realm (Pritchard, 2015, 2016; Navarro, 2019, 2021). ...
Article
Full-text available
Three philosophical accounts of risk dominate the contemporary literature. On the probabilistic account, risk has to do with the probability of a disvaluable event obtaining; on the modal account, it has to do with the modal closeness of that event obtaining; on the normic account, it has to do with the normalcy of that event obtaining. The debate between these accounts has proceeded via counterexample-trading, with each account having some cases it explains better than others, and some cases that it cannot explain at all. In this article, we attempt to break the impasse between the three accounts of risk through a shift in methodology. We investigate the concept of risk via the method of conceptual reverse-engineering, whereby a theorist reconstructs the need that a concept serves for a group of agents in order to illuminate the shape of the concept: its intension and extension. We suggest that risk functions to meet our need to make decisions that reduce disvalue under conditions of uncertainty. Our project makes plausible that risk is a pluralist concept: meeting this need requires that risk takes different forms in different contexts. But our pluralism is principled: each of these different forms are part of one and the same concept, that has a ‘core-to-periphery’ structure, where the form the concept takes in typical cases (at its ‘core’) explains the form it takes in less typical cases (at its ‘periphery’). We then apply our findings to epistemic risk, to resolve an ambiguity in how ‘epistemic risk’ is standardly understood.
... What risk attitude to adopt when making decisions on behalf of another person while uncertain about their risk attitude is a tricky and important question, which I have not attempted to answer here. Some philosophers have claimed that we ought to be risk averse in such situations, but further work is required to explore the full range of possible strategies. ...
... In such cases, views on whether deference is required differ. Buchak (2017a) takes it to be intuitive that "we take ourselves to be required to defer to" the patient's risk attitude (p. 632). ...
Article
Full-text available
A growing number of decision theorists have, in recent years, defended the view that rationality is permissive under risk: Different rational agents may be more or less risk‐averse or risk‐inclined. This can result in them making different choices under risk even if they value outcomes in exactly the same way. One pressing question that arises once we grant such permissiveness is what attitude to risk we should implement when choosing on behalf of other people. Are we permitted to implement any of the rationally permissible risk attitudes, is there some specific risk attitude that is required when choosing for others, or are we required to defer to the risk attitudes of the people on whose behalf we are choosing? This article elaborates on this question, explains its wider practical and theoretical significance, provides an overview of existing answers, and explores how to go about providing a more systematic account of how to choose on behalf of others in risky contexts.
... For sophisticated discussion of these questions, see Buchak (2017; manuscript), Goodin (1982: chaps. 8-9), and Thoma (manuscript). It bears noting that the implications for our topic of many of the results in this literature are far from obvious and require further philosophical work. ...
Article
Full-text available
How should policymakers respond to the risk of technological unemployment that automation brings? First, I develop a procedure for answering this question that consults, rather than usurps, individuals’ own attitudes and ambitions towards that risk. I call this the insurance argument. A distinctive virtue of this view is that it dispenses with the need to appeal to a class of controversial reasons about the value of employment, and so is consistent with the demands of liberal political morality. Second, I appeal to the insurance argument to show that governments ought not simply to provide those who are displaced by machines with unemployment benefits. Instead, it must offer re-training programmes, as well as enact more general macroeconomic policies that create new opportunities for employment. My contribution is important not only because it helps us to resolve a series of urgent policy disputes—disputes that have been discussed extensively by labour market economists and policymakers, but less so by political philosophers—but also because my analysis sheds light on more general philosophical controversies relating to risk.
... Peterson 2017). Other probabilistic but more risk-averse ways to handle uncertainty in the original position can be found in Buchak (2017) and Stefánsson (2019). See our discussion after Definition 7 (Sect. ...
Article
Full-text available
John Rawls famously argued that the Difference Principle would be chosen by any rational agent in the original position. Derek Parfit and Philippe Van Parijs have claimed, contra Rawls, that it is not the Difference Principle which is implied by Rawls’ original position argument, but rather the more refined Lexical Difference Principle. In this paper, we study both principles in the context of social choice under ignorance. First, we present a general format for evaluating original position arguments in this context. We argue that in this format, the Difference Principle can be specified in three conceptually distinct ways. We show that these three specifications give the same choice recommendations, and can be grounded in an original position argument in combination with the well-known maximin rule. Analogously, we argue that one can give at least four plausible specifications of the Lexical Difference Principle, which however turn out to give different recommendations in concrete choice scenarios. We prove that only one of these four specifications can be grounded in an original position argument. Moreover, this one specification seems the least appealing from the viewpoint of distributive justice. This insight points towards a general weakness of original position arguments.
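The contrast between the Difference Principle and the Lexical Difference Principle that this abstract discusses can be made concrete with a toy sketch (illustrative code only; the paper works with formal choice-under-ignorance specifications, not this model):

```python
def maximin(options):
    """Difference Principle as maximin: choose the welfare distribution
    whose worst-off position is best (ties resolved arbitrarily by max())."""
    return max(options, key=min)

def leximin(options):
    """Lexical Difference Principle (leximin): compare worst-off positions
    first, break ties by the second-worst, and so on. Sorting each
    distribution ascending and comparing lexicographically does exactly this."""
    return max(options, key=sorted)

# Both options leave the worst off at welfare 1, so maximin is indifferent
# between them, but leximin prefers the one whose second-worst position
# is better:
a, b = [1, 5, 5], [1, 2, 9]
leximin([a, b])  # picks [1, 5, 5]
```

The toy case shows why the two principles can come apart in concrete choice scenarios, which is the wedge the abstract exploits.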
... Besides the kind of prioritarianism we will discuss, there are, for instance, deontic prioritarianism (see, e.g., Williams [2011] and Nebel [2017]), Fred Feldman's baseline-dependent prioritarianism (see Feldman 2016 and Herlitz 2020), and Lara Buchak's 'relative prioritarianism' [Buchak 2017]. ...
Article
Full-text available
This paper shows that versions of prioritarianism that focus at least partially on well-being levels at certain times conflict with conventional views of prudential value and prudential rationality. So-called timeslice prioritarianism, and pluralist views that ascribe importance to timeslices, hold that a benefit matters more, the worse off the beneficiary is at the time of receiving it. We show that views that evaluate outcomes in accordance with this idea entail that an agent who delays gratification makes an outcome worse, even if it is better for the agent and worse for no one else. We take this to show that timeslice prioritarianism and some pluralist views violate Weak Pareto, and we argue that these versions of prioritarianism are implausible.
... For related uses of IP theory, see, e.g., Levi (1977) and Gajdos and Kandil (2008). Debates about appropriate epistemic states (e.g., Buchak, 2017; Stefánsson, 2019) and decision-theoretic principles (e.g., Kurtulmuş, 2012; Liang, 2017; Gustafsson, 2018) behind the veil remain active. Stefánsson (2019), for example, considers ambiguity-averse preferences behind the veil, and finds that such preferences support a form of egalitarianism. Liang (2017), to take another example, employs cumulative prospect theory and finds an optimal form of inequality. ...
Article
Full-text available
Epistemic states of uncertainty play important roles in ethical and political theorizing. Theories that appeal to a "veil of ignorance," for example, analyze fairness or impartiality in terms of certain states of ignorance. It is important, then, to scrutinize proposed conceptions of ignorance and explore promising alternatives in such contexts. Here, I study Lerner's probabilistic egalitarian theorem in the setting of imprecise probabilities. Lerner's theorem assumes that a social planner tasked with distributing income to individuals in a population is "completely ignorant" about which utility functions belong to which individuals. Lerner models this ignorance with a certain uniform probability distribution, and shows that, under certain further assumptions, income should be equally distributed. Much of the criticism of the relevance of Lerner's result centers on the representation of ignorance involved. Imprecise probabilities provide a general framework for reasoning about various forms of uncertainty including, in particular, ignorance. To what extent can Lerner's conclusion be maintained in this setting?
... One potential motivation for RDU over two-factor views is that, because we are simply applying different positive weights to the marginal welfare of each individual, we clearly avoid any charge of 'leveling down': unlike on two-factor views, there is nothing even pro tanto good about reducing the welfare of a better-off individual - it is simply less bad than reducing the welfare of a worse-off individual. Versions of rank-discounted utilitarianism have been discussed and advocated under various names in both philosophy and economics, e.g. by Asheim and Zuber (2014) and Buchak (2017). In these contexts, the RDU value function is generally taken to have the following form: ...
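The snippet above cuts off before displaying the value function. For orientation only, one standard way to write a rank-discounted utilitarian value function, following Asheim and Zuber's rank-discounted generalized utilitarianism (a reconstruction that may differ from the cited text's exact notation), is:

```latex
V(x) = \sum_{k=1}^{n} \beta^{k}\, u\!\left(x_{(k)}\right), \qquad 0 < \beta < 1,
```

where \(x_{(1)} \le \dots \le x_{(n)}\) orders welfare levels from worst to best, so worse-off ranks receive strictly greater weight, exactly the "different positive weights on marginal welfare" the snippet describes.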
Preprint
Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction is practically important: additive axiologies support 'arguments from astronomical scale' which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a large future population, while non-additive axiologies need not. We show, however, that when there is a large enough 'background population' unaffected by our choices, a wide range of non-additive axiologies converge in their implications with some additive axiology -- for instance, average utilitarianism converges to critical-level utilitarianism and various egalitarian theories converge to prioritarianism. We further argue that real-world background populations may be large enough to make these limit results practically significant. This means that arguments from astronomical scale, and other arguments in practical ethics that seem to presuppose additive separability, may be truth-preserving in practice whether or not we accept additive separability as a basic axiological principle.
... This same problem of evidential redundancy plagues debunking arguments which assume from the outset that morality is risk averse (or that it is risk seeking). For instance, we might follow Buchak (2017) in arguing that when we don't know what the risk preferences of others are, we should err on the side of caution and choose the less risky option. We could then conclude that the risk seeking intuition in Small Ship is debunked. ...
Thesis
Debunking arguments use empirical evidence about our moral beliefs - in particular, about their causal origins, or about how they depend on various causes - in order to reach an epistemic conclusion about the trustworthiness of such beliefs. In this thesis, I investigate the scope and limits of debunking arguments, and their implications for what we should believe about morality. I argue that debunking arguments can in principle work - they are based on plausible epistemic premises, and at least some of them avoid putative problems concerning regress and redundancy. However, I also argue that some debunking arguments fall short because they are insufficiently supported by the empirical evidence. By considering different objections, analyses, and a case study, I explore the conditions for a successful debunking argument. Chapter 1 starts by providing an overview of debunking arguments - their structure, their variations, and common objections to them. Chapter 2 defends debunking arguments against counterexamples to their epistemic premises - counterexamples which, if effective, will show that debunking arguments cannot work in principle. I argue against the use of these counterexamples, and contend that we cannot merely deny the epistemic premises of a debunking argument. Chapter 3 defends debunking arguments against three further objections concerning how such arguments fit into the web of beliefs. The regress objection contends that debunking arguments make assumptions that commit us to a problematic regress. The findings redundancy objection contends that the empirical findings are unnecessary in a debunking argument. The argument redundancy objection alleges that debunking arguments assume what they set out to prove. Even if debunking arguments are in principle viable, some of them still fail because of poor empirical support. 
Chapter 4 argues that some global, evolutionary debunking arguments fall short - roughly because we are poorly placed to observe, intervene on, and predict what would happen from different evolutionary causes. In contrast, experimental evidence from moral psychology and behavioural economics will be better placed to support a debunking argument. Chapter 5 considers a novel debunking argument based on evidence of this kind - which shows how we overweight low probabilities in our decision-making, and underweight moderate to high probabilities. I explore debunking and vindicating arguments concerning our intuitions about risky aid. Chapter 6 proposes a Bayesian analysis of debunking arguments, which can guide us in revising our beliefs in response to debunking. I highlight further conditions for debunking to work, and propose a quantitative method for integrating different kinds of evidence in order to arrive at a rational credence about morality in light of debunking.
... This means that we assume that the veil of ignorance argument concerns only how welfare should be distributed (and not which rights and liberties we should grant people), which makes the argument more limited than the version developed by Rawls.7 The presentation closely follows Buchak's (2017).8 That is, unique up to a choice of scale and starting point. ...
Article
Full-text available
The veil of ignorance argument was used by John C. Harsanyi to defend Utilitarianism and by John Rawls to defend the absolute priority of the worst off. In a recent paper, Lara Buchak revives the veil of ignorance argument and uses it to defend an intermediate position between Harsanyi's and Rawls' that she calls Relative Prioritarianism. None of these authors explores the implications of allowing that agents behind the veil are sensitive to ambiguity. Allowing for ambiguity aversion, which is both the most commonly observed and a seemingly reasonable attitude to ambiguity, supports a version of Egalitarianism whose logical form is quite different from the theories defended by the aforementioned authors. Moreover, it turns out that the veil of ignorance argument supports neither standard Utilitarianism nor Prioritarianism unless we assume that rational people are insensitive to ambiguity.
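To see how ambiguity aversion behind the veil can favour equal distributions, here is a toy sketch (my own numbers, not the paper's) of maxmin expected utility: when the probability of occupying each social position is only known to lie in an interval, an ambiguity-averse agent evaluates each distribution by its worst-case expected utility.

```python
# Toy illustration of ambiguity aversion behind the veil (assumed numbers).
# Two welfare distributions over two positions; under ambiguity, the
# probability p of occupying position 1 is only known to lie in an interval,
# and the agent evaluates a distribution by its worst-case expected utility.

def maxmin_eu(w1, w2, p_low, p_high):
    """Worst-case expected utility when P(position 1) lies in [p_low, p_high].

    Expected utility is linear in p, so the minimum is at an endpoint.
    """
    return min(p * w1 + (1 - p) * w2 for p in (p_low, p_high))

unequal = maxmin_eu(1, 3, 0.3, 0.7)  # worst case: 0.7*1 + 0.3*3 = 1.6
equal = maxmin_eu(2, 2, 0.3, 0.7)    # about 2.0 regardless of p

# Under precise 50/50 probabilities both distributions have expected utility 2,
# but the ambiguity-averse agent strictly prefers the equal distribution.
print(unequal, equal)
```

With sharp 50/50 probabilities the two distributions tie, so only the ambiguity-averse evaluation breaks the tie in favour of equality, mirroring the paper's point that insensitivity to ambiguity is needed for the standard Utilitarian result.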
Article
Full-text available
Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction appears to be practically important: among other things, additive axiologies generally assign great importance to large changes in population size, and therefore tend to strongly prioritize the long-term survival of humanity over the interests of the present generation. Non-additive axiologies, on the other hand, need not assign great importance to large changes in population size. We show, however, that when there is a large enough `background population' unaffected by our choices, a wide range of non-additive axiologies converge in their implications with additive axiologies—for instance, average utilitarianism converges with critical-level utilitarianism and various egalitarian theories converge with prioritarianism. We further argue that real-world background populations may be large enough to make these limit results practically significant. This means that arguments from the scale of potential future populations for the astronomical importance of avoiding existential catastrophe, and other arguments in practical ethics that seem to presuppose additive separability, may succeed in practice whether or not we accept additive separability as a basic axiological principle.
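The convergence result can be illustrated numerically. The following sketch (my own toy numbers, not the authors') compares average utilitarianism with critical-level utilitarianism as the background population grows: with a large background population at welfare level c, average utilitarianism comes to rank foreground options the way a critical-level view with critical level c does.

```python
# Toy illustration (assumed numbers) of non-additive/additive convergence:
# average utilitarianism with a large background population at level c
# approaches critical-level utilitarianism with critical level c.

def average_utility(foreground, background_size, background_level):
    total = sum(foreground) + background_size * background_level
    return total / (len(foreground) + background_size)

def critical_level_value(foreground, c):
    # Additive: sum of well-being in excess of the critical level c.
    return sum(w - c for w in foreground)

c = 5.0
small = [8.0]            # one well-off person
large = [6.5, 6.5, 6.5]  # three moderately well-off people

for B in (2, 100, 10_000):
    prefers_large = average_utility(large, B, c) > average_utility(small, B, c)
    print(B, prefers_large)  # False at B=2; True once B is large

# The large-B ranking matches the critical-level comparison: 4.5 > 3.0.
print(critical_level_value(large, c), critical_level_value(small, c))
```

With a tiny background population the average view prefers the single well-off person, but as the background grows its ranking flips to agree with the additive critical-level comparison, which is the limit behaviour the abstract describes.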
Article
Suppose an agent is choosing between rescuing more people with a lower probability of success, and rescuing fewer with a higher probability of success. How should they choose? Our moral judgments about such cases are not well-studied, unlike the closely analogous non-moral preferences over monetary gambles. In this paper, I present an empirical study which aims to elicit the moral analogues of our risk preferences, and to assess whether one kind of evidence—concerning how they depend on outcome probabilities—can debunk them. I find significant heterogeneity in our moral risk preferences—in particular, moral risk-seeking and risk-neutrality are surprisingly popular. I also find that subjects’ judgments aren't probability-dependent, thus providing an empirical defense against debunking arguments from probability dependence.
Article
Longtermists argue we should devote much of our resources to raising the probability of a long happy future for sentient beings. But most interventions that raise that probability also raise the probability of a long miserable future, even if they raise the latter by a smaller amount. If we choose by maximising expected utility, this isn’t a problem; but, if we use a risk-averse decision rule, it is. I show that, with the same probabilities and utilities, a risk-averse decision theory tells us to hasten human extinction, not delay it. What’s more, I argue that morality requires us to use a risk-averse decision theory. I present this not as an argument for hastening extinction, but as a challenge to longtermism.
Article
This paper examines a recent argument in favour of strong precautionary action—possibly including working to hasten human extinction—on the basis of a decision-theoretic view that accommodates the risk-attitudes of all affected while giving more weight to the more risk-averse attitudes. First, we dispute the need to take into account other people’s attitudes towards risk at all. Second, we argue that a version of the non-identity problem undermines the case for doing so in the context of future people. Lastly, we suggest that we should not work to hasten human extinction, even if significant risk aversion is warranted.
Article
Full-text available
According to a common judgement, a social planner should often use a lottery to decide which of two people should receive a good. This judgement undermines one of the best-known arguments for utilitarianism, due to John C. Harsanyi, and more generally undermines axiomatic arguments for utilitarianism and similar views. In this paper we ask which combinations of views about (a) the social planner’s attitude to risk and inequality, and (b) the subjects’ attitudes to risk are consistent with the aforementioned judgement. We find that the class of combinations of views that can plausibly accommodate this judgement is quite limited. But one theory does better than others: the theory of chance-sensitive utility.
Article
A common view has it that since we are far likelier to be killed in some road or household accident than in a terror attack, our fear of the latter is exaggerated. I argue that terrorism's relatively limited death toll need not mean that fearing it is unreasonable, nor does it immediately imply that counter‐terrorism policies are unjustified – whatever other, legitimate concerns these policies give rise to. First, I argue that in the special case of terrorism, it is misleading to focus on risk per capita, as critics typically do. Second, while terrorism has a probabilistic component which should be relevant to decision‐making, risk is not entirely or even primarily what terrorism is all about. Third, I argue that fearing terrorism may be reasonable even while recognizing the small probability of personal harm. Due to terrorism's random character, the belief that one will escape harm rests on little more than statistical evidence. As I explain, this leaves some room for reasonable doubt, and a justified level of fear.
Article
Full-text available
I defend a weak version of the Pigou–Dalton principle for chances. The principle says that it is better to increase the survival chance of a person who is more likely to die rather than a person who is less likely to die, assuming that the two people do not differ in any other morally relevant respect. The principle justifies plausible moral judgements that standard ex post views, such as prioritarianism and rank-dependent egalitarianism, cannot accommodate. However, the principle can be justified by the same reasoning that has recently been used to defend the core axiom of ex post prioritarianism and egalitarianism, namely, Pigou–Dalton for well-being. The arguably biggest challenge for proponents of Pigou–Dalton for chances is that it violates state dominance for social prospects. However, I argue that we have independent reason for rejecting state dominance for social prospects, since it prevents a social planner from properly respecting people's preferences.
Article
Full-text available
Suppose we want to do the most good we can with a particular sum of money, but we cannot be certain of the consequences of different ways of making use of it. This article explores how our attitudes towards risk and ambiguity bear on what we should do. It shows that risk-avoidance and ambiguity-aversion can each provide good reason to divide our money between various charitable organizations rather than to give it all to the most promising one. It also shows how different attitudes towards risk and ambiguity affect whether we should give to an organization which does a small amount of good for certain or to one which does a large amount of good with some small or unknown probability.
Article
No existing normative decision theory adequately handles risk. Expected Utility Theory is overly restrictive in prohibiting a range of reasonable preferences. And theories designed to accommodate such preferences (for example, Buchak's (2013) Risk‐Weighted Expected Utility Theory) violate the Betweenness axiom, which requires that you are indifferent to randomizing over two options between which you are already indifferent. Betweenness has been overlooked by philosophers, and we argue that it is a compelling normative constraint. Furthermore, neither Expected nor Risk‐Weighted Expected Utility Theory allow for stakes‐sensitive risk‐attitudes—they require that risk matters in the same way whether you are gambling for loose change or millions of dollars. We provide a novel normative interpretation of Weighted‐Linear Utility Theory that solves all of these problems.
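For readers unfamiliar with the risk-weighted theory under discussion, here is a minimal sketch of how a Buchak-style REU value is computed. The risk function r(p) = p**2 is one common illustrative risk-averse choice, not anything prescribed by the paper; with r(p) = p the formula reduces to ordinary expected utility.

```python
# Sketch of risk-weighted expected utility (REU) for a finite gamble, given as
# a list of (probability, utility) pairs. Outcomes are ordered from worst to
# best, and each utility increment is weighted by r(probability of getting at
# least that much). The risk function r(p) = p**2 below is an illustrative
# risk-averse choice; r(p) = p recovers expected utility.

def reu(gamble, r):
    outcomes = sorted(gamble, key=lambda pu: pu[1])  # worst to best
    probs = [p for p, _ in outcomes]
    utils = [u for _, u in outcomes]
    value = utils[0]  # you get at least the worst outcome for sure
    for i in range(1, len(outcomes)):
        prob_at_least = sum(probs[i:])  # chance of u_i or better
        value += r(prob_at_least) * (utils[i] - utils[i - 1])
    return value

coin_flip = [(0.5, 0.0), (0.5, 100.0)]
print(reu(coin_flip, lambda p: p))       # expected utility: 50.0
print(reu(coin_flip, lambda p: p ** 2))  # risk-averse REU: 25.0
```

The risk-averse agent values the fair coin flip at 25 rather than 50, which is the kind of preference Expected Utility Theory cannot accommodate without distorting utilities.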
Article
Certain ethical views hold that we should pay more attention, even exclusive attention, to the worst-case scenario. Prominent examples include Rawls's Difference Principle and the Precautionary Principle. These views can be anchored in formal principles of decision theory, in two different ways. On the one hand, they can rely on ambiguity-aversion: the idea that we cannot assign sharp probabilities to various scenarios, and that if we cannot assign sharp probabilities, we should decide pessimistically, as if the probabilities are unfavorable. On the other hand, they can rely on risk-avoidance: the idea that we should pay more attention to worse scenarios, even when we can assign sharp probabilities. I distinguish these two foundations. I also show how they can be modified to support versions of these views that pay more but not exclusive attention to worst-case scenarios. Finally, I argue that risk-avoidance provides a superior foundation than ambiguity-aversion for the Difference Principle and the Precautionary Principle; in particular, it correctly identifies which ethical facts should matter to those who champion these principles.
Article
A lively topic of debate in decision theory over recent years concerns our understanding of the different risk attitudes exhibited by decision makers. There is ample evidence that risk-averse and risk-seeking behaviours are widespread, and a growing consensus that such behaviour is rationally permissible. In the context of clinical medicine, this matter is complicated by the fact that healthcare professionals must often make choices for the benefit of their patients, but the norms of rational choice are conventionally grounded in a decision maker’s own desires, beliefs and actions. The presence of both doctor and patient raises the question of whose risk attitude matters for the choice at hand and what to do when these diverge. Must doctors make risky choices when treating risk-seeking patients? Ought they to be risk averse in general when choosing on behalf of others? In this paper, I will argue that healthcare professionals ought to adopt a deferential approach, whereby it is the risk attitude of the patient that matters in medical decision making. I will show how familiar arguments for widely held anti-paternalistic views about medicine can be straightforwardly extended to include not only patients’ evaluations of possible health states, but also their attitudes to risk. However, I will also show that this deferential view needs further refinement: patients’ higher-order attitudes towards their risk attitudes must be considered in order to avoid some counterexamples and to accommodate different views about what sort of attitudes risk attitudes actually are.
Article
Full-text available
This paper combines considerations from ethics, medicine and public health policy to articulate and defend a systematic case for mask wearing mandates (MWM). The paper argues for two main claims of general interest in favour of MWM. First, MWM provide a more effective, just and fair way to tackle the ongoing COVID-19 pandemic than policy alternatives such as laissez-faire approaches, mask wearing recommendations and physical distancing measures. And second, the proffered objections against MWM may justify some exemptions for specific categories of individuals, but do not cast doubt on the justifiability of these mandates. Hence, unless some novel decisive objections are put forward against MWM, governments should adopt MWM.
Article
Full-text available
Suppose we assume that the parties in the original position took Kahneman and Tversky's prospect theory as constituting their general knowledge of human psychology that survives through the veil of ignorance. How would this change the choice situation of the original position? In this paper, I present what I call ‘prospect utilitarianism’. Prospect utilitarianism combines the utilitarian social welfare function with individual utility functions characterized by Kahneman and Tversky's prospect theory. I will argue that, once prospect utilitarianism is on the table, Rawls's original arguments in support of justice as fairness, as well as his arguments against utilitarianism, are, at best, inconclusive. This shows that how implausible it is to choose utilitarianism in the original position depends heavily on what one assumes to be the general knowledge of human psychology that the original contracting parties possess.
Article
Full-text available
In this essay I argue that even though egalitarianism and prioritarianism are different theories of social welfare, they can use the same social welfare measures. I present six different arguments for this thesis. The first argument is that conceptual connections between egalitarianism and prioritarianism ensure that any measure that works for either theory works for both. The second argument is that conditions necessary and sufficient to identify egalitarian and prioritarian measures, respectively, are equivalent. The third argument is that both egalitarianism and prioritarianism can use two standard measures, typically proposed for only one theory. The fourth to sixth arguments contend that four properties that have been proposed as distinctive of either egalitarian or prioritarian measures cannot distinguish between them. I conclude that any egalitarian measure is also a prioritarian measure, and vice versa.
Article
The original position has an elegance and power beyond most philosophical pictures. It has captured the attention of readers across the world through many generations of students, and is famous well beyond philosophical circles. Yet, as renowned as the original position has become, it is also typically misrepresented and misunderstood. In particular, John Rawls’ method of reasoning behind the veil of ignorance is frequently presented as drawing a conclusion mandated by rational choice theory. My aim, in this brief note, is to clarify the main purpose of the original position and to articulate its main defining features in contrast to this dominant misreading.
Book
Prioritarianism holds that improvements in someone's life (gains in well-being) are morally more valuable, the worse off the person would otherwise be. The doctrine is impartial, holding that a gain in one person's life counts exactly the same as an identical gain in the life of anyone equally well off. If we have some duty of beneficence to make the world better, prioritarianism specifies the content of the duty. Unlike the utilitarian, the prioritarian holds that we should not only seek to increase human well-being, but also distribute it fairly across persons, by tilting in favor of the worse off. A variant version adds that we should also give priority to the morally deserving – to saints over scoundrels. The view is a standard for right choice of individual actions and public policies, offering a distinctive alternative to utilitarianism (maximize total well-being), sufficiency (make everyone's condition good enough) and egalitarianism (make everyone's condition the same).
Article
Full-text available
Social decisions are often made under great uncertainty, in situations where political principles, and even standard subjective expected utility theory, do not apply smoothly. In the first section, we argue that the core of this problem lies in decision theory itself: it concerns how to act when we do not have an adequate representation of the context of the action and of its possible consequences. We thus distinguish two criteria to complement decision theory under ignorance: Laplace's principle of insufficient reason and Wald's maximin criterion. We then apply this analysis to political philosophy by contrasting Harsanyi's and Rawls's theories of justice, which are based, respectively, on Laplace's principle of insufficient reason and on Wald's maximin rule. We end by highlighting the virtues of Rawls's principle on practical grounds (it is intuitively attractive because of its computational simplicity, and so provides a salient point for convergence), and by connecting this argument to our moral intuitions and to social norms requiring prudence in decisions made for the sake of others.
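The two decision-under-ignorance rules contrasted here can diverge on very simple payoff matrices. The following toy sketch (my own payoffs, purely illustrative) applies both rules to the same choice: Laplace's principle treats all states as equally likely and maximises the average payoff, while Wald's maximin maximises the worst-case payoff.

```python
# Toy comparison (assumed payoffs) of two rules for decision under ignorance:
# Laplace's principle of insufficient reason vs Wald's maximin criterion.
# Each act maps to its payoffs across the possible states of the world.

def laplace_choice(options):
    # Treat every state as equally likely; pick the best average payoff.
    return max(options, key=lambda act: sum(options[act]) / len(options[act]))

def maximin_choice(options):
    # Pick the act whose worst-case payoff is best.
    return max(options, key=lambda act: min(options[act]))

options = {
    "risky": [0, 10, 10],  # great unless the worst state obtains
    "safe":  [4, 5, 6],    # modest but with a guaranteed floor
}

print(laplace_choice(options))  # "risky": average 20/3 beats 5
print(maximin_choice(options))  # "safe":  worst case 4 beats 0
```

The disagreement between the two outputs is the formal core of the Harsanyi/Rawls contrast the paper draws.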
Article
Full-text available
This paper develops a form of moral actualism that can explain the procreative asymmetry. Along the way, it defends and explains the attractive asymmetry: the claim that although an impermissible option can be self-conditionally permissible, a permissible option cannot be self-conditionally impermissible.
Article
I evaluate two contractualist approaches to the ethics of risk: mutual constraint and the probabilistic, ex ante approach. After explaining how these approaches address problems in earlier interpretations of contractualism, I object that they fail to respond to diverse risk preferences in populations. Some people could reasonably reject the risk thresholds associated with these approaches. A strategy for addressing this objection is considering individual risk preferences, similar to those Buchak discusses concerning expected-utility approaches to risk. I defend the risk-preferences-adjusted (RISPREAD) contractualist approach, which calculates a population’s average risk preference and permits risk thresholds below that preference, only.
Article
This essay argues that when setting climate policy, we should place more weight on worse possible consequences of a policy, while still placing some weight on better possible consequences. The argument proceeds by elucidating the range of attitudes people can take towards risk, how we must make choices for people when we don't know their risk-attitudes, and the situation we are in with respect to climate policy and the consequences for future people. The result is an alternative to the Precautionary Principle, an alternative that gives similar policy recommendations in many cases but is also sensitive to the costs of precautions.
Article
Full-text available
Prioritarianism is a moral view that ranks outcomes according to the sum of a strictly increasing and strictly concave transformation of individual well-being. Prioritarianism is ‘welfarist’ (namely, it satisfies axioms of Pareto Indifference, Strong Pareto, and Anonymity) as well as satisfying three further axioms: Pigou–Dalton (formalizing the property of giving greater weight to those who are worse off), Separability, and Continuity. Philosophical discussion of prioritarianism was galvanized by Derek Parfit’s 1991 Lindley Lecture. Since then, and notwithstanding Parfit’s support, a variety of criticisms of prioritarianism have been advanced: by utilitarians (such as John Broome and Hilary Greaves), egalitarians (such as Lara Buchak; Michael Otsuka and Alex Voorhoeve; Ingmar Persson; and Larry Temkin), and sufficientists (Roger Crisp). In previous work, we have each endorsed prioritarianism. This article sets forth a renewed defense, in the light of the accumulated criticisms. We clarify the concept of a prioritarian moral view (here addressing work by David McCarthy), discuss the application of prioritarianism under uncertainty (herein of ‘ex post’ and ‘ex ante’ prioritarianism), distinguish between person-affecting and impersonal justifications, and provide a person-affecting case for prioritarianism. We then describe the various challenges mounted against prioritarianism – utilitarian, egalitarian, and sufficientist – and seek to counter each of them.
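The prioritarian ranking and the Pigou–Dalton property it formalizes can be illustrated with a few lines of code. This is a minimal sketch with an assumed concave transform (the square root, a standard textbook choice, not one the authors commit to).

```python
# Minimal sketch of the prioritarian social ranking: outcomes are ranked by
# the sum of a strictly increasing, strictly concave transform of individual
# well-being. The square-root transform here is an illustrative assumption.
import math

def prioritarian_value(wellbeings):
    return sum(math.sqrt(w) for w in wellbeings)

def utilitarian_value(wellbeings):
    return sum(wellbeings)

before = [1.0, 9.0]  # unequal distribution
after = [4.0, 6.0]   # Pigou-Dalton transfer of 3 units toward the worse off

# Utilitarianism is indifferent (both sum to 10); prioritarianism strictly
# prefers the transfer, exhibiting the Pigou-Dalton axiom.
print(utilitarian_value(before), utilitarian_value(after))
print(prioritarian_value(before) < prioritarian_value(after))
```

Because the transform is concave, a unit of well-being moved from a better-off to a worse-off person always raises the prioritarian sum while leaving the utilitarian sum unchanged, which is exactly the "greater weight to the worse off" property the article defends.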
Article
This article argues that Lara Buchak’s risk-weighted expected utility (REU) theory fails to offer a true alternative to expected utility theory. Under commonly held assumptions about dynamic choice and the framing of decision problems, rational agents are guided by their attitudes to temporally extended courses of action. If so, REU theory makes approximately the same recommendations as expected utility theory. Being more permissive about dynamic choice or framing, however, undermines the theory’s claim to capturing a steady choice disposition in the face of risk. I argue that this poses a challenge to alternatives to expected utility theory more generally.
Article
John Rawls introduced the ‘veil of ignorance' in social contract theory to bring about a common conception of justice, and hypothesized that it would enable rational individuals to choose distributive shares on the basis of the ‘maximin' principle. R. E. Freeman conceptualised stakeholder fairness using the Rawlsian ‘veil of ignorance'. In contrast to Rawls's theory, John Harsanyi postulated that rational individuals behind the ‘veil of ignorance' would choose allocations so as to maximise expected utility. This article investigates how subjects choose allocations behind the ‘veil of ignorance' in a laboratory experiment, and interprets the findings in light of stakeholder fairness. The ‘veil of ignorance' was induced on randomly paired and mutually anonymous subjects, who were asked to choose allocations in a simultaneous-move discrete-choice Nash demand game. Subjects were found to use both the ‘maximin' principle and expected utility maximisation. The choice of allocations where no one is worse off vis-à-vis the status quo was salient, which is consistent with Freeman's Principle of Governance.
Article
Full-text available
Prioritarianism is a distinctive moral view. Outcomes are ranked according to the sum of concavely transformed well-being numbers—by contrast with utilitarianism, which simply adds up well-being. Thus, unlike utilitarians, prioritarians give extra moral weight to the well-being of the worse off. Unlike egalitarians, prioritarians endorse an axiom of person-separability: the ranking of outcomes is independent of the well-being levels of unaffected individuals. Unlike sufficientists, who give no priority to the worse-off if their well-being exceeds a “sufficiency” threshold, prioritarians always favor the worse-off in conflicts with those at higher well-being levels. Derek Parfit is prioritarianism’s most famous proponent. We have also defended prioritarianism. Not everyone is persuaded. Prioritarianism has been vigorously criticized, from a variety of perspectives, most visibly by John Broome, Campbell Brown, Lara Buchak, Roger Crisp, Hilary Greaves, David McCarthy, Michael Otsuka, Ingmar Persson, Shlomi Segall, Larry Temkin, and Alex Voorhoeve. In this Article, we answer the critics.
Article
There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk. And I focus on the alternative to the orthodoxy proposed by Lara Buchak’s risk-weighted expected utility theory. I will show that the orthodoxy can be made to accommodate all of the preferences that Buchak’s theory can accommodate.