Causes and Uncertainty: Rethinking Value in Expectation

This is the first post in Rethink Priorities’ Worldview Investigations Team’s CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is to consider some alternatives to expected value maximization for cause prioritization and, at the same time, to explore the practical implications of a commitment to expected value maximization. Here, we provide a preview of the entire sequence.

1. Introduction

We want to help others as much as we can. Knowing how is hard: there are many empirical, normative, and decision-theoretic uncertainties that make it difficult to identify the best paths toward that goal. Should we be focused on sparing children from malaria? Reducing suffering on factory farms? Mitigating the threats associated with AI? Should we split our attention between all three? Something else entirely?

Here are two common responses to these questions:

  • We ought to set priorities based on what would maximize expected value.

  • Expected value maximization (EVM) supports prioritizing existential risk (x-risk) mitigation over all else.

This post introduces a sequence from Rethink Priorities’ Worldview Investigations Team (WIT) that examines these two claims. The sequence highlights some reasons for skepticism about them both—reasons stemming from significant uncertainty about the correct normative theory of decision-making and uncertainty about many of the parameters and assumptions that enter into expected value calculations. It concludes with some practical resources for navigating uncertainty. What tools can we create to reason more transparently about tough questions? And at the end of the day, how should we make decisions?

Accordingly, this sequence is a contribution to the discussion about macro-level cause prioritization—i.e., how we split resources between global health and development, animals, and existential risk.[1] Before we go any further, though, let’s be clear: this sequence does not defend a specific split for any particular actor or group. Instead, it tries to clarify certain fundamental issues that bear on the split through the following series of reports.

The sequence has three parts. In the first, we consider alternatives to EVM.

  • We start by noting that some plausible moral theories avoid EVM entirely: they offer an entirely different way to set priorities. If the standard view is that effectiveness should be measured in something like counterfactual welfare gains per dollar spent, our report on contractualism and resource allocation illustrates how we might set priorities if we were to measure effectiveness in something like “strength-adjusted moral claims addressed per dollar spent.”

  • In our report on risk and animals, we examine several ways of incorporating risk sensitivity into the comparisons between interventions to help numerous animals with a relatively low probability of sentience (such as insects) and less numerous animals of likely or all-but-certain sentience (such as chickens and humans). We show that while one kind of risk aversion makes us more inclined to help insects, two other kinds of risk aversion suggest the opposite.

  • We generalize this discussion of risk in our report on risk-aversion and cause prioritization. Here, we model the cost-effectiveness of different cause areas in light of several formal models of risk aversion, evaluating how various risk attitudes affect value comparisons and how risk attitudes interact with one another.

The second part of the sequence examines the claim that EVM robustly favors x-risk mitigation efforts over global health or animal welfare causes.

  • In our report on the common sense case for spending on x-risk mitigation, we consider a simple model for assessing x-risk mitigation efforts where the value is restricted to the next few generations. We show that, given plausible assumptions, x-risk mitigation may not be orders of magnitude better than our best funding opportunities in other causes, especially when evaluated under non-EVM risk attitudes.

  • We then explore a more complicated hypothesis about the future, the so-called “time of perils” (TOP) hypothesis, that is commonly used to claim that x-risk is robustly more valuable than other causes. We delineate a number of assumptions that go into the TOP-based case for focusing on x-risk and highlight some of the reasons to be uncertain about them.

  • As we reflect on the time of perils and more general risk structures, we investigate the value of existential risk mitigation efforts under different risk scenarios, different lengths of time during which risk is reduced, and a range of population growth cases. This report shows that the value of x-risk work varies considerably depending on the scenario in question and that value is only astronomical under a select few assumptions. Insofar as we don’t have much confidence in any particular scenario, it’s difficult to have much confidence in any particular estimate of the value of x-risk mitigation efforts.

  • Finally, in our report on uncertainty over time and Bayesian updating, we note the difficulty of comparing estimates from models with wildly different levels of uncertainty or ambiguity. We provide an empirical estimate of how uncertainty increases as time passes, showing how a Bayesian may put decreasing weight on longer-term estimates.

All this work culminates in the third part of the sequence, where we introduce a tool for comparing causes and Rethink Priorities’ leadership discusses the practicalities of decision-making under uncertainty.

  • Accordingly, we present a cross-cause cost-effectiveness model (CCM), a tool for assessing the value of different kinds of interventions and research projects conditional on a wide range of assumptions. This resource allows users to specify distributions over possible values of parameters and see the corresponding distributions of results.

  • Finally, to make the upshot of the above more concrete, we consider how Rethink Priorities should make decisions. Written by RP’s Co-CEOs, this post comments on the theory and practice of setting priorities when your own resources are on the line.

With this quick summary behind us, we’ll provide a bit more detail about the major narrative of the sequence.

2. Making Comparisons

To compare actions that would benefit humans and chickens, we must find some common currency for comparing human and chicken welfare. To compare sure-thing investments versus moonshots with potentially great payoffs, we need to consider how to weigh probabilities when making decisions. To compare actions with near-term versus long-term consequences, we need to consider how we should take into account our uncertainty about the far future. In some of these cases, it can feel as though we are dealing with different kinds of value (human and animal welfare); in others, we may be dealing with the same kind of value, but it’s mediated by different factors (probability and time).

It’s difficult to know how to make comparisons across these apparent differences. Still, these comparisons are all the more pressing in a funding-constrained environment, where investments in any given cause come at the expense of investments in others.

A standard approach to making these comparisons is expected value maximization (EVM). The expected value (EV) of an action is the sum of the values of the action’s possible consequences, each weighted by the probability of that consequence coming to pass. EVM states that we ought to perform the action that has the highest EV (or one of the highest EV actions if there are ties).

This decision procedure tells us how to combine payoffs with our uncertainty about whether those payoffs will materialize. Moreover, it promises a way to compare very different kinds of actions. For instance, while we may not have a precise theory about how to compare human and chicken welfare, we may not need one. Instead, we can simply perform sensitivity tests: we can consider our upper- and lower-bound estimates of the value of chicken welfare to see whether our decision turns on our uncertainty. If it doesn’t, we’re in the clear.

In addition to its pragmatic appeal, EVM also has the backing of formidable philosophical arguments purporting to show that it is a uniquely rational way of making decisions.[2] However, EVM has some highly unintuitive consequences,[3] many of which stem from a simple fact: the highest EV option needn’t be one where success is at all likely, as reductions in the probability of an outcome can always be compensated for by proportional increases in its value. For instance, if we assume that value is proportional to the number of individuals affected, then as the number of individuals an action affects increases, its probability of success can shrink proportionally while the EV stays the same. As a result, actions that could affect large numbers of individuals need only have a small probability of success in order to have higher EV than more sure-thing actions that affect fewer individuals.
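
To make the structure concrete, here is a minimal sketch in Python of how EVM compares a near-certain, small-scale action against a low-probability, large-scale one. The actions, probabilities, and values are invented purely for illustration.

```python
# Minimal illustration of EV maximization with made-up numbers.
# Each action is a list of (probability, value) pairs over its possible outcomes.
actions = {
    # Near-certain benefit to 100 individuals.
    "sure thing": [(0.95, 100), (0.05, 0)],
    # 1-in-1,000 chance of benefiting 1,000,000 individuals.
    "moonshot": [(0.001, 1_000_000), (0.999, 0)],
}

def expected_value(outcomes):
    """Sum of outcome values weighted by their probabilities."""
    return sum(p * v for p, v in outcomes)

for name, outcomes in actions.items():
    print(name, expected_value(outcomes))
# sure thing: 95.0, moonshot: 1000.0 -- EVM picks the moonshot even though it
# fails 99.9% of the time, because scale compensates for the low probability.
```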

Consider two examples of this problem. First, humans and other large animals are vastly outnumbered by insects, shrimps, and other invertebrates. Since it is less likely that the latter creatures are sentient,[4] it’s less likely that actions to benefit them will result in any value at all. However, given the sheer number of these invertebrates, even a small probability of sentience will produce the result that actions that would benefit invertebrates have higher EV than actions that would benefit humans and other large animals. The result is (what Jeff Sebo calls) the “rebugnant conclusion,” according to which we ought to redirect aid from humans toward insects and shrimps.

Comparing future and present people provides another example where large numbers tend to dominate EV calculations. One way to justify working on x-risk mitigation projects is based on the many future people they might allow to come into existence. If the human species were to persist for, say, another million years, the number of future people would be much larger than it would be if humanity were only to survive for another few centuries. So, let’s consider an action with a low probability of ensuring that humanity lasts another million years rather than a few hundred years. The value of success would be far greater than the value of a successful action that affects a relatively small number of present people. Therefore, actions that have a low probability of bringing about a positive, population-rich future will tend to have higher expected values than more sure-thing actions that affect only (or primarily) present people.

You might already think that going all-in on shrimps or x-risk work is fanatical. (A view that’s compatible with wanting to spend significant resources on these causes!) If you don’t, then we can push this reasoning further. Suppose you’re approached by someone who claims to have magical powers and promises you an astronomical reward tomorrow in exchange for your wallet today. Given the potential benefits, EVM suggests that it would be irrational not to fork over your wallet even if you assign a minuscule (but non-zero) probability to their claims (i.e., you’re vulnerable to “Pascal’s mugging”). If there is some chance that panpsychism is true, where even microparticles are sentient, then we may need to devote all resources to improving their welfare, given how many of them there are. If having access to limitless free energy or substrates for digital minds would allow us to maximize the amount of value in the world, then research on those topics may trump all other causes. If we follow EVM to its logical end, then it’s rational to pursue actions that have the tiniest probabilities of astronomical value instead of actions that have sure but non-astronomical values.

This isn’t news. Indeed, some key figures in EA have expressed doubts about EVM for just these sorts of reasons. “Maximization is perilous,” Holden Karnofsky tells us, encouraging us not to take gambles that could result in serious harm. And as Toby Ord observes, “there are different attitudes you can take to risk and different ways that we can conceptualize what optimal behavior looks like in a world where we’re not certain of what outcomes will result from our actions… [Some risk-averse alternatives to EVM] have some credibility. And I think we should have some uncertainty around these things.” Presumably, therefore, at least some people in EA are open to decision theories that depart from EVM in one way or another.

To date, though, there hasn’t been much discussion of alternatives to EVM. It’s valuable, therefore, to explore them. There are at least two ways to challenge EVM. First, we might question its strict risk neutrality. Decision theorists have offered competing accounts of decision-making procedures that incorporate risk sensitivity. We explore three different kinds of reasonable risk aversion, showing that they yield significantly different results about which causes we ought to prioritize. Second, we might give some credence to a moral theory in which our responsibilities to others often prohibit us from maximizing value, such as contractualism.

Of course, many EAs are quite committed to EVM. And many EAs appear to be convinced that EVM has a specific practical implication: namely, that we ought to focus our resources on mitigating x-risk. If we stick with EVM, do we get that result? We’re skeptical, as there are many controversial assumptions required to secure it. Since expected values are highly sensitive to changes in our credences, the results of EVM are far less predictable than they have been assumed to be.

3. Beyond EV maximization

Our sequence begins by illustrating the importance of our moral theory in assessing what we ought to do. EVM fits naturally with welfare consequentialism, according to which the moral worth of an action is determined by its consequences for overall well-being. Even among philosophers, though, consequentialism is a minority view. Adopting a different moral theory might lead to quite different results. In our contractualism report, we consider the implications of one prominent alternative moral theory for cause prioritization. In brief, contractualism says that morality is about what we can justify to those affected by our actions. We argue that this theory favors spending on the surest global health and development (GHD) interventions over x-risk work and probably over most animal work even if the latter options have higher EV. Insofar as we place some credence in contractualism, we should be less confident that the highest EV action is thereby the best option. Even if you aren’t inclined toward contractualism, the point stands that uncertainty over the correct theory of morality casts doubt on a strategy of always acting on the recommendations of EVM.

We then turn to the second reason for giving up on EVM: namely, making room for some kind of risk aversion. EVM is sensitive to probabilities in only one way: the probabilities of outcomes are multiplied by the value of those outcomes, the results of which are summed to yield the expected value. However, agents are often sensitive to probabilities in many other ways, many of which can be characterized as types of risk aversion. The EV calculations that we mentioned in the previous section (regarding invertebrates and x-risk) involve several kinds of uncertainty. First, some outcomes have a low probability of occurring. Second, there is uncertainty about how much our actions will change the relevant probabilities or values. Third, there is uncertainty about the probabilities and values that we assign to the various outcomes in question. We identify three corresponding kinds of risk-averse attitudes: risk aversion with respect to outcomes; risk aversion with respect to the difference our actions make; and aversion to ambiguous probabilities. You can be averse to any or all of these kinds of uncertainty.

In our risk and animals report, we motivate sensitivity to these kinds of risk by using them to diagnose disagreements about the value of actions that would benefit humans compared to actions to help more numerous animals with relatively low probabilities of sentience (e.g., shrimps and insects). Here, the key uncertainties concern the sentience of the individuals benefited. If shrimps are sentient, then benefitting them has enormous payoffs, given how much more numerous they are. However, if shrimps aren’t sentient, we’ll be wasting our money on organisms without conscious experiences—and for whom, therefore, things can’t go better or worse. Because EVM is risk-neutral, it suggests that the gamble is worthwhile, so we ought to direct our charitable giving toward shrimps over people. We introduce three approaches to risk, including a novel form of risk aversion about the difference that our actions make. We argue that while one prominent form of risk aversion tells in favor of the rebugnant conclusion, other reasonable forms of risk aversion tell in favor of helping creatures of more certain sentience.[5]

We use this test case to motivate a more general discussion of risk. In our risk-aversion and cause prioritization report, we evaluate how various risk attitudes affect comparisons between causes and how risk attitudes interact with one another.

First, an agent who is risk-averse with respect to outcomes is motivated to avoid the worst-case states of the world more than she’s motivated to obtain the best possible ones. She places more decision weight on the potential bad outcomes of a decision and less weight on the good ones. There are several well-studied and well-motivated formal models of this kind of risk aversion. Recall that strict EVM treats changes in probabilities and values symmetrically. Risk-sensitive procedures break this symmetry, albeit in different ways. For example, in Buchak’s (2013) Risk-Weighted Expected Utility (REU), a risk weighting is applied to the probabilities of outcomes, such that the probabilities of worse outcomes are adjusted upward and those of better outcomes are adjusted downward.[6]
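
As a rough illustration of how such a weighting changes the verdicts of plain EV, here is a minimal sketch of a Buchak-style rank-dependent calculation, using the squared-probability risk function mentioned in the footnote. The gamble and its numbers are invented for illustration.

```python
def risk_weighted_eu(outcomes, r=lambda p: p ** 2):
    """Buchak-style risk-weighted expected utility.

    `outcomes` is a list of (probability, utility) pairs. The risk function r
    is applied to the probability of doing at least that well; with r(p) = p**2
    (risk-averse), improvements that arrive only with low probability are
    down-weighted relative to plain expected utility.
    """
    outcomes = sorted(outcomes, key=lambda pv: pv[1])  # worst to best
    utilities = [u for _, u in outcomes]
    reu = utilities[0]
    for i in range(1, len(outcomes)):
        prob_at_least_this_good = sum(p for p, _ in outcomes[i:])
        reu += r(prob_at_least_this_good) * (utilities[i] - utilities[i - 1])
    return reu

gamble = [(0.001, 1_000_000), (0.999, 0)]
print(risk_weighted_eu(gamble, r=lambda p: p))       # 1000.0: identity weighting recovers plain EV
print(risk_weighted_eu(gamble, r=lambda p: p ** 2))  # 1.0: the moonshot's appeal collapses
```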

Similarly, when applying Bottomley and Williamson’s (2023) iteration of Weighted-Linear Utility Theory (WLU) under neutral assumptions about the present state of the world, larger amounts of value created are adjusted downward, such that outcomes involving very large amounts of value must have correspondingly higher probabilities in order to compensate for even small probabilities of causing large amounts of harm. Risk aversion in this sense makes us even more inclined to favor work to reduce x-risk (since we’re even more motivated to avoid catastrophe) and more inclined to favor insects over people (since trillions of insects suffering would be really bad). However, it would also make us less inclined toward fanatical moonshots with very low probabilities of astronomical value, since these extremely positive outcomes are given increasingly less weight the more risk-averse you are.

A second kind of risk aversion concerns difference-making. An agent who’s risk-averse in this sense is motivated to avoid actions that cause things to be worse or that do nothing. She assigns decision weights not merely on the basis of the overall state of the world that results from her action but on the difference that her action makes. We believe this kind of risk aversion is both common and undertheorized.[7] Difference-making risk aversion (DMRA) makes us hesitant to undertake actions that have a high probability of inefficacy or of making things worse. As a result, difference-making risk-weighted EV tends to favor helping humans over helping shrimps and helping actual rather than merely possible people.
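
The formal options are explored in the report itself; purely as an illustration of the general idea, one simple way to represent difference-making risk aversion is to apply a risk-weighted evaluation (reusing the risk_weighted_eu sketch above) to the distribution of differences an action makes relative to doing nothing. The states, probabilities, and values below are invented.

```python
# Hypothetical states of the world with probabilities, and the value that
# results in each state with and without the intervention (made-up numbers).
states = [
    # (probability, value if we act, value if we don't)
    (0.10, 120, 100),   # the intervention helps
    (0.85, 100, 100),   # the intervention does nothing
    (0.05,  90, 100),   # the intervention backfires
]

# The difference-making perspective evaluates the distribution of differences.
differences = [(p, v_with - v_without) for p, v_with, v_without in states]

# Plain expected difference vs. a risk-weighted evaluation of the same
# distribution (risk_weighted_eu is defined in the sketch above).
print(sum(p * d for p, d in differences))                # 1.5: positive in expectation
print(risk_weighted_eu(differences, r=lambda p: p ** 2)) # negative: the small chance of
                                                         # making things worse dominates
```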

Lastly, we might be risk-averse about ambiguous or uncertain probabilities (Ellsberg 1961). An ambiguity-averse agent prefers bets where she has good evidence about the probabilities of outcomes and is confident in the probabilities she assigns; she avoids bets where the probabilities she assigns are largely reflections of her ignorance. There are several strategies for penalizing ambiguity. Regardless of which we choose, risk aversion in this sense will penalize actions for which there is high variance in plausible probability assignments and for which EV results are highly sensitive to this variance.
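
One well-known family of such strategies evaluates a bet under every probability assignment the agent considers plausible and then gives extra weight to the worst case (an alpha-maxmin or Hurwicz-style rule). The sketch below is only an illustration of that general idea; the payoff, probability ranges, and alpha are invented.

```python
def alpha_maxmin_ev(value_if_success, plausible_probs, alpha=0.8):
    """Evaluate a bet whose success probability is ambiguous.

    `plausible_probs` is the set of probability assignments the agent regards
    as plausible; alpha is the weight on the worst case (alpha = 1 is pure
    maxmin, alpha = 0.5 averages the best and worst cases).
    """
    evs = [p * value_if_success for p in plausible_probs]
    return alpha * min(evs) + (1 - alpha) * max(evs)

# Well-evidenced probability (tight range) vs. ambiguous probability (wide range),
# even though the central estimate is 0.10 in both cases.
print(alpha_maxmin_ev(1_000, [0.09, 0.10, 0.11]))   # 94.0: close to the plain EV of 100
print(alpha_maxmin_ev(1_000, [0.001, 0.10, 0.30]))  # 60.8: the ambiguity is penalized
```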

4. Uncertainty about the effects of our actions

All these points aside, suppose we go in for EVM. Another aim of this sequence is to explore whether there’s a quick route from EVM to the view that x-risk work is clearly more cost-effective than all other opportunities.

Let’s again consider how EV calculations work. An EV calculation considers a partition of states that we take to be relevant to our decision-making. For each candidate action, we then evaluate how good or bad things would be if we took that action and ended up in each of those states. Finally, we ask what the probabilities of those states would be if we took each candidate action. For example, if you are deciding whether to study for an exam, two states would be relevant: either you will pass or you won’t. You consider what it would be like if you study and pass, study and fail, don’t study and pass, or don’t study and fail. Lastly, you estimate the effect of studying on your probability of passing or failing: that is, how much more likely you are to pass if you study than if you don’t. You combine these to yield the EV of studying and the EV of not studying. Then, you compare the results and make a decision.
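
With invented numbers, the exam calculation might look like this:

```python
# Made-up numbers for the exam example.
value = {"pass": 100, "fail": 0}
cost_of_studying = 10                       # the evening you give up
p_pass = {"study": 0.9, "dont_study": 0.5}  # your estimated probabilities of passing

for action, p in p_pass.items():
    ev = p * value["pass"] + (1 - p) * value["fail"]
    if action == "study":
        ev -= cost_of_studying
    print(action, ev)
# study: 80.0, dont_study: 50.0 -> studying has the higher EV.
```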

Cases like deciding whether to study for an exam are fairly treated as “small world” decision problems. We consider only the near-term and direct effects that our actions would have, and these effects are relatively small. For example, when deciding whether to study, I don’t consider the effect that my passing the exam would have on the likelihood of global nuclear war. We typically have good purchase on the probabilities and the values involved in such a decision problem. However, that’s often far from true when we evaluate the long-run, indirect effects of our actions, as in the case of x-risk mitigation efforts.

With this in mind, there’s something appealing about simple models for assessing the value of x-risk mitigation efforts: we know that there are enormous uncertainties at every turn; so, if we can make a “common sense” case for working on x-risk, drawing only on premises about which we are confident, then perhaps we can ignore all the other complexities.

So, for instance, someone might want to argue that spending on x-risk mitigation is orders of magnitude better than spending on animals or global health even if the future isn’t enormous. Instead, the thought might be, we can get a winning EV for x-risk interventions just based on the interests of a few generations. However, in our report on the common sense case for x-risk, we show that, when you run the numbers, you don’t necessarily get that result. Instead, we see that plausible x-risk mitigation efforts have an EV competitive with animal causes and are likely no more than an order of magnitude more cost-effective than global health interventions. What’s more, high-risk, high-EV existential risk interventions probably don’t hold up under some plausible risk-sensitive attitudes.

Someone might argue that a fatal flaw of the common-sense view is that it doesn’t care about the long-run future, and it’s precisely the long-run future that’s supposed to make x-risk mitigation efforts so valuable. In order to predict the long-term value that our actions will create, though, we need to make substantive hypotheses about the causal structure of the world and what the future would be like if we didn’t act at all. Again, as noted above, the value of investing in AI alignment is much smaller if there will be a nuclear war in the near future. It’s much greater if AI alignment would lower the probability of nuclear war.

Arguments that AI risk mitigation actions have higher EV than alternatives often assume a particular view about the future, the so-called “time of perils” hypothesis. On this view, we are currently in a period of very high existential risk, including risk from AI. However, if we get transformative aligned AI, we’ll have access to a resource that will allow us to address the other existential threats we face. Thus, if we survive long enough to secure transformative aligned AI, the value of the long-run future is likely to be extremely large. In our report on the time of perils-based case for x-risk’s dominance, we show that many premises are probably required to make this story work—so many, in fact, and so uncertain in each case, that the probability of its coming to pass is low enough that betting on x-risk for this reason may amount to fanaticism.
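
To see why a long conjunction of individually plausible premises can still be improbable, suppose, purely for illustration, that the argument required ten independent premises and that each deserved 80% confidence:

```python
# Ten independent premises, each 80% likely, jointly hold only about 11% of the time.
print(0.8 ** 10)  # ~0.107
```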

As we reflect on the many other possible futures, it becomes important to model alternative risk trajectories. In our report on the value of extinction risk mitigation efforts, we investigate the value of such efforts under more realistic assumptions, like sophisticated risk structures, variable persistence, and different cases of value growth. This report extends the model developed by Ord, Thorstad, and Adamczewski. By enriching the base model, we are able to perform sensitivity analyses and can better evaluate when extinction risk mitigation could, in expectation, be overwhelmingly valuable, and when it is comparable to or of lesser value than the alternatives. Crucially, we show that the value of x-risk work varies considerably with different scenario specifications. Insofar as we don’t have much confidence in any one scenario, we shouldn’t have much confidence in any particular estimate of the value of extinction risk mitigation efforts.
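
The models in the report are much richer than anything we can reproduce here, but a toy calculation, assuming a constant value per century and a stylized contrast between a persistent-risk scenario and a time-of-perils scenario (all numbers invented), shows why the scenario specification matters so much:

```python
def ev_future(risks, v_per_century=1.0):
    """Expected value of the future, given a list of per-century extinction risks."""
    ev, p_survive = 0.0, 1.0
    for r in risks:
        p_survive *= (1 - r)          # humanity must survive a century to enjoy its value
        ev += p_survive * v_per_century
    return ev

horizon = 100_000                      # centuries considered (an arbitrary cutoff)
scenarios = {
    "risk stays high": [0.002] * horizon,
    "time of perils":  [0.002] * 2 + [0.000_001] * (horizon - 2),
}

for name, risks in scenarios.items():
    safer = [risks[0] - 0.0001] + risks[1:]   # a one-off reduction of this century's risk
    print(name, ev_future(safer) - ev_future(risks))
# The very same one-off risk reduction is worth far more under the time-of-perils
# scenario, because far more expected future survives to benefit from it.
```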

We complete our discussion of uncertainty in our report on uncertainty over time and Bayesian updating. When deciding between actions with short-term versus long-term impacts, there’s a balance between certainty and expected value. Predictions of short-term impacts are more certain but might offer less value, while long-term impacts can have higher expected value but are grounded in less certain predictions. This report provides an empirical analysis of how uncertainty grows over a 1-20 year range using data from development economics RCTs. Through statistical predictions and Bayesian updating models, we demonstrate how uncertainty gradually increases over time and how, as a result, the expected value of long-term impacts diminishes as the forecast time horizon lengthens.
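
As a stylized illustration of the mechanism rather than the report’s actual data or model, consider a standard normal-normal Bayesian update in which the variance of a forecast grows with its time horizon; the prior, point estimate, and variance schedule below are all invented:

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Precision-weighted average of a prior and a noisy estimate (normal-normal update)."""
    w = (1 / estimate_var) / (1 / estimate_var + 1 / prior_var)
    return w * estimate + (1 - w) * prior_mean

prior_mean, prior_var = 0.0, 1.0      # skeptical prior about impact
claimed_impact = 10.0                 # the model's point estimate, whatever the horizon

for years in (1, 5, 10, 20):
    estimate_var = 0.5 * years        # assume forecast variance grows with the horizon
    print(years, posterior_mean(prior_mean, prior_var, claimed_impact, estimate_var))
# 1yr: ~6.7, 5yr: ~2.9, 10yr: ~1.7, 20yr: ~0.9 -- the same claimed impact receives
# progressively less weight as the forecast horizon (and hence uncertainty) grows.
```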

The upshot of these reports is fairly simple. There clearly are sets of assumptions where x-risk mitigation work has higher EV than the standard alternatives. However, there are other assumptions on which it doesn’t. And more generally, it’s difficult to make the case that it clearly beats all animal and global health work without relying on very low probability outcomes having large values.

5. Cross-Cause Cost-Effectiveness Model and How RP Should Make Decisions

Having considered the impact of risk aversion and explored the case for work on x-risk mitigation, we introduce our cross-cause cost-effectiveness model (CCM). This tool allows users to compare interventions like corporate animal welfare campaigns with work on AI safety, direct cash transfers with attempts to reduce the risk of nuclear war, and so on. Of course, the outputs depend on a host of assumptions, most of which we can’t explore in this sequence. We provide some views as defaults, but invite users to input their own. Furthermore, we allow users to input distributions of possible values to make it possible to see how uncertainties about parameters translate into uncertainties about results. So, this tool is not intended to settle many questions about resource allocation. Instead, it’s designed to help the community reason more transparently about these issues.
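
The CCM itself is far richer than this, but the core idea of propagating parameter distributions into result distributions can be sketched as a simple Monte Carlo simulation; the parameters, distributions, and numbers below are invented and are not the CCM’s:

```python
import random

random.seed(0)

def sample_cost_effectiveness():
    """One Monte Carlo draw of an intervention's value per dollar (made-up parameters)."""
    prob_success = random.betavariate(2, 8)          # uncertain chance the intervention works
    value_if_success = random.lognormvariate(5, 1)   # uncertain scale of benefit if it does
    cost = random.uniform(80_000, 120_000)           # uncertain cost
    return prob_success * value_if_success / cost

draws = sorted(sample_cost_effectiveness() for _ in range(10_000))
print("5th percentile: ", draws[500])
print("median:         ", draws[5_000])
print("95th percentile:", draws[9_500])
# Rather than a single point estimate, the user sees how uncertainty about the
# parameters translates into a distribution over cost-effectiveness results.
```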

Finally, we turn to the hyper-practical. With all these uncertainties in mind, how should Rethink Priorities make decisions? Written by RP’s co-CEOs, this post comments on the theory and practice of setting priorities when your own resources are on the line.

Acknowledgments

The post was written by Bob Fischer and Hayley Clatterbuck. It’s a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you’re interested in Rethink Priorities’ work, please consider subscribing to our newsletter. You can explore our completed public work here.

  1. ^

    Though it’s an open question whether it’s useful to carve up the space of interventions this way, this sequence won’t question the division.

  2. ^

    For arguments in favor of EV maximization, see Carlsmith (2022).

  3. ^

    See Beckstead (2013, Chapter 6). For a defense of fanaticism, as well as an explanation of why EV maximization leads to it, see Wilkinson (2021). For an argument for the moral importance of EVM relative to one kind of risk aversion, see Greaves et al. (2022).

  4. ^

    That is, having the capacity for valenced, phenomenally conscious experiences such that they can suffer or feel pleasure.

  5. ^

    This has consequences for fanaticism as well. Suppose there are some things we could do that have a small probability of creating new kinds of individuals (such as digital minds) that are capable of astronomical value. EVM leads to the fanatical result that we ought to prioritize these projects over ones to help people that currently exist or are likely to exist (Wilkinson 2022). Uncertainty about whether our actions could create such individuals and whether these individuals would indeed be morally considerable should cause some risk-averse agents to discount fanatical outcomes.

  6. ^

    For example, one possible REU weighting squares the probabilities of better outcomes. Therefore, better-case outcomes need to be disproportionately better in order to compensate for the reduced weight they receive.

  7. ^

    Greaves et al. discuss difference-making risk aversion and provide several arguments against it.