Expected value theory is fanatical, but that’s a good thing

I recently wrote a philosophy paper that might be of interest to some EAs. The introduction is copied below, and the full paper is available here.


Suppose you face the following moral decision.

Dyson’s Wager

You have $2,000 to use for charitable purposes. You can donate it to either of two charities.

The first charity distributes bednets in low-income countries in which malaria is endemic. With an additional $2,000 in their budget, they would prevent one additional death from malaria in the coming year. You are certain of this.

The second organisation does speculative research into how to do computations using ‘positronium’, a form of matter which will be ubiquitous in the far future of our universe. If our universe has the right structure (which it probably does not), then in the distant future we may be able to use positronium to instantiate all of the operations of human minds living blissful lives, and thereby allow morally valuable life to survive indefinitely. (Footnotes omitted—see full text.) From your perspective as a good epistemic agent, there is some tiny, non-zero probability that, with (and only with) your donation, this research would discover a method for stable positronium computation and would be used to bring infinitely many (or just arbitrarily many) blissful lives into existence.

What ought you do, morally speaking? Which is the better option: saving a life with certainty, or pursuing a tiny probability of bringing about arbitrarily many future lives?

A common view in normative decision theory and the ethics of risk—expected value theory—says that it’s better to donate to the speculative research. Why? Each option has some probability of bringing about each of several outcomes, and each of those outcomes has some value, specified by our moral theory. Expected value theory says that the best option is whichever one has the greatest probability-weighted sum of value—the greatest expected value (distinct from expected utility—see footnotes). Here, the option with the greatest expected value is donating to the speculative research (at least on certain theories of value—more on those in a moment). So, plausibly, that’s what you should do.
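To make the calculation concrete, here is a minimal sketch of the expected-value comparison. All figures (the probability, the number of lives) are illustrative assumptions of mine, not numbers from the paper, and one life saved or created is treated as one unit of value purely for simplicity:

```python
# A minimal sketch of the expected-value comparison in Dyson's Wager.
# All figures are illustrative assumptions, not taken from the paper.
# Simplifying assumption: one life saved or created = one unit of value.

p_success = 1e-10          # assumed probability that the research pays off
lives_if_success = 1e15    # assumed number of blissful lives created

ev_bednets = 1.0                            # one life saved, with certainty
ev_research = p_success * lives_if_success  # probability-weighted sum of value

print(f"EV(bednets)  = {ev_bednets}")       # EV(bednets)  = 1.0
print(f"EV(research) = {ev_research}")      # EV(research) = 100000.0
```

On these made-up numbers, expected value theory favours the research by five orders of magnitude, even though the donation almost certainly achieves nothing.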

This verdict is counterintuitive to many. All the more counterintuitive is that, by expected value theory’s lights, it’s still better to donate to the speculative research no matter how low the probability of success is (short of being 0). For instance, the odds of your donation actually making the research succeed could be 1 in 10^100. (10^100 is greater than the number of atoms in the observable universe.) The chance that the research yields nothing at all would then be 99.99… percent, with another 96 nines after that. And yet, the theory says, you ought to take the bet, despite its being almost guaranteed to turn out worse than the alternative; despite the fact that you will almost certainly have let a person die for no actual benefit. Surely not, says my own intuition. On top of that, suppose that $2,000 spent on preventing malaria would save more than one life. Suppose it would save a billion lives, or any enormous finite number of lives. Expected value theory would say that it’s still better to take the risky bet—that it would be better to risk those billion or more lives for a minuscule chance at much greater value. But endorsing that verdict, regardless of how low the probability of success and how high the cost, seems fanatical.
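To spell out the arithmetic behind that last claim (again measuring value in lives, purely for illustration): with success probability 10^-100, the expected value of the risky donation is its payoff V discounted by those odds, so it beats even a guaranteed billion lives whenever V is large enough:

$$\mathbb{E}[\text{research}] = 10^{-100} \cdot V \;>\; 10^{9} = \mathbb{E}[\text{bednets}] \quad\Longleftrightarrow\quad V > 10^{109}.$$

And, as the next paragraph explains, some axiologies allow outcomes that good.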

That verdict does depend on more than just our theory of instrumental rationality, expected value theory. It also requires that our moral theory endorses totalism: that the ranking of outcomes can be represented by a total (cardinal) value of each outcome; and that this total value increases linearly, without bound, with the sum of value in all lives that ever exist. Then the outcome containing vastly more blissful lives is indeed a much better one than that in which one life is saved. And, as we increase the number of blissful lives, we can increase how much better it is without bound. No matter how low the probability of those many blissful lives, there can be enough such lives that the expected total value of the speculative research is greater than that of malaria prevention. But this isn’t a problem unique to totalism. When combined with expected value theory, analogous problems face most competing views of value (axiologies), including: averageism, pure egalitarianism, maximin, maximax, and narrow person-affecting views. Those axiologies all allow possible outcomes to be unboundedly good, so it’s easy enough to construct cases like Dyson’s Wager for each. I’ll focus on totalism here for simplicity, and also because it seems to me far more plausible than the others. But suffice it to say that just about any plausible axiology can deliver fanatical verdicts when combined with expected value theory.
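As a rough formalisation of totalism (my notation, not necessarily the paper’s): the value of an outcome o is the sum of the welfare levels w_i of all lives that ever exist in it,

$$V_{\mathrm{total}}(o) = \sum_{i \,\in\, \mathrm{lives}(o)} w_i,$$

which grows linearly and without bound as further lives of positive welfare are added.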

A little more generally, we face fanatical verdicts if our theory of instrumental rationality (in conjunction with our theory of value) endorses the principle below, Fanaticism. To avoid fanatical verdicts, it must, at minimum, reject Fanaticism.

Fanaticism: For any (finite) probability ε > 0 (no matter how low), and for any finite value v on a cardinal scale, there is some value V large enough that we are rationally required to choose the lottery L_risky over L_safe.
L_risky: (an outcome with) value V with probability ε; value 0 otherwise
L_safe: value v with probability 1
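It is easy to see why expected value theory entails Fanaticism. Given the definitions above, the expected values of the two lotteries are εV and v respectively, so

$$\mathbb{E}[L_{\text{risky}}] = \varepsilon V \;>\; v = \mathbb{E}[L_{\text{safe}}] \quad\text{whenever}\quad V > \frac{v}{\varepsilon},$$

and for any fixed ε > 0 and finite v, some finite V exceeds v/ε.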

The comparison of lotteries L_risky and L_safe resembles Dyson’s Wager: one option gives a slim chance of an astronomical value V; the other, a certainty of some modest value v. V need not be infinite, in case you think infinite value impossible. But v must be finite. Fanaticism in this form implies the fanatical verdict in Dyson’s Wager, if we choose sufficiently many (perhaps infinitely many) blissful lives. By contraposition, to reject the fanatical verdict in Dyson’s Wager, we must reject Fanaticism.

You might think it easy enough to reject Fanaticism. Many philosophers have done so, in the domains of both practical reason and moral decision-making. For instance, Bostrom (2009) presents a compelling reductio ad absurdum for all fanatical views in the prudential context. Bostrom (2011), Beckstead (2013), and Askell (2019) treat (a weak form of) Fanaticism as itself a reductio for theories in the moral context. Others propose theories of rationality on which we simply ignore small enough probabilities (e.g., D’Alembert 1761; Buffon 1777; Smith 2014; Monton 2019). And Tarsney (n.d.) goes to great lengths to develop a theory resembling expected value theory which specifically avoids Fanaticism.

Meanwhile, there are few defenders of Fanaticism. No philosopher I know of has explicitly defended it in print. And few philosophers have defended fanatical verdicts in cases like Dyson’s Wager, with the exception of Pascal (1669) himself and those who, I suspect reluctantly, endorse his conclusion. Even they accept such verdicts only as a consequence of expected value theory, not because they have good independent justification. I suspect that, even to most who endorse them, fanatical verdicts are seen as unfortunate skeletons in the closet of expected value theory.

I think this situation is unfortunate. We have good reason to accept Fanaticism beyond just expected value theory: as I hope to show, there are compelling arguments in its favour in the moral context, and those arguments reveal that rejecting Fanaticism brings disturbing implications of its own.

The paper proceeds as follows. Section 2 addresses common motivations for rejecting Fanaticism. Section 3 introduces the formal framework needed for what follows. Sections 4 through 6 present arguments in favour of Fanaticism, each premised on weaker claims than expected value theory, and each (by my reckoning) more compelling than the last. The first is a continuum argument. The second is driven by the modest assumption that we can put at least some value on each lottery we face. The third shows that, to deny Fanaticism, we must accept either ‘scale-inconsistency’ or an absurd sensitivity to small differences in probability, both of which are implausible. And the final nail in the coffin is what I will call the Indology Objection (a cousin of Parfit’s classic Egyptology Objection), by which those who deny Fanaticism must make judgements that appear deeply irrational. Section 7 concludes.


Read the rest here.