Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem

This is the second post in a series. The first post is here and the third post is here. The fourth post is here and the fifth post is here.

3. Defusing the initial worry

The previous section laid out the initial worry: Since Pascal’s Mugger’s argument is obviously unacceptable, we should reject the long-termists’ argument as well; expected utility arguments are untrustworthy. This section articulates an important difference between the long-termist’s proposal and the Mugger’s, and elaborates on how expected-utility reasoning works.

The initial worry can be defused. There are pretty good arguments to be made that the expected utility of giving money to the Mugger is not larger than the expected utility of giving money to the long-termist, and indeed that the expected utility of giving money to the Mugger is not high at all, relative to your other options.

Think about all the unlikely things that need to be true in order for giving the Mugger $5 to really save 10^10^10^10 lives. There have to be other dimensions that are accessible to us. They have to be big enough to contain that many people, yet easily traversable so that we can affect that many people. The fate of all those people needs to depend on whether or not a certain traveler has $5. That traveler has to pick you, of all people, to ask for the money. There has to be no time to explain, nor any special gadget or ability the traveler can display to convince you. Etc.

Now consider another thing that could happen. You give the $5 to AMF instead, and your donation saves a child’s life. The child grows up to be a genius inventor, who discovers a way to tunnel to different dimensions. Moreover, the dimensions thus reached happen to contain 10^10^10^10 lives in danger, which we are able to save using good ol’ Earthling ingenuity…

It’s not obvious that you are more likely to save 10^10^10^10 lives by donating to the Mugger than you are to save that many lives by donating to AMF. Each possibility involves a long string of very unlikely events. The case could be made that the AMF scenario is more probable than the Mugger scenario; failing that, it’s at least not obvious that the Mugger scenario is more probable—and hence, it’s not obvious that the expected utility of paying the Mugger is higher than the expected utility of donating to AMF.

These considerations generalize. Thus far we have ignored the fact that each action has multiple possible outcomes. But really, each possible action should be represented on the chart by a big set of stars, one for each possible outcome of that action:

Each action has a profile of possible outcomes. The expected utility of an action is the sum of the expected utilities of all its possible outcomes:

EU(A) = Σ_{i in O} P(i,A) × U(i,A)

Here A is the action we are contemplating, i ranges over all possible outcomes in O, P(i,A) is the probability of getting outcome i, supposing we do A, and U(i,A) is the utility of getting outcome i, supposing we do A.
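To make the formula concrete, here is a minimal Python sketch of an expected-utility calculation over an action’s profile of outcomes. The profiles and numbers below are invented purely for illustration; they are not estimates from this post.

```python
# Minimal illustration of EU(A) = sum over outcomes i in O of P(i, A) * U(i, A).
# All probabilities and utilities below are made-up numbers for illustration only.

def expected_utility(profile):
    """profile: list of (probability, utility) pairs, one per possible outcome."""
    return sum(p * u for p, u in profile)

# Hypothetical outcome profiles for two actions (remaining probability mass is
# lumped into a "nothing notable happens" outcome with utility 0).
donate_amf = [
    (0.50, 1.0),    # saves roughly one life
    (0.49, 0.0),    # nothing notable happens
    (0.01, 10.0),   # unusually good outcome
]
pay_mugger = [
    (1e-30, 1e9),           # the Mugger is telling the truth: tiny probability, huge utility
    (1.0 - 1e-30, -0.001),  # you are simply out $5
]

print(expected_utility(donate_amf))  # about 0.6
print(expected_utility(pay_mugger))  # about -0.001: the tiny chance of a huge payoff
                                     # does not dominate with these made-up numbers
```

The only point of the sketch is mechanical: each possible outcome contributes its probability-weighted utility, and the action’s expected utility is the total over the whole profile.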

Thus, to argue that the expected utility of giving him $5 is higher than the expected utility of giving the money to AMF, the Mugger would need to give us some reason to think that the profile of possible outcomes of the former adds up to something more than the profile of possible outcomes of the latter. He has utterly failed to do this—even the sort of outcome he draws our attention to, the outcome in which you save 10^10^10^10 lives, is arguably more likely to occur via donating to AMF.

Can we say the same about the long-termist? No. Again, using AMF and x-risk as an example, it’s possible that we would save a child from malaria who would then grow up to prevent human extinction. But it’s much less probable that we’ll prevent human extinction via saving a random child than we will by trying to do it directly—the chain of unlikely events is long in both cases, but clearly longer in the former.

So the initial worry can be dispelled. Even though the probability that the Mugger is telling the truth is much larger than 0.1^10^10^10, this isn’t enough to make donating to him attractive to an expected-utility maximizer. In fact, we have no particular reason to think that donating to the Mugger has higher expected utility than donating to e.g. AMF, and arguably the opposite is true: by donating to AMF, we stand an even higher chance of achieving the sort of outcome that the Mugger promises. Moreover, we can’t say the same thing about the long-termist; we really are more likely to prevent human extinction by donating to a reputable organization working on it directly than by donating to AMF. So we’ve undermined the Mugger’s argument without undermining the long-termist’s argument.

You may not be satisfied with this solution—perhaps you think there is a deeper problem which is not so easily dismissed. You are right. The next section explains why.

4. Steelmanning the problem: funnel-shaped action profiles

The problem goes deeper than the initial worry.

The “Pascal’s Mugging” scenario was invented recently by Eliezer Yudkowsky. But the more general philosophical problem of how to handle tiny probabilities of vast utilities has been around since 1670, when Blaise Pascal presented his infamous Wager. Pascal pointed out that the expected utility of an infinitely good possibility (e.g. going to heaven for eternity) is going to be larger than the expected utility of any possible outcome of finite reward, so long as the probability of the former is nonzero. So if you obey expected utility calculations, you’ll be a “fanatic”, living your entire life as if the only outcomes that matter are those of infinite utility. This is widely thought to be absurd, and so has spawned a large literature on infinite ethics.

A common response is to say that infinities are weird and leave this to the philosophers and mathematicians to sort out. But as the St. Petersburg paradox, which dates to 1713, shows, we still get problems even if we restrict our attention to outcomes of finite utility, and even if we restrict our attention to only finitely many possible outcomes. Consider the St. Petersburg Game: a fair coin is tossed repeatedly until it comes up heads. If the first heads occurs on toss n, the game ends and produces an outcome of utility 2^n.

If we consider all possible outcomes, the expected utility of playing the St. Petersburg game is infinite: There are infinitely many stars, and each one adds +1 to the total expected utility for playing the game, since outcome n has probability 0.5^n and utility 2^n, and 0.5^n × 2^n = 1. So, as with the original Pascal’s Wager involving possibilities of infinite reward, if we obey expected utility calculations we will be “fanatics” about opportunities to play St. Petersburg games. If we restrict our attention to the first N possible outcomes, the expected utility of playing the St. Petersburg game will be N—which is still problematic. People trying to sell us a ticket to play this game shouldn’t be able to make us pay whatever they want simply by drawing our attention to additional possible outcomes.
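To see the “+1 per outcome” arithmetic directly, here is a small illustrative Python calculation of the partial expected utility over the first N outcomes; the payoffs come from the game’s description above, and everything else is just for illustration.

```python
# St. Petersburg game: the first heads on toss n has probability 0.5**n and pays
# utility 2**n, so outcome n contributes 0.5**n * 2**n = 1 to the expected utility.

def st_petersburg_partial_eu(n_outcomes):
    """Expected utility counting only the first n_outcomes possible outcomes."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_outcomes + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_eu(n))
# Prints 10 10.0, 100 100.0, 1000 1000.0: counting the first N outcomes gives
# expected utility N, and the partial sums grow without bound as N grows.
```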

It gets worse. There’s another game, the Pasadena Game, which is like the St. Petersburg game except that possible outcomes oscillate between positive and negative utility as you look further and further to the right on the chart. The expected utility of the Pasadena Game is undefined, in the following sense: Depending on which order you choose to count the possible outcomes, the expected utility of playing the Pasadena Game can be any positive or negative number.
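To see the order-dependence concretely, here is an illustrative sketch. It assumes the standard formulation of the Pasadena Game from the literature, in which outcome n has probability 0.5^n and utility (-1)^(n-1) × 2^n / n, so it contributes (-1)^(n-1)/n to the expected utility; the post does not spell out the payoff schedule, so treat those particulars as an assumption.

```python
# Pasadena game (standard formulation, assumed here): outcome n has probability
# 0.5**n and utility (-1)**(n - 1) * 2**n / n, so it contributes (-1)**(n - 1) / n
# to the expected utility. The series of contributions converges only
# conditionally, so its value depends on the order in which terms are added.

def contribution(n):
    return (-1) ** (n - 1) / n

# In the "natural" order the partial sums approach ln(2), about 0.693.
print(sum(contribution(n) for n in range(1, 100001)))

def rearranged_partial_sum(target, steps=100000):
    """Greedy rearrangement: add positive terms while below the target and
    negative terms while above it. The running total homes in on any target."""
    pos = iter(range(1, 10**9, 2))  # odd n: positive contributions
    neg = iter(range(2, 10**9, 2))  # even n: negative contributions
    total = 0.0
    for _ in range(steps):
        n = next(pos) if total < target else next(neg)
        total += contribution(n)
    return total

print(rearranged_partial_sum(3.0))   # close to 3
print(rearranged_partial_sum(-2.0))  # close to -2
```

Because the same terms can be rearranged to approach any value, there is no privileged answer to the question of what the expected utility of playing is.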

Good thing we aren’t playing these games, right? Unfortunately, we are playing these games every day. Consider a real-life situation, such as the one discussed earlier about where to donate your $7,000. Consider one of the actions available to you in that situation—say, donating $7,000 to AMF. Imagine making the chart bigger and bigger as you consider more and more possible outcomes of that action: “What if all of the nets I buy end up killing someone? What if they save the life of someone who goes on to cure cancer? What if my shipment of nets starts world war three?” At first, as you add these possibilities to the chart, they make little difference to the total expected utility of the action. They are more improbable than they are valuable or disvaluable—farther to the right than they are above or below—so they add or subtract tiny amounts from the total sum. Here is what that would look like:

Eventually you would start to consider some extremely unlikely possibilities, like “What if the malaria nets save the life of a visitor from another dimension who goes on to save 10^10^10^10 lives…” and “What if instead the extradimensional visitor kills that many people?” Then things get crazy. These outcomes are extremely improbable, yes, but they are more valuable or disvaluable than they are improbable. (This key claim will be argued for more extensively in the appendix.) So they dominate the expected utility calculation. Whereas at the beginning, the expected utility of donating $7,000 to AMF was roughly equal to saving 1 life for sure, now the expected utility oscillates wildly from extremely high to extremely low, as you consider more and more outlandish possibilities. And this oscillation continues as long as you continue to consider more possibilities. Visually, the action profile for donating $7,000 to AMF is funnel-shaped:

Considering all possible outcomes, actions with funnel-shaped action profiles have undefined expected utility. If instead you only consider outcomes of probability p or higher for some sufficiently tiny p… then expected utility will be well-defined but it will be mostly a function of whatever is happening in the region near the cutoff. (The top-right and bottom-right corners of the funnel will weigh the most heavily in your decision-making.) For discussion of why this is, see appendix 8.1 (forthcoming).
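Both failure modes can be illustrated with a toy model. The numbers below are invented; the only feature that matters is the one the appendix argues for, namely that the outcomes’ utilities grow in magnitude faster than their probabilities shrink.

```python
# Toy "funnel-shaped" action profile (made-up numbers): outcome n has probability
# 2**-n, and its utility alternates in sign and grows in magnitude faster than
# the probability shrinks, so outcome n contributes (-1.5)**n to expected utility.

def probability(n):
    return 2.0 ** -n

def utility(n):
    return (-3.0) ** n  # alternately very good and very bad as n grows

def running_eu(max_outcomes):
    """Expected utility counting only the first max_outcomes outcomes."""
    return sum(probability(n) * utility(n) for n in range(1, max_outcomes + 1))

for n in range(1, 11):
    print(n, running_eu(n))
# The running total oscillates ever more wildly (roughly -1.5, 0.75, -2.6, 2.4, -5.2, ...):
# each extra outlandish outcome flips the sign and grows the magnitude.

def eu_with_cutoff(p_cutoff):
    """Only count outcomes whose probability is at least p_cutoff."""
    n, total = 1, 0.0
    while probability(n) >= p_cutoff:
        total += probability(n) * utility(n)
        n += 1
    return total

print(eu_with_cutoff(1e-6))  # about -1331: dominated by the last outcomes above the cutoff
print(eu_with_cutoff(1e-7))  # about -6734: nudging the cutoff changes the answer drastically
```

Under these made-up numbers the full sum never settles down, and any probability cutoff makes the answer hostage to whatever happens to sit just above the cutoff; that is the sense in which funnel-shaped profiles mock expected utility maximization.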

So the problem posed by tiny probabilities of vast utilities is that all available actions have funnel-shaped action profiles, making a mockery out of expected utility maximization. The problem is not that it would be difficult to calculate expected utilities; the problem is that expected utility calculations would give absurd answers if calculated correctly. Even if expected utility maximization wouldn’t recommend giving money to the Mugger, it would probably recommend something similarly silly.

This is the second post in a series. The first post is here and the third post is here. The fourth post is here and the fifth post is here.


Notes:

10: O is the set of all possible outcomes. I assume that there are countably many possible outcomes. I don’t think the problems would go away if we were dealing with uncountably many.

11: Note that the deeper problem is also discussed by Yudkowsky in his original post. There are, as I see it, three levels to the problem: Level I, discussed in section 2. Level II, discussed in section 4, which is about funnel-shaped action profiles. Level III, mentioned in section 4, which is the fully general version of the problem where we consider all possible outcomes, including those involving infinite rewards.

12: The precise form this takes depends on some details. See Hájek (2012) and the SEP for further discussion.

13: See e.g. Bostrom (2011) for a good introduction and overview. For a cutting-edge demonstration of how paradoxical infinite ethics can get (impossibility results galore!), see Askell (2018).

14: The probability that the coin lands tails on each of the first seven tosses is 0.5^7 = 0.0078125, so with probability 0.9921875 the game ends within seven tosses and pays at most 2^7 = $128. So if e.g. N = 1 billion, then you’ll pay hundreds of millions of dollars to play a game that has a more than 99% chance of paying at most $128.

15: See e.g. Hájek & Smithson (2012).

16: This assumes what I take to be the standard view, in which strictly speaking there are infinitely many possible outcomes consistent with your evidence at any given time. If instead you think that there really are only finitely many possibilities, then see the next sentence.

17: I call it “funnel-shaped” because that’s an intuitive and memorable label. It’s not a precisely defined term. Technically, the problem is a bundle of related problems: Expected utility being undefined, and/or unduly sensitive to tiny changes in exactly which tiny possibilities you consider, and/or infinite for every action.

18: To spell that out a bit: We’d like to say that the ideally correct way to calculate expected utilities would be to consider all infinitely many possibilities, but that of course for practical purposes we should merely try to approximate that by considering some reasonably large set of possibilities. With funnel-shaped action profiles, there seems to be no such thing as a reasonably large set of possibilities, and no good reason to approximate the ideal anyway, since the ideal calculation always outputs “undefined.”

19: For example, if some possibilities have infinite value, then if one of our actions is even the tiniest bit more likely to lead to infinite value than another action, we’ll prioritize it. If there is a possibility of infinite value for every available action, then the expected utilities of all actions will be the same. For more discussion of this sort of thing, see appendix 8.3 (forthcoming).


Bibliography for this section

Askell, Amanda. (2018) Pareto Principles in Infinite Ethics. PhD Thesis, Department of Philosophy, New York University.

–This thesis presents some compelling impossibility results regarding the comparability of many kinds of infinite world. It also draws some interesting implications for ethics more generally.

Hájek, Alan. (2012) Is Strict Coherence Coherent? Dialectica 66 (3):411-424.

–This is Hájek’s argument against regularity/open-mindedness, based on (among other things) the St. Petersburg paradox.

Hájek, Alan and Smithson, Michael. (2012) Rationality and indeterminate probabilities. Synthese 187:33–48. DOI 10.1007/s11229-011-0033-3

–The argument appears here also, in slightly different form.

Bostrom, Nick. (2011) Infinite Ethics. Analysis and Metaphysics, Vol. 10: pp. 9-59.

–Discusses various ways to avoid “infinitarian paralysis,” i.e. ways to handle outcomes of infinite value sensibly.