Tiny Probabilities of Vast Utilities: A Problem for Long-Termism?

Suppose someone tries to convince you to give them $5, with the following argument:

Pascal’s Mugger: “I’m actually a traveller from another dimension, where 10^10^10^10 lives are in danger! Quick, I need $5 to save those lives, no time to explain! I don’t expect you to believe me, of course, but if you follow the expected utility calculation, you’ll give me the money—because surely the probability that I’m telling the truth, while tiny, is at least 1-in-10^10^10^10.”

Now suppose someone tries to convince you to donate $5 to their organization, with the following argument:

Long-termist: “My organization works to prevent human extinction. Admittedly, the probability that your donation will make the difference between success and failure is tiny, but if it does, then you’ll save 10^40 lives! And surely the probability is greater than 1-in-10^40. So you are saving at least 1 life, in expectation, for $5—much better than the Against Malaria Foundation!”

Obviously, we should refuse the Mugger. Should we also refuse the long-termist, by parallel reasoning? In general, do the reasons we have to refuse the Mugger also count as reasons to de-prioritize projects that seem high-expected-value but have small probabilities of success—projects like existential risk reduction, activism for major policy changes, and blue-sky scientific research?

This article explores that question, along with the more general issue of how tiny probabilities of vast utilities (i.e. vast benefits) should be weighed in our decision-making. It draws on academic philosophy as well as research from the effective altruism community.

Summary

There are many views about how to handle tiny probabilities of vast utilities, but all of them are controversial. Some of these views undermine arguments for mainstream long-termist projects and some do not. However, long-termist projects shelter within the herd of ordinary behaviors: It is difficult to find a view that undermines arguments for mainstream long-termist projects without also undermining arguments for behaviors like fastening your seatbelt, voting, or building safer nuclear reactors.

Amidst this controversy, it would be naive to say things like “Even if the probability of preventing extinction is one in a quadrillion, we should still prioritize x-risk reduction over everything else…”

Yet it would also be naive to say things like “Long-termists are victims of Pascal’s Mugging.”

Table of contents:

1. Introduction and overview

2. The initial worry: Does giving to the Mugger maximize expected utility?

3. Defusing the initial worry

4. Steelmanning the problem: funnel-shaped action profiles

5. Solutions exist, but some of them undermine long-termist projects

6. Sheltering in the herd: a defense of mainstream long-termist projects

7. Conclusion

8. Appendix

9. Bibliography

2. The initial worry: Does giving to the Mugger maximize expected utility?

This section explains the problem in more detail, and in particular explains why “But the probability that Pascal’s Mugger is telling the truth is extremely low” isn’t a good solution.

Consider the following chart:

Each color is an action you are considering; each star is a possible outcome. Each possible outcome has a probability, i.e. the answer to the question “Supposing we do this action, what’s the probability of this outcome happening?” Each possible outcome also has a utility, i.e. the answer to the question “Supposing we do this action and this outcome happens, how good would that be?”

If we obey expected utility calculations, we’ll be indifferent between, for example, getting utility 1 for sure and getting utility 100 with probability 0.01. Hence the blue lines: If we obey normal expected utility calculations, we’ll choose the action which is on the highest blue line.
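
To make the bookkeeping concrete, here is a minimal sketch in Python of the kind of expected utility calculation being described. The actions, probabilities, and utilities are made-up illustrations, not figures from this article.

```python
# Minimal sketch of an expected-utility comparison.
# Each action is a list of (probability, utility) pairs, one per possible outcome.
# All numbers are made up for illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

actions = {
    "utility 1 for sure":              [(1.0, 1)],
    "utility 100 with probability 1%": [(0.01, 100), (0.99, 0)],
    "utility 3 on a coin flip":        [(0.5, 3), (0.5, 0)],
}

for name, outcomes in actions.items():
    print(f"{name}: expected utility = {expected_utility(outcomes)}")

# The first two actions tie at expected utility 1 -- the indifference described
# above (they sit on the same blue line) -- while the coin flip, at 1.5, would
# be preferred by anyone who simply maximizes expected utility.
```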

You’ll notice that the red and blue stars are not on the chart. This is because they would be way off the currently shown portion. The long-termist caricatured earlier argues that the blue star should be placed at around 10^40—about thirty centimeters above the top of the chart—and hence that even if its probability is small, so long as it is above 0.1^40, it will be preferable to giving to AMF.
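
Spelled out (using the 10^40 figure that, per the notes, is a made-up but conservative estimate of the lives at stake), the long-termist’s arithmetic is just:

$$\text{EU(donate \$5)} \;\ge\; \Pr(\text{the \$5 makes the difference}) \times 10^{40}\ \text{lives} \;\ge\; 10^{-40} \times 10^{40}\ \text{lives} \;=\; 1\ \text{life},$$

i.e. at least one life saved in expectation per $5 donated, which is the sense in which the caricatured long-termist claims to beat AMF.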

What about the Mugger? The red star should be placed far to the right on this chart; its probability is extremely tiny. How tiny do you think it is—how many meters to the right would you put it?

Even if you put it a billion light-years to the right on the chart, if you obey expected utility calculations, you’ll prefer it to all the other options here. Why? Because 10^10^10^10 is a very big number. The red star is farther above than it is to the right; the utility is more than enough to make up for the low probability.
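
A rough back-of-the-envelope check, reusing the chart-scale convention from the notes (roughly one order of magnitude per centimeter, with a billion light-years being about 10^27 centimeters): placing the red star a billion light-years to the right amounts to assigning a probability of roughly 10^(-10^27), and even then

$$10^{10^{10^{10}}} \times 10^{-10^{27}} \;=\; 10^{\,10^{10^{10}} - 10^{27}} \;\approx\; 10^{10^{10^{10}}},$$

since 10^27 is utterly negligible next to 10^10^10, the number of orders of magnitude in 10^10^10^10. The expected utility remains astronomically larger than that of any other option on the chart.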

Why is this so? Why isn’t the probability 0.1^10^10^10 or less? To some readers it will seem obvious—after all, if you made a bet you were this confident about every second until the end of the universe, surely there’s a reasonable chance you’d lose at least one of them, yet that would be a mere 10^10^10 bets. Nevertheless, intuitions vary, and thus far we haven’t presented any particular argument that the red star is farther above than it is to the right. Unfortunately, there are several good arguments for that conclusion. For reasons of space, these are mostly in the appendix; for now, I’ll merely summarize one of the arguments.
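
A rough sketch of the arithmetic behind that betting intuition, treating the bets as independent (an idealization): if each bet is lost with probability p, then the chance of losing at least one of N bets is

$$1 - (1-p)^{N} \;\approx\; Np \qquad \text{when } Np \ll 1.$$

Plugging in p = 0.1^10^10^10 and N = 10^10^10 gives Np ≈ 10^(10^10 − 10^10^10), which is vanishingly small: at that level of confidence you should expect to lose none of the bets. Conversely, if there is a reasonable chance of losing at least one, the per-bet probability of being wrong must be at least on the order of 1/N = 0.1^10^10, which is incomprehensibly larger than 0.1^10^10^10.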

Argument from hypothetical updates:

In a nutshell: 0.1^10^10^10 is such a small probability that no amount of evidence could make up for it; you’d continue to disbelieve no matter what happened. But that’s unreasonable:

Suppose that, right after the Mugger asks you for money, a giant hole appears in the sky and an alien emerges from it, yelling “You can’t run forever!” The Mugger then makes some sort of symbol with his hands, and a portal appears right next to you both. He grabs you by the wrist and pulls you with him as he jumps through the portal… Over the course of the next few days, you go on to have many adventures together as he seeks to save 10^10^10^10 lives, precisely as promised. In this hypothetical scenario, would you eventually come to believe that the Mugger was telling the truth? Or would you think that you’re probably hallucinating the whole thing?

It can be shown that if you decide after having this adventure that the Mugger-was-right hypothesis is more likely than hallucination, then before having the adventure you must have thought the Mugger-was-right hypothesis was significantly more likely than 0.1^10^10^10. (See appendix 8.1.1, to come.) In other words, if you really think the probability of the Mugger-was-right outcome is as small as 0.1^10^10^10, you would continue disbelieving the Mugger even if he took you on an interdimensional adventure that seemingly verified all his claims. Since you wouldn’t continue disbelieving, you don’t really assign it a probability that small.
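
A rough sketch of the Bayesian bookkeeping behind this claim (the full argument is deferred to the appendix; the hallucination probability below is an illustrative guess, not a figure from this article): let M be the hypothesis that the Mugger is telling the truth and E be the evidence of the adventure. Bayes’ theorem in odds form says the posterior odds equal the prior odds times the likelihood ratio:

$$\frac{\Pr(M \mid E)}{\Pr(\neg M \mid E)} \;=\; \frac{\Pr(M)}{\Pr(\neg M)} \cdot \frac{\Pr(E \mid M)}{\Pr(E \mid \neg M)} \;\le\; \frac{\Pr(M)}{\Pr(\neg M)} \cdot \frac{1}{\Pr(E \mid \neg M)}.$$

So if the posterior odds are to reach even 1 (you end up finding the Mugger-was-right hypothesis at least as likely as not), the prior odds on M must have been at least Pr(E given not-M): roughly, the probability of hallucinating such an adventure given that the Mugger is lying. If that hallucination probability is, say, on the order of 10^-20, then the prior probability that the Mugger was telling the truth must also have been at least around 10^-20, which is vastly greater than 0.1^10^10^10.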

Here is our problem: Long-termists admit that the probability of achieving the desired outcome (saving the world, etc.) by donating to them is small. Yet, since the utility is large enough to outweigh how small the probability is, the expected utility of donating to them is very high, higher than the expected utility of donating to other charitable causes like AMF. So, they argue, you should donate to them. The Mugger seems to be saying something very similar; the only difference seems to be that the probabilities are even smaller and the utilities are even larger. In fact, the utilities are so much larger that the expected utility of giving to the Mugger is much higher than the expected utility of giving to the long-termist! Insofar as we ought to maximize expected utility, it seems we should give money to the Mugger over the long-termist.

So (the initial worry goes) since the Mugger’s argument is obviously unacceptable, we should reject the long-termists’ argument as well: Expected utility arguments are untrustworthy. Barring additional reasons to give to the long-termist, we should save our money for other causes (like AMF) instead.

Next post in this series: Defusing the Initial Worry and Steelmanning the Problem


Notes:

1: For convenience throughout this article I will omit parentheses when chaining exponentials together, i.e. by 10^10^10^10 I mean 10^(10^(10^10)).

2: The mugger is called Pascal’s Mugger in homage to Pascal’s Wager. Note that the mugger is not waiting to see what probability you assign to his truthfulness and then picking N to beat it. This would not work, since the probability depends on what N is. The mugger is simply picking an extremely high N and then counting on your intellectual honesty to do the rest: do you really assign a probability of less than 0.1^10^10^10, or are you just rationalizing why you aren’t going to give the money? More on this later.

3: Long-termism is roughly the idea that when choosing which causes to prioritize, we should be thinking about the world as a whole, not just as it is now, but as it will be far into the future, and doing what we think will lead to the best results overall. Because the future is so much bigger than the present, the long-term effects of our decisions end up mattering quite a lot in long-termist calculations. Common long-termist priorities are e.g. preventing human extinction, steering society away from harmful equilibria, and attempting to influence societal values in a long-lasting way. See here for more. Note that not all long-termist projects involve tiny probabilities of success; it’s just that so far many of them do. Making a big-picture difference is hard.

4: 10^40 lives saved by preventing human extinction is a number I made up, but it’s a conservative estimate; see https://nickbostrom.com/astronomical/waste.html. Note that this assumes we don’t take a person-affecting view in population ethics. If we do, then they aren’t really “lives saved,” but rather “lives created,” and thus don’t count.

5: The author would like to thank Amanda Askell, Max Dalton, Yoaav Isaacs, Miriam Johnson, Matt Kotzen, Ramana Kumar, Justis Mills, and Stefan Schubert for helpful discussion and comments.

6: “Utility” is meant to be a non-loaded term. It is just a number representing how good the outcome is.

7: To see this, think of how high above the top of the chart the red star is. 10^10^10^10 lives saved is 10^10^10 centimeters or so above the top of the chart; that is, 10^10,000,000,000 centimeters. A billion light years, meanwhile, is ~10^27 centimeters.

8: This argument originated with Yudkowsky, though I’ve modified it slightly: http://lesswrong.com/lw/h8m/being_halfrational_about_pascals_wager_is_even/

9: This article only considers whether Pascal’s Mugging (and the more general problem it is an instance of) undermines the most prominent arguments given for causes like existential risk reduction, steering the future, etc.: expected utility calculations that multiply a large utility by a small probability. (This article will sometimes call these “long-termist arguments” for short.) However, there are other arguments for these causes, and also other arguments against. For example, take x-risk reduction. Here are some (sketches of) other arguments in favor: (A) We owe it to the people who fought for freedom and justice in the past, to ensure that their vision is eventually realized, and it’s not going to be realized anytime soon. (B) We are very ignorant right now, so we should focus on overcoming that—and that means helping our descendants be smarter and wiser than us and giving them more time to think about things. (C) Analogy: Just as an individual should make considerable sacrifices to avoid a 1% risk of death in childhood, so too should humanity make considerable sacrifices to avoid a 1% risk of extinction in the next century. And here are some arguments against: (A) We have a special moral obligation to help people that we’ve directly harmed, and those people are not in the future. (B) Morality is about obeying your conscience, not maximizing expected utility, and our consciences don’t tell us to prevent x-risk. I am not endorsing any of these arguments here, just giving examples of additional arguments one might want to consider when forming an all-things-considered view.