
Pascal’s mugging


Pascal’s mugging is a thought experiment intended to raise a problem for expected value theory. Unlike Pascal’s wager, Pascal’s mugging does not involve infinite utilities or probabilities, so the problem it raises is separate from any of the known paradoxes of infinity.

The thought experiment

The thought experiment and its name first appeared in a blog post by Eliezer Yudkowsky.[1] Nick Bostrom later elaborated it in the form of a fictional dialogue.[2]

In Yudkowsky’s original formulation, a person is approached by a mugger who threatens to kill an astronomical number of people unless the person hands over five dollars. Even a tiny probability assigned to the hypothesis that the mugger will carry out the threat seems sufficient to make paying the five dollars better than refusing, in expectation: the minuscule chance that the mugger is willing and able to kill astronomically many people is more than compensated for by the enormous value of what is at stake. (If one thinks the probability is too low, the number of people the mugger threatens to kill can be increased arbitrarily.) The thought experiment supposedly raises a problem for expected value theory because it seems intuitively absurd that we should give money to the mugger, yet this is what the theory apparently implies.
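The structure of the argument can be made concrete with a toy expected-value calculation. The Python sketch below uses purely illustrative numbers; the probability, the number of lives, and the utility units are assumptions, not part of the original formulation:

```python
# A minimal sketch of the expected-value comparison at the heart of the
# thought experiment. All numbers are illustrative assumptions.

probability_mugger_truthful = 1e-20   # tiny credence that the threat is real
lives_at_stake = 1e30                 # astronomical number of lives threatened
value_per_life = 1.0                  # value of one life, in arbitrary units
cost_of_paying = 5.0                  # disvalue of handing over five dollars,
                                      # in the same arbitrary units

# Expected value of paying: the threatened deaths are averted in the unlikely
# event that the threat is real, at the certain cost of five dollars.
ev_pay = probability_mugger_truthful * lives_at_stake * value_per_life - cost_of_paying

# Expected value of refusing: nothing is lost for sure, but there is a tiny
# chance of an enormous loss.
ev_refuse = -probability_mugger_truthful * lives_at_stake * value_per_life

print(ev_pay > ev_refuse)  # True: paying wins in expectation, and however small
                           # the probability, lives_at_stake can always be
                           # raised until it does
```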

Responses

A variety of responses have been developed. One common response is to revise or reject expected value theory. A frequent revision is to ignore scenarios whose probability is below a certain threshold.

This response, however, has a number of problems. One problem is that the threshold seems arbitrary, regardless of where it is set. A critic could always say: “Why do you set the threshold at that value, rather than e.g. one order of magnitude higher or lower?” A more fundamental problem is that it seems that whether a scenario falls below or above a certain threshold is contingent on how the space of possibilities is carved up. For example, an existential risk of 1-in-100 per century can be redescribed as an existential risk of 1-in-5.2 billion per minute. If the threshold is set to a value between those two numbers, whether one should or should not ignore the risk will depend merely on how one describes it.
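The arithmetic behind this redescription can be checked roughly as follows. The sketch treats the per-minute risk as the per-century risk spread evenly over the century’s minutes, which is an approximation, and the one-in-a-million threshold is a hypothetical value chosen only to illustrate the problem:

```python
# Rough check of the redescription point above.

minutes_per_century = 100 * 365.25 * 24 * 60   # ~52.6 million minutes
risk_per_century = 1 / 100
risk_per_minute = risk_per_century / minutes_per_century  # approximation

print(f"risk per minute is about 1 in {1 / risk_per_minute:,.0f}")
# risk per minute is about 1 in 5,259,600,000 (roughly 1 in 5.2 billion)

# A probability threshold set anywhere between the two numbers, e.g. 1e-6,
# says to take the risk seriously under one description and ignore it under
# the other.
threshold = 1e-6  # illustrative, hypothetical threshold
print(risk_per_century > threshold)   # True: above the threshold
print(risk_per_minute > threshold)    # False: below the threshold
```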

Another response is to adopt a prior that penalizes hypotheses in proportion to the number of people they imply we can affect. That is, one could adopt a view on which there is roughly a 1 in 10^n chance that someone has the power to affect 10^n people. Given this penalty, the mugger can no longer resort to the expedient of increasing the number of people they threaten to kill in order to make the offer sufficiently attractive: as that number increases, the probability that the mugger can actually kill them decreases commensurately, and the expected value of the mugger’s successive proposals remains the same.
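A minimal sketch of how such a penalizing prior neutralizes the mugger’s escalation, assuming (purely as an illustration) that the prior probability of being able to affect N people scales as 1/N:

```python
# Sketch of the penalizing-prior response. The 1/N scaling and the choice of
# exponents are illustrative assumptions.

def expected_benefit_of_paying(n_people_threatened: float) -> float:
    """Expected number of lives saved by complying, ignoring the $5 cost."""
    prior_mugger_has_power = 1.0 / n_people_threatened  # penalty grows with the claim
    return prior_mugger_has_power * n_people_threatened  # ~1.0, up to rounding

for exponent in (6, 12, 100):
    n = 10.0 ** exponent
    print(exponent, expected_benefit_of_paying(n))
# Each escalation of the threat is exactly offset by the prior, so the expected
# value of the mugger's successive proposals stays constant.
```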

A final response is to just “bite the bullet” and accept that if the mugger’s proposal is better in expectation, one should indeed give them the five dollars. This approach becomes more plausible when combined with a debunking explanation of the intuition that paying the mugger would be absurd. For example, one can argue that human brains cannot adequately represent very large or very small numbers, and that therefore intuitions triggered by thought experiments making use of such quantities are unreliable and should not be given much evidential weight.

Implications

Regardless of how one responds to Pascal’s mugging, it is important to note that it does not appear to affect the value assigned to “high-stakes” causes or interventions prioritized within the effective altruism community, such as AI safety research or other forms of existential risk mitigation. The case for working on these causes is not fundamentally different from more mundane arguments which do not plausibly fall under the scope of Pascal’s mugging, such as voting in an election.[3][4]

It is also worth stressing that Pascal’s mugging involves both very high stakes and very small probabilities, but the term is sometimes incorrectly applied to cases involving high stakes, regardless of their probability.[5]

Further reading

Bostrom, Nick (2009) Pascal’s mugging, Analysis, vol. 69, pp. 443–445.

Yudkowsky, Eliezer (2007) Pascal’s mugging: tiny probabilities of vast utilities, LessWrong, October 19.

Related entries

alternatives to expected value theory | altruistic wager | decision theory | fanaticism | risk aversion

1. Yudkowsky, Eliezer (2007) Pascal’s mugging: tiny probabilities of vast utilities, LessWrong, October 19.

2. Bostrom, Nick (2009) Pascal’s mugging, Analysis, vol. 69, pp. 443–445.

3. ^

4. Wiblin, Robert (2015) Saying ‘AI safety research is a Pascal’s Mugging’ isn’t a strong response, Effective Altruism Forum, December 15.

5. For discussion of a parallel misapplication of Pascal’s wager, see Yudkowsky, Eliezer (2009) The Pascal’s wager fallacy fallacy, Overcoming Bias, March 17.
