Biting the Bullet on Pascal’s Mugging

Introduction

Consider this variant of the Pascal’s mugging thought experiment. Suppose you are given a choice between two options. One option is to save one person’s life. The other is to have a one-in-a-quadrillion chance of saving a quadrillion and one people’s lives. Assume that everyone’s lives are worth living and equally good, and that each of these people lives completely independently from everyone else, so there are no side effects of anyone dying beyond that person losing their good life. Then, according to an expected-value-maximisation theory of morality, you ought to go for the one-in-a-quadrillion chance of saving a quadrillion and one lives since, in expectation, you would thereby save $(10^{15} + 1)/10^{15} > 1$ lives. Since such a decision would generally be considered extremely counter-intuitive, many people argue that Pascal’s mugging demonstrates a flaw in such a theory, and that we shouldn’t merely maximise expected moral value in cases where we have an extremely small probability of obtaining an extremely large reward. Here, I will use a thought experiment to argue that, despite being counter-intuitive, it is right to choose the option of having a one-in-a-quadrillion chance of saving a quadrillion and one lives.
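To make the expected-value comparison concrete, here is a quick check in exact rational arithmetic (the variable names are mine; the numbers are those of the dilemma above):

```python
from fractions import Fraction

QUADRILLION = 10**15

# Option A: save one life for certain.
ev_save_one = Fraction(1)
# Option B: a one-in-a-quadrillion chance of saving a quadrillion and one lives.
ev_gamble = Fraction(1, QUADRILLION) * (QUADRILLION + 1)

print(ev_gamble > ev_save_one)   # True: the gamble wins in expectation...
print(ev_gamble - ev_save_one)   # 1/1000000000000000: ...but only just.
```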

A Universal God

Suppose now that you are a god who is faced with making a choice like the one described above, but on many independent planets. By independent, I mean that nothing which can be affected by one planet can be affected by any other. Specifically, there are $N$ identical worlds in your dominion and you must choose an integer, $n$, between $0$ and $N$. Then, $n$ worlds will be randomly selected and, on each of them, you will independently have a probability $p$ of saving $L$ lives, where $pL > 1$ so that, in expectation, you will always save more than one life (in the scenario above, $p = 10^{-15}$ and $L = 10^{15} + 1$). On the remaining $N - n$ worlds, one life will be saved. What value of $n$ should you choose (i.e. on how many worlds should you attempt to save $L$ lives)?

Consider first the two “extreme” options: setting $n = 0$, thereby opting to save one life per world and $N$ lives in total, and setting $n = N$, thereby attempting to save $L$ lives on every world. Of course, taking $n = N$ will save more lives in expectation. Moreover, we can show that, unlike in the case with $N = 1$, where it was highly unlikely that attempting to save $L$ lives would result in a better outcome than just saving one life, it is virtually certain that setting $n = N$ will result in more lives being saved than would have been saved had we chosen $n = 0$, provided $N$ is sufficiently large. To see this, imagine we take $n = N$ and let $S_N$ denote the number of planets on which $L$ lives are saved, so that the total number of lives saved will be $L S_N$. Now, using the weak law of large numbers, we note that, as $N \to \infty$,

$$\frac{S_N}{N} \xrightarrow{P} p,$$

and so, since $p > \frac{1}{L}$ (because $pL > 1$),

$$P(L S_N > N) = P\left(\frac{S_N}{N} > \frac{1}{L}\right) \to 1.$$
In other words, if we imagine this scenario with a sufficiently large number of worlds, taking $n = N$ not only saves more lives on average than taking $n = 0$; it is almost guaranteed to in fact save more than $N$ lives. Hence, in this case, we are no longer faced with the objection that taking $n = N$ has only a very small probability of providing more moral value than taking $n = 0$. Based on this, it would seem to me very hard to justify taking $n = 0$, and there must therefore be a value of $n$ between $1$ and $N$ which would be a better choice than $0$. Indeed, since the worlds are identical and independent, the same reasoning applies to any increase in $n$: each additional world on which you attempt to save $L$ lives adds $pL - 1 > 0$ lives in expectation, so choosing $n = M + 1$ is better than choosing $n = M$ for any $M$. From now on, we will always assume that $N$ is large enough for this conclusion to hold.
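This behaviour is easy to see numerically. One-in-a-quadrillion odds are too small to simulate directly, so the sketch below uses a scaled-down analogue, $p = 1/10$ and $L = 11$ (so that $pL = 1.1 > 1$, as required), and estimates $P(L S_N > N)$ by Monte Carlo; the function name and parameter choices here are illustrative, not part of the argument:

```python
import random

def prob_gamble_beats_certainty(N, p, L, trials=200, seed=0):
    """Estimate P(L * S_N > N) when the gamble is taken on all N worlds,
    where S_N ~ Binomial(N, p) counts the worlds on which it pays off."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        s_n = sum(rng.random() < p for _ in range(N))
        if L * s_n > N:
            wins += 1
    return wins / trials

# Scaled-down analogue of the dilemma: a 1-in-10 chance of saving 11 lives.
p, L = 0.1, 11

# With a single world, the gamble rarely beats saving one life for certain
# (it wins with probability p = 0.1)...
print(prob_gamble_beats_certainty(N=1, p=p, L=L))
# ...but with many worlds, it is virtually certain to save more lives.
print(prob_gamble_beats_certainty(N=20_000, p=p, L=L))
```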

A Local God

Now, imagine instead that rather than one god making this choice for all worlds, there are instead many “local” gods, one for each independent world. Each god must make the decision either to save one life on their world or to have a probability $p$ of saving $L$ lives on their world. Suppose you are one such god. You know that $M$ of the other $N - 1$ gods are going to attempt to save $L$ lives, while the remaining $N - 1 - M$ gods will opt to save one life each. Assuming you believe the people of your world have the same moral value as the people on other worlds, it is now as though you are in the position of the universal god trying to decide whether it would be better to choose $n = M$ or $n = M + 1$. However, we have already argued that it would have been better for that universal god to choose $n$ to be $M + 1$ than $M$. Therefore, you ought to try to save $L$ lives on your planet and risk saving no one.
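One way to see why this marginal comparison does not depend on $M$: moving from $n = M$ to $n = M + 1$ always gains $pL - 1$ lives in expectation, whatever the other gods are doing. A quick exact check (the function and the illustrative sizes are my own, not part of the thought experiment):

```python
from fractions import Fraction

def expected_lives(n, N, p, L):
    """Expected total lives saved when the gamble is taken on n of N worlds."""
    return n * p * L + (N - n)

# The quadrillion-scale numbers from the introduction, held exactly.
p, L = Fraction(1, 10**15), 10**15 + 1
N = 1_000  # illustrative number of worlds

# The marginal gain from one more gambling world is p*L - 1 = 1/10**15,
# no matter how many other gods are already gambling.
for M in (0, 400, 999):
    gain = expected_lives(M + 1, N, p, L) - expected_lives(M, N, p, L)
    assert gain == Fraction(1, 10**15)
```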

Let us return now to the original dilemma posed in the introduction. In this scenario, you are like a local god, but you have no knowledge of whether or not there exist alien worlds with other local gods faced with the same decision. We have argued that if you knew that there were another $N - 1$ independent worlds in which someone was faced with the same decision, and that on exactly $M$ of them they would opt to have the small probability of saving a large number of lives on their world, then you should do the same. However, it seems absurd that the moral thing for you to do should depend on your knowledge of independent worlds since, by definition, nothing which you can affect through your decision could be affected by the existence of these planets. Therefore, you ought to choose to have a one-in-a-quadrillion probability of saving a quadrillion and one lives, regardless of whether or not these other worlds exist.

What are Your Thoughts?

I would be grateful for any feedback on and criticisms of the argument I have presented here. Of course, I have not mentioned versions of Pascal’s mugging in which there is an opportunity to gain an infinite reward, and I don’t think this thought experiment sheds any light on how to deal with lotteries involving different infinities. I apologise if I have accidentally plagiarised someone else’s thought experiment.
