Biting the Bullet on Pascal’s Mugging
Introduction
Consider this variant of the Pascal’s mugging thought experiment. Suppose you are given a choice between two options. One option is to save one person’s life. The other is to have a one in a quadrillion chance of saving a quadrillion and one people’s lives. Assume that everyone’s lives are worth living and equally good, and that each of these people lives completely independently from everyone else, so there are no side effects of anyone dying beyond that person losing their good life. Then, according to an expected value maximisation theory of morality, you ought to go for the one in a quadrillion chance of saving a quadrillion and one lives since, in expectation, you would save $\frac{10^{15}+1}{10^{15}}>1$ lives. Since such a decision would generally be considered extremely counter-intuitive, many people argue that Pascal’s mugging demonstrates a flaw in such a theory and that we shouldn’t merely maximise expected moral value in cases where we have an extremely small probability of obtaining an extremely large reward. Here, I will use a thought experiment to argue that, despite being counter-intuitive, it is right to choose the option of having a one in a quadrillion chance of saving a quadrillion and one lives.
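To make the arithmetic explicit, here is a minimal sketch in Python using exact fractions; the values come from the scenario above, while the variable names are my own.

```python
# Expected-value comparison for the two options in the introduction.
from fractions import Fraction

p = Fraction(1, 10**15)            # probability of success for the gamble
lives_if_success = 10**15 + 1      # a quadrillion and one lives

expected_certain = 1               # first option: save one life for sure
expected_gamble = p * lives_if_success

print(expected_gamble)                     # 1000000000000001/1000000000000000
print(expected_gamble > expected_certain)  # True, by one part in a quadrillion
```

The margin is a single part in a quadrillion, which is precisely why the expected value recommendation feels so counter-intuitive.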
A Universal God
Suppose now that you are a god who is faced with making a choice like the one described above but on many independent planets. By independent, I mean that nothing which can be affected by one planet can be affected by any other. Specifically, there are W identical worlds in your dominion and you must choose an integer, w, between 0 and W. Then, w worlds will be randomly selected and on each of them, you will independently have a probability q of saving n lives, where nq>1 so that, in expectation, you will always save more than one life. On the remaining W−w worlds, one life will be saved. What value of w should you choose (i.e. on how many worlds should you attempt to save n lives)?
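Before comparing particular choices of w, it is worth recording the expectation calculation this setup implies; I write $X_w$ for the number of the w selected worlds on which the gamble succeeds (a symbol introduced here for convenience). Since $X_w$ follows a Binomial(w, q) distribution and the total number of lives saved is $nX_w + (W-w)$, linearity of expectation gives

$$\mathbb{E}[nX_w + (W-w)] = wnq + (W-w) = W + w(nq-1),$$

which is strictly increasing in w whenever nq>1, so the expectation is maximised at w=W.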
Consider first the two “extreme” options: setting w=0, thereby opting to save one life per world and W lives in total, and setting w=W, thereby attempting to save n lives on every world. Of course, taking w=W will save more lives in expectation. Moreover, we can show that, unlike in the case with W=1, where it was highly unlikely that attempting to save n lives would result in a better outcome than just saving one life, it is virtually certain that setting w=W will result in more lives being saved than would have been saved had we chosen w=0, provided W is sufficiently large. To see this, take w=W and let X denote the number of planets on which n lives are saved, so that the total number of lives saved will be nX. Then, by the weak law of large numbers, as W→∞,
$$P\left(|X-qW|<\frac{W(nq-1)}{n}\right)\to 1$$

and so

$$P(nX>W)\to 1.$$

In other words, if we imagine this scenario with a sufficiently large number of worlds, taking w=W not only saves more lives on average than taking w=0; it is almost guaranteed to save more than W lives in fact. Hence, in this case, we are no longer faced with the objection that taking w=W has only a very small probability of providing more moral value than taking w=0. It therefore seems very hard to justify taking w=0, and since some positive value of w must then be a better choice than w=0, there must be a value $\bar{w}$ between 1 and W such that choosing $w=\bar{w}$ is better than choosing $w=\bar{w}-1$. From now on, we will always assume that W is large enough for this conclusion to hold.
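To see the weak law at work numerically, here is a small Monte Carlo sketch; the particular values of q, n, and W are illustrative assumptions (chosen only so that nq>1), not part of the original scenario.

```python
# Monte Carlo estimate of P(nX > W) when the universal god takes w = W.
import numpy as np

rng = np.random.default_rng(0)

q = 1e-6          # probability of saving n lives on each world
n = 2_000_000     # lives saved on a successful world, so nq = 2 > 1
W = 100_000_000   # number of worlds
trials = 1_000

# X ~ Binomial(W, q): number of worlds on which n lives are saved.
X = rng.binomial(W, q, size=trials)

# Fraction of trials in which the gamble beats the W lives guaranteed by w = 0.
print(np.mean(n * X > W))   # prints 1.0 here; the probability tends to 1 as W grows
```

With these parameters the mean of X is qW = 100, while the gamble only needs X > W/n = 50 to beat w=0, so the printed frequency is essentially 1.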
A Local God
Now, imagine instead that rather than one god making this choice for all W worlds, there are many “local” gods, one for each independent world. Each god must decide either to save one life on their world or to have a probability q of saving n lives there. Suppose you are one such god. You know that $\bar{w}-1$ of the other gods are going to attempt to save n lives, while the remaining $W-\bar{w}$ gods will opt to save one life each. Assuming you believe the people of your world have the same moral value as the people on other worlds, it is now as though you are in the position of the universal god deciding whether it would be better to choose $w=\bar{w}-1$ or $w=\bar{w}$. But we have already argued that the universal god should choose $w=\bar{w}$ over $w=\bar{w}-1$. Therefore, you ought to try to save n lives on your planet and risk saving no one.
Let us return now to the original dilemma posed in the introduction. In this scenario, you are like a local god, but you have no knowledge of whether there exist alien worlds with other local gods faced with the same decision. We have argued that if you knew there were another W−1 independent worlds in which someone was faced with the same decision, and that on exactly $\bar{w}-1$ of them they would opt for the small probability of saving a large number of lives, then you should do the same. However, it seems absurd that the moral thing for you to do should depend on your knowledge of independent worlds since, by definition, nothing you can affect through your decision could be affected by the existence of these planets. Therefore, you ought to choose to have a one in a quadrillion probability of saving a quadrillion and one lives, regardless of whether or not these other worlds exist.
What are Your Thoughts?
I would be grateful for any feedback and criticisms of the argument I have presented here. Of course, I have not addressed versions of Pascal’s mugging in which there is an opportunity to gain infinite reward, and I don’t think this thought experiment sheds any light on how to deal with lotteries involving different infinities. I apologise if I have accidentally plagiarised someone else’s thought experiment.