A Parfitian Veil of Ignorance
[Edit: I would be very surprised if I were the first person to have proposed this; it probably exists somewhere else, I just don't know of a source.]
Prompted by Holden's discussion of the veil of ignorance as a utilitarian intuition pump (contra Rawls), I thought about an alternative to the standard veil. My intuitions about tradeoffs of massive harms for a large number of small benefits (at least for some conceptions of "benefit") diverge from those in his post, when considering this version.
The standard veil of ignorance asks you to imagine being totally ignorant as to which person you will be in a population. (Assume we're only considering fixed population sizes, so there's no worry that this exercise sneaks in average utilitarianism, etc.)
But the many EA fans of Parfit (or Buddha) know that this idea of a discrete person is metaphysically problematic. So we can look at another approach, inspired by empty individualism.
Imagine that when evaluating two possible worlds, you don't know which slice of experience in each world you would be. To make things easy enough to grasp, take a "slice" to be just the longest amount of time necessary for a sentient being to register an experience, but not much longer. Let's say one second.
These worlds might entail probabilities of experiences as well. So, since it's hard to intuitively grasp probabilities as effectively as frequencies, suppose each world is "re-rolled" enough times that each outcome happens at least once, in proportion to its probability. E.g., in Holden's example of a 1 in 100 million chance of someone dying, the experiences of that person are repeated 100 million times, and one of those experience streams is cut short by death.
So now a purely aggregative and symmetric utilitarian offers me a choice, from behind this veil of ignorance, between two worlds. Option 1 consists of a person who lives for one day with constantly neutral experiences: no happiness, no suffering (including boredom). In option 2, that person instead spends that day relaxing on a nice beach, with a 1 in 100 million chance of ending that day by spiraling into a depression (instead of dying peacefully in their sleep).
I imagine, first, rescaling things so in #1 the person lives 100 million days of neutrality, and in #2, they live 99,999,999 peaceful beach-days (suspend your disbelief and imagine they never get bored) followed by a beach-day that ends in depression. Then I imagine I don't know which moment of experience in either of these options I'll be.
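The rescaling arithmetic here can be made concrete with a quick sketch. Note that the duration of the final depressive spiral (one hour) is my own assumption, added purely for illustration; the post above doesn't specify it:

```python
# Numerical sketch of the re-rolling/rescaling move, using one-second
# experience slices. The one-hour spiral length is an assumption.
SECONDS_PER_DAY = 24 * 60 * 60   # one-second slices in a day
N = 100_000_000                  # number of "re-rolls" of the day

total_slices = N * SECONDS_PER_DAY   # slices in either option
misery_slices = 60 * 60              # assumed: the spiral fills the final hour

# Chance that a uniformly random slice in option 2 is a misery slice:
p_misery = misery_slices / total_slices
print(f"{p_misery:.2e}")             # vanishingly small, yet those slices exist
```

The point the sketch illustrates is that the probability of landing on a misery slice is tiny, but nonzero: from behind this veil, some slices simply are the depression.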
Choosing #1 seems pretty defensible to me, from this perspective. Several of those experience-moments in #2 are going to consist purely of misery. They won't be comforted by the fact that they're rare, or that they're in the context of a "person" who otherwise is quite happy. They'll just suffer.
I'm not saying the probabilities don't matter. Of course they do; I'd rather take #2 than a third option where there's a 1 in 100 thousand chance of depression. I'm also pretty uncertain where I stand when we modify #1 so that the person's life is a constant mild itch instead of neutrality. The intuition this thought experiment prompts in me is the lexical badness of at least sufficiently intense suffering, compared with happiness or other goods. And I think the reason it prompts such an intuition is that in this version of the veil of ignorance, discrete "persons" don't get to dictate what package of experiences is worth it, i.e., what happens to the multitude of experience-moments in their life. Instead, one has to take the experience-moments themselves as sovereign, and decide how to handle conflicts among their preferences. (I discuss this more here.)
I find the framing of "experience slices" definitely pushes my intuitions in the same direction.
One question I like to think about is whether I'd choose to gain either
(a) a neutral experience
or
(b) a coin flip: reliving all the positive experience slices of my life if heads, and reliving all the negative ones if tails.
My life feels highly net positive, but I'd almost certainly not take option (b). I'd guess some risk-aversion intuition is also being snuck in here, though.