A Parfitian Veil of Ignorance
[Edit: I would be very surprised if I were the first person to have proposed this; it probably exists somewhere else. I just don’t know of a source.]
Prompted by Holden’s discussion of the veil of ignorance as a utilitarian intuition pump (contra Rawls), I thought about an alternative to the standard veil. When I consider this version, my intuitions about trading off massive harms for a large number of small benefits—at least for some conceptions of “benefit”—diverge from those in his post.
The standard veil of ignorance asks you to imagine being totally ignorant as to which person you will be in a population. (Assume we’re only considering fixed population sizes, so there’s no worry that this exercise sneaks in average utilitarianism, etc.)
But the many EA fans of Parfit (or Buddha) know that this idea of a discrete person is metaphysically problematic. So we can look at another approach, inspired by empty individualism.
Imagine that when evaluating two possible worlds, you don’t know which slice of experience in each world you would be. To make things easy enough to grasp, take a “slice” to be just long enough for a sentient being to register an experience, but not much longer. Let’s say one second.
These worlds might entail probabilities of experiences as well. And since frequencies are easier to grasp intuitively than probabilities, suppose each world is “re-rolled” enough times that each outcome happens at least once, in proportion to its probability. E.g., in Holden’s example of a 1 in 100 million chance of someone dying, the experiences of that person are repeated 100 million times, and one of those experience streams is cut short by death.
So now a purely aggregative and symmetric utilitarian offers me a choice, from behind this veil of ignorance, between two worlds. Option 1 consists of a person who lives for one day with constantly neutral experiences—no happiness, no suffering (including boredom). In option 2, that person instead spends that day relaxing on a nice beach, with a 1 in 100 million chance of ending that day by spiraling into a depression (instead of dying peacefully in their sleep).
I imagine, first, rescaling things so in #1 the person lives 100 million days of neutrality, and in #2, they live 99,999,999 peaceful beach-days—suspend your disbelief and imagine they never get bored—followed by a beach-day that ends in depression. Then I imagine I don’t know which moment of experience in either of these options I’ll be.
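To make the sampling concrete, here’s a minimal sketch in Python of the re-rolling and rescaling procedure. All the specific numbers (the per-moment welfare values, the one-hour depression spiral) are illustrative assumptions, not anything from Holden’s example; the point is just the shape of the calculation: flatten each option into a population of one-second experience-moments, then ask what a uniformly drawn moment faces.

```python
import random

# Illustrative per-moment welfare values (assumptions, not from the post).
NEUTRAL = 0.0        # a moment of the neutral day
BEACH = 1.0          # a moment relaxing on the beach
DEPRESSION = -100.0  # a moment of depressive suffering

MOMENTS_PER_DAY = 86_400   # one-second experience slices in a day
REROLLS = 100_000_000      # a 1-in-100-million chance means 100M re-rolls

def option_1():
    """100 million re-rolled neutral days, flattened into one-second moments.
    A world is represented as a list of (welfare, count) pairs."""
    return [(NEUTRAL, REROLLS * MOMENTS_PER_DAY)]

def option_2(depression_moments=3_600):
    """Same number of moments, but beach instead of neutral, with one re-roll's
    final hour (an assumed duration) spent spiraling into depression."""
    good = REROLLS * MOMENTS_PER_DAY - depression_moments
    return [(BEACH, good), (DEPRESSION, depression_moments)]

def random_moment(world):
    """Draw the experience-moment you'd be, from behind this veil."""
    total = sum(count for _, count in world)
    pick = random.randrange(total)
    for welfare, count in world:
        if pick < count:
            return welfare
        pick -= count

def mean_welfare(world):
    total = sum(count for _, count in world)
    return sum(w * c for w, c in world) / total

for name, world in [("#1", option_1()), ("#2", option_2())]:
    print(name, "mean welfare per moment:", mean_welfare(world))
# #2 wins on the per-moment average, yet a nonzero number of its moments
# consist purely of misery -- which is the crux of what follows.
```

(Representing a world as (welfare, count) pairs keeps this tractable: the trillions of moments never need to be materialized one by one.)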
Choosing #1 seems pretty defensible to me, from this perspective. Several of those experience-moments in #2 are going to consist purely of misery. They won’t be comforted by the fact that they’re rare, or that they’re in the context of a “person” who otherwise is quite happy. They’ll just suffer.
I’m not saying the probabilities don’t matter. Of course they do; I’d rather take #2 than a third option where there’s a 1 in 100 thousand chance of depression. I’m also pretty uncertain where I stand when we modify #1 so that the person’s life is a constant mild itch instead of neutrality. The intuition this thought experiment prompts in me is the lexical badness of at least sufficiently intense suffering, compared with happiness or other goods. And I think the reason it prompts such an intuition is that in this version of the veil of ignorance, discrete “persons” don’t get to dictate what package of experiences is worth it, i.e., what happens to the multitude of experience-moments in their life. Instead, one has to take the experience-moments themselves as sovereign, and decide how to handle conflicts among their preferences. (I discuss this more here.)
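One way to make that lexical reading precise, for what it’s worth, is a lexicographic decision rule: first minimize the number of sufficiently intense suffering-moments, and only then maximize aggregate welfare. A sketch, reusing the (welfare, count) representation from above; the threshold is a stipulation, not something the thought experiment fixes:

```python
SUFFERING_THRESHOLD = -50.0  # stipulated cutoff for "sufficiently intense"

def lexical_key(world):
    """Sort key for (welfare, count) worlds: first minimize the number of
    intense-suffering moments, then maximize total welfare."""
    intense = sum(c for w, c in world if w <= SUFFERING_THRESHOLD)
    total = sum(w * c for w, c in world)
    return (intense, -total)  # lower key = better world

def choose(*worlds):
    return min(worlds, key=lexical_key)

# Small version of the choice above: all-neutral vs. mostly-great-plus-misery.
opt1 = [(0.0, 100)]
opt2 = [(1.0, 97), (-100.0, 3)]
assert choose(opt1, opt2) == opt1  # no amount of beach outweighs the misery
```

Under this rule, #1 beats #2 no matter how many beach-moments get added, and the mild-itch variant is exactly where the rule starts to feel contestable: as long as the itch stays above the threshold, the rule still picks the itchy world over any world containing intense suffering.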
I find that the framing of “experience slices” definitely pushes my intuitions in the same direction.
One question I like to think about is whether I’d choose to gain either
(a) a neutral experience, or
(b) a coin flip: reliving all the positive experience slices of my life if heads, and all the negative ones if tails.
My life feels highly net positive, but I’d almost certainly not take option (b). I’d guess a risk-aversion intuition is also being snuck in here, though.
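(A toy version of that suspicion, with made-up welfare totals and an off-the-shelf concave utility function, neither of which models anyone’s actual life: a 50/50 gamble can have positive expected welfare but negative expected utility, so a risk-averse chooser takes the neutral option (a) even over a net-positive gamble.)

```python
import math

# Made-up lifetime totals: positive slices sum to +100, negative to -40,
# so gamble (b) has expected welfare +30 versus 0 for the neutral option (a).
POSITIVE, NEGATIVE = 100.0, -40.0

def u(x, scale=50.0):
    """A generic concave (risk-averse) utility function with u(0) = 0."""
    return 1.0 - math.exp(-x / scale)

expected_welfare = 0.5 * POSITIVE + 0.5 * NEGATIVE        # +30.0
expected_utility = 0.5 * u(POSITIVE) + 0.5 * u(NEGATIVE)  # about -0.18
print(expected_welfare, expected_utility, u(0.0))
# (b) wins on expected welfare but loses to (a) once risk aversion kicks in.
```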