If I understand the view correctly, it would say that a world where everyone has a 49.99% chance of experiencing pain with utility of −10^1000 and a 50.01% chance of experiencing pleasure with utility of 10^1000 is fine, but as soon as anyone’s probability of the pain goes above 50%, things start to become very worrisome (assuming the prioritarian weighting function cares a lot more about negative than positive values)?
Yes, although it’s possible that even a single individual having a 100% probability of pain might not outweigh the pleasure of the others, if the number of other individuals is large enough and the social welfare function is sufficiently continuous and “additive”, e.g. it takes the form S(V) = ∑_{i : v_i ∈ V} f(v_i) for some f: ℝ → ℝ strictly increasing everywhere.
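As a toy illustration of this kind of additive social welfare function (the transform f below is an assumed example, concave and strictly increasing with extra weight on suffering; the specific numbers are made up, not from the discussion):

```python
import math

def f(v):
    # Assumed strictly increasing, concave transform that weights
    # suffering more heavily than pleasure (prioritarian flavor).
    return math.sqrt(v) if v >= 0 else -2.0 * math.sqrt(-v)

def social_welfare(utilities):
    # Additive form: S(V) = sum over i of f(v_i)
    return sum(f(v) for v in utilities)

# One person with certain pain (-100) alongside others with pleasure
# (+100 each): with enough others, the sum still comes out positive.
print(social_welfare([-100.0, 100.0]))         # negative
print(social_welfare([-100.0] + [100.0] * 5))  # positive
```

With f(-100) = -20 and f(100) = 10, one pleased person can’t outweigh the sufferer, but five can, which is the “large enough number of other individuals” point above.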
What probability distribution are the expectations taken with respect to? If you were God and knew everything that would happen, there would be no uncertainty (except maybe due to quantum randomness depending on one’s view about that). If there’s no randomness, I think ex ante prioritarianism collapses to regular prioritarianism.
I intended for your own subjective probability distribution to be used, but what you say here leads to some more weird examples, besides the collapse to regular prioritarianism (possibly after first aggregating each individual’s actual utilities within that individual, before aggregating across individuals):
I’ve played a board game where the player who gets to go first is the one who has the pointiest ears. The value of this outcome would be different if you knew ahead of time who this would be compared to if you didn’t. In particular, if there were a morally significant tradeoff between utilities, then this rule could be better or worse than a more (subjectively) random choice, depending on whether the worse-off players are expected to benefit more or less. Of course, for utilitarians too, a random selection could be better or worse than one whose actual outcome you know in advance, but there are some differences.
For ex ante prioritarianism, this is also the case before and after you learn the outcome of the rolls of dice or coin flips; once you learn the outcome of the random selection, it’s no longer random, and the value of following through with it changes. In particular, if each person had the same wellbeing before the dice were rolled and stood to gain or lose the same amount if they won (regardless of the selection process), then random selection would be optimal and better than any fixed selection whose outcome you know in advance; but once you know the outcome of the random selection process, before you apply it, it reduces to a particular rule whose outcome you know in advance.
One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do “I” stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.
Yes, I think it’s basically the same issue. If we can use something like spatiotemporal continuity (I am doubtful that this can be made precise and coherent enough in a way that’s very plausible), then we could start before a person is even conceived. Right before conception, the sperm cells and ova could be used to determine the identities of the potential future people. Before the sperm cell used in conception even exists, you could imagine two sperm cells with different physical (spatiotemporal) origins in different outcomes that happen to carry the same genetic information, and you might consider the outcomes in which one is used to involve a different person than the outcomes in which the other is. Of course, you might have to divide up these two groups of outcomes further still. For example, you wouldn’t want to treat identical twins as a single individual, even if they originated from some common group of cells.
your own subjective probability distribution to be used
Would that penalize people who hold optimistic beliefs? Their expected utilities would often be pretty high, so it’d be less important to help them. As an extreme example, someone who expects to spend eternity in heaven would already be so well off that it would be pointless to help him/her, relative to helping an atheist who expects to die at age 75. That’s true even if the believer in heaven gets a terminal disease at age 20 and dies with no afterlife.
Sorry, that was unclear: I meant the subjective probabilities of the person using the ethical system (“you”) applied to everyone, not using their own subjective probabilities.
Allowing each individual to use their own subjective probabilities would be interesting, but it would have problems like the one you point out. It could respect individual autonomy further, especially for von Neumann–Morgenstern (vNM) rational agents with vNM utility as our measure of wellbeing; we would rank choices for them (ignoring other individuals) exactly as they would rank these choices for themselves. However, I’m doubtful that this would make up for such issues. Furthermore, many individuals don’t have subjective probabilities about most things that would be important for ethical deliberation in practical cases, including, I suspect, most people and all nonhuman animals.
Another problematic example would be healthcare professionals (policy makers, doctors, etc.) using the subjective probabilities of patients instead of subjective probabilities informed by actual research (or even their own experience as professionals).
I meant the subjective probabilities of the person using the ethical system (“you”) applied to everyone, not using their own subjective probabilities.
I see. :) It seems like we’d still have the same problem as I mentioned. For example, I might think that currently elderly people signed up for cryonics have very high expected lifetime utility relative to those who aren’t signed up because of the possibility of being revived (assuming positive revival futures outweigh negative ones), so helping currently elderly people signed up for cryonics is relatively unimportant. But then suppose it turns out that cryopreserved people are never actually revived.
(This example is overly simplistic, but the point is that you can get similar scenarios as my original one while still having “reasonable” beliefs about the world.)
Tbh, I find this fairly intuitive (under the assumption that something like closed individualism is true and cryonics would preserve identity). You can think about it as decreasing marginal value of expected utility (by analogy with the decreasing marginal value of income/wealth), so people who have higher EU for their lives should be given (slightly) less weight.
If they do eventually get revived, and we had spent significant resources on them, this could mean we prioritized the wrong people. We could be wrong either way.
Good point, but I feel like ex post prioritarianism does the allocation better, by being risk-averse (even though this is what Ord criticizes about it in the 2015 paper you cited in the OP). Imagine that someone has a 1/3^^^3 probability of 3^^^^3 utility. Ex ante prioritarianism says the expected utility is so enormous that there’s no need to benefit this person at all, even if doing so would be almost costless. Suppose that with probability 1 − 1/3^^^3, this person has a painful congenital disease, grows up in poverty, is captured and tortured for years on end, and dies at age 25. Ex ante prioritarianism (say with a sqrt or log function for f) says that if we could spend $0.01 to prevent all of that suffering, we needn’t bother because other uses of the money would be more cost-effective, even though it’s basically guaranteed that this person’s life will be nothing but horrible. Ex post prioritarianism gets what I consider the right answer because the reduction of torment is not buried into nothingness by the f function: the expected-value calculation weighs the two scenarios separately, applying f to each one’s utility on its own.
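To make the contrast concrete with tamer stand-in numbers (the probabilities, utilities, and transform f below are all assumptions for illustration, not anything from the cited paper): ex ante applies f to the expectation, while ex post takes the expectation of f.

```python
import math

def f(v):
    # Assumed concave prioritarian transform: sqrt on gains, steeper on losses.
    return math.sqrt(v) if v >= 0 else -2.0 * math.sqrt(-v)

p_good = 1e-9   # stand-in for the tiny 1/3^^^3 probability
u_good = 1e18   # stand-in for the astronomical 3^^^^3 utility
u_bad = -100.0  # the near-certain life of suffering

eu = p_good * u_good + (1 - p_good) * u_bad

ex_ante = f(eu)                                          # f of the expectation
ex_post = p_good * f(u_good) + (1 - p_good) * f(u_bad)   # expectation of f

print(ex_ante)  # large and positive: the person looks very well off
print(ex_post)  # negative: the near-certain suffering dominates
```

The tiny chance of a huge payoff makes the expected utility enormous, so the ex ante score is large and positive and the person gets deprioritized; the ex post score applies f inside the expectation, so the near-certain bad branch dominates and the person gets priority.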
I guess an ex ante supporter could say that if someone chooses the 1/3^^^3 gamble and it doesn’t work out, that’s the price you pay for taking the risk. But that stance feels pretty harsh.
I agree that this feels too harsh. My first reaction to the extreme numbers would be to claim that expected values are actually not the right way to deal with uncertainty (without offering a better alternative). I think you could use a probability of 0.1 for an amazing life (even infinitely good), and I would arrive at the same conclusion: giving them little weight is too harsh. Because this remains true in my view no matter how great the value of the amazing life, I do think this is still a problem for expected values, or at least expected values applied directly to affective wellbeing.
I also do lean towards a preference-based account of wellbeing, which allows individuals to be risk-averse. Some people are just not that risk-averse, and (if something like closed individualism were true and their preferences never changed), giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse. However, I also suspect most people wouldn’t value anything at values ≥ 3^^^^3 (or ≤ −3^^^^3, for that matter) if they were vNM-rational, and most of them are probably risk-averse to some degree.
Maybe ex ante prioritarianism makes more sense with a preference-based account of wellbeing?
Also, FWIW, it’s possible to blend ex ante and ex post views. An individual’s actual utility (treated as a random variable) and their expected utility could be combined in some way (weighted average, minimum of the two, etc.) before aggregating and taking the expected value. This seems very ad hoc, though.
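A sketch of one such blend (everything here, including the weighted-average combination and the transform f, is an illustrative assumption): within each scenario, mix each person’s realized utility with their ex ante expected utility, apply f, sum across individuals, then take the expectation across scenarios.

```python
import math

def f(v):
    # Assumed concave, strictly increasing prioritarian transform.
    return math.sqrt(v) if v >= 0 else -2.0 * math.sqrt(-v)

def blended_value(scenarios, weight=0.5):
    """scenarios: list of (probability, utilities-per-person) pairs.
    weight=1 recovers a purely ex post view; weight=0 a purely ex ante one."""
    n = len(scenarios[0][1])
    # Each person's ex ante expected utility across scenarios.
    eu = [sum(p * us[i] for p, us in scenarios) for i in range(n)]
    total = 0.0
    for p, us in scenarios:
        total += p * sum(f(weight * us[i] + (1 - weight) * eu[i])
                         for i in range(n))
    return total

# One person facing a fair 50/50 gamble between +10 and -10:
gamble = [(0.5, [10.0]), (0.5, [-10.0])]
print(blended_value(gamble, weight=0.0))  # ex ante: f(0) = 0
print(blended_value(gamble, weight=1.0))  # ex post: negative (risk-averse)
```

The endpoints show the two views it interpolates between; intermediate weights give the ad hoc mixtures described above.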
giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse.
I was thinking that it’s not just a matter of risk aversion, because regular utilitarianism would also favor helping the person with a terrible life if doing so were cheap enough. The perverse behavior of the ex ante view in my example comes from the concave f function.
Interesting. :)