Sorry, that was unclear: I meant the subjective probabilities of the person using the ethical system (“you”) applied to everyone, rather than each individual using their own subjective probabilities.
Allowing each individual to use their own subjective probabilities would be interesting, but it would have problems like the one you point out. It could respect individual autonomy further, especially for von Neumann-Morgenstern rational agents with vNM utility as our measure of wellbeing: we would rank choices for them (ignoring other individuals) exactly as they would rank those choices for themselves. However, I’m doubtful that this would make up for such issues. Furthermore, many individuals don’t have subjective probabilities about most of the things that matter for ethical deliberation in practical cases; I suspect this includes most people and all nonhuman animals.
Another problematic example would be healthcare professionals (policy makers, doctors, etc.) using the subjective probabilities of patients instead of subjective probabilities informed by actual research (or even their own experience as professionals).
I meant the subjective probabilities of the person using the ethical system (“you”) applied to everyone, rather than each individual using their own subjective probabilities.
I see. :) It seems like we’d still have the same problem I mentioned. For example, I might think that currently elderly people signed up for cryonics have very high expected lifetime utility relative to those who aren’t signed up, because of the possibility of being revived (assuming positive revival futures outweigh negative ones), so that helping them is relatively unimportant. But then suppose it turns out that cryopreserved people are never actually revived.
(This example is overly simplistic, but the point is that you can get scenarios similar to my original one while still having “reasonable” beliefs about the world.)
Tbh, I find this fairly intuitive (under the assumption that something like closed individualism is true and cryonics would preserve identity). You can think of it as decreasing marginal value of expected utility (analogous to the decreasing marginal value of income/wealth): people whose lives have higher expected utility should be given (slightly) less weight.
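To spell out the analogy (my own notation, not anything standard from this thread): ex ante prioritarianism values a population as

$$V_{\text{ex ante}} = \sum_i f\big(\mathbb{E}[u_i]\big),$$

with $f$ concave, so the marginal value of improving person $i$’s prospects is roughly $f'(\mathbb{E}[u_i])$, which shrinks as their expected utility grows, much like the marginal value of an extra dollar shrinks as income grows.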
If they do eventually get revived and we had spent significant resources on them, that could mean we prioritized the wrong people. We could be wrong either way.
Good point, but I feel like ex post prioritarianism handles the allocation better by being risk-averse (even though this is what Ord criticizes about it in the 2015 paper you cited in the OP). Imagine that someone has a 1/3^^^3 probability of 3^^^^3 utility. Ex ante prioritarianism says this person’s expected utility is so enormous that there’s no need to benefit them at all, even if doing so would be almost costless. Suppose that with probability 1 − 1/3^^^3, this person has a painful congenital disease, grows up in poverty, is captured and tortured for years on end, and dies at age 25. Ex ante prioritarianism (say with a sqrt or log function for f) says that if we could spend $0.01 to prevent all of that suffering, we needn’t bother, because other uses of the money would be more cost-effective, even though it’s basically guaranteed that this person’s life will be nothing but horrible. Ex post prioritarianism gets what I consider the right answer, because the reduction of torment is not buried into nothingness by the f function: the expectation is taken over the two scenarios separately, with f applied to each outcome before averaging.
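To make that contrast concrete, here’s a rough numerical sketch (my own toy numbers: tame stand-ins for 1/3^^^3 and 3^^^^3 so the arithmetic doesn’t overflow, and f = sqrt). The ex ante view evaluates f applied to the person’s expected utility, the ex post view evaluates the expectation of f applied to each outcome, and only the latter registers the cheap intervention:

```python
import math

# Toy stand-ins (purely illustrative): a tiny chance of an enormous payoff,
# an otherwise miserable life, and a cheap intervention that improves the
# miserable outcome.
f = math.sqrt        # concave priority function (stand-in for sqrt/log)
p_good = 1e-8        # stand-in for 1/3^^^3
u_good = 1e16        # stand-in for 3^^^^3
u_bad = 1.0          # utility of the near-certain, horrible life
benefit = 24.0       # utility the intervention adds to the bad outcome

def ex_ante(extra):
    """f of expected utility: f(E[U])."""
    return f(p_good * u_good + (1 - p_good) * (u_bad + extra))

def ex_post(extra):
    """Expected value of f of utility: E[f(U)]."""
    return p_good * f(u_good) + (1 - p_good) * f(u_bad + extra)

print(ex_ante(benefit) - ex_ante(0.0))  # ~0.001: the intervention looks nearly worthless
print(ex_post(benefit) - ex_post(0.0))  # ~4.0: the intervention matters a lot
```

The numbers are arbitrary, but the pattern is the same one the 3^^^3 version exaggerates: applying the concave f after taking the expectation buries the near-certain suffering under the astronomical tail.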
I guess an ex ante supporter could say that if someone chooses the 1/3^^^3 gamble and it doesn’t work out, that’s the price they pay for taking the risk. But that stance feels pretty harsh.
I agree that this feels too harsh. My first reaction to the extreme numbers would be to claim that expected values are actually not the right way to deal with uncertainty (without offering a better alternative). But I think you could use a probability of 0.1 for an amazing life (even an infinitely good one), and I would arrive at the same conclusion: giving them little weight is too harsh. Because this remains true, in my view, no matter how great the value of the amazing life, I do think this is still a problem for expected values, or at least for expected values applied directly to affective wellbeing.
I also lean towards a preference-based account of wellbeing, which allows individuals to be risk-averse. Some people are just not that risk-averse, and (if something like closed individualism were true and their preferences never changed) giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse. However, I also suspect most people wouldn’t value anything at ≥ 3^^^^3 (or ≤ −3^^^^3, for that matter) if they were vNM-rational, and most of them are probably risk-averse to some degree.
Maybe ex ante prioritarianism makes more sense with a preference-based account of wellbeing?
Also, FWIW, it’s possible to blend ex ante and ex post views: an individual’s actual utility (treated as a random variable) and their expected utility could be combined in some way (weighted average, minimum of the two, etc.) before applying f, aggregating, and taking the expected value. This seems very ad hoc, though.
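One way to write that down (again my own formalization, just to illustrate the idea): with $U_i$ person $i$’s actual utility as a random variable,

$$V = \mathbb{E}\Big[\sum_i f\big(\alpha\, U_i + (1-\alpha)\,\mathbb{E}[U_i]\big)\Big], \quad \alpha \in [0,1],$$

which reduces to the ex post view at $\alpha = 1$ and to the ex ante view at $\alpha = 0$; replacing the weighted average with $\min(U_i, \mathbb{E}[U_i])$ gives the minimum variant.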
Interesting. :)
giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse.
I was thinking that it’s not just a matter of risk aversion, because regular utilitarianism would also favor helping the person with a terrible life if doing so were cheap enough. The perverse behavior of the ex ante view in my example comes from the concave f function.