Tbh, I find this fairly intuitive (under the assumption that something like closed individualism is true and cryonics would preserve identity). You can think about it like decreasing marginal value of expected utility (analogous to the decreasing marginal value of income/wealth), so people who have higher EU for their lives should be given (slightly) less weight.
If they do eventually get revived and we had spent significant resources on them, that could mean we prioritized the wrong people. We could be wrong either way.
Good point, but I feel like ex post prioritarianism handles the allocation better, by being risk-averse (even though this is what Ord criticizes about it in the 2015 paper you cited in the OP). Imagine that someone has a 1/3^^^3 probability of 3^^^^3 utility. Ex ante prioritarianism says this person's expected utility is already so enormous that there's almost no priority in benefiting them at all, even if doing so would be almost costless. Suppose that with probability 1 − 1/3^^^3, this person has a painful congenital disease, grows up in poverty, is captured and tortured for years on end, and dies at age 25. Ex ante prioritarianism (say with a sqrt or log function for f) says that if we could spend $0.01 to prevent all of that suffering, we needn't bother, because other uses of the money would be more cost-effective, even though it's basically guaranteed that this person's life will be nothing but horrible. Ex post prioritarianism gets what I consider the right answer because the reduction of torment is not buried into nothingness by the f function: the expected-value calculation weighs the two scenarios separately, applying the f function to each outcome's utility on its own.
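To make the mechanism explicit (a sketch in my own notation: $p$ for the probability of the good outcome, $u_\text{good}$ and $u_\text{bad}$ for the two outcomes' utilities, $b$ for a small benefit to the bad outcome, $f$ concave and increasing):

$$\text{ex ante: } f\big(p \, u_\text{good} + (1-p) \, u_\text{bad}\big) \qquad\qquad \text{ex post: } p \, f(u_\text{good}) + (1-p) \, f(u_\text{bad})$$

With $p = 1/3\uparrow\uparrow\uparrow 3$ and $u_\text{good} = 3\uparrow\uparrow\uparrow\uparrow 3$, the expected utility $\mathbb{E}[U] \approx p \, u_\text{good}$ is still astronomically large, so giving $b$ to the bad outcome is worth only $f(\mathbb{E}[U] + (1-p)\,b) - f(\mathbb{E}[U]) \approx 0$ on the ex ante view, because $f$ has flattened out. On the ex post view, the same benefit is worth $(1-p)\big(f(u_\text{bad} + b) - f(u_\text{bad})\big)$, which keeps nearly its full weight since $1 - p \approx 1$.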
I guess an ex ante supporter could say that if someone chooses the 1/3^^^3 gamble and it doesn’t work out, that’s the price you pay for taking the risk. But that stance feels pretty harsh.
I agree that this feels too harsh. My first reaction to the extreme numbers would be to claim that expected values are actually not the right way to deal with uncertainty (without offering a better alternative). But the extreme numbers aren't essential: even with a probability of 0.1 of an amazing life (even an infinitely good one), I would arrive at the same conclusion that giving them little weight is too harsh. Because this remains true in my view no matter how great the value of the amazing life, I do think this is still a problem for expected values, or at least for expected values applied directly to affective wellbeing.
I also lean towards a preference-based account of wellbeing, which allows individuals to be risk-averse. Some people are just not that risk-averse, and (if something like closed individualism were true and their preferences never changed) giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse. However, I also suspect most people wouldn't value anything at ≥ 3^^^^3 (or ≤ −3^^^^3, for that matter) if they were vNM-rational, and most of them are probably risk-averse to some degree.
Maybe ex ante prioritarianism makes more sense with a preference-based account of wellbeing?
Also, FWIW, it’s possible to blend ex ante and ex post views. An individual’s actual utility (treated as a random variable) and their expected utility could be combined in some way (weighted average, minimum of the two, etc.) before aggregating and taking the expected value. This seems very ad hoc, though.
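For concreteness, the weighted-average version could look like this (my notation, with $\alpha \in [0,1]$ interpolating between the two views):

$$V = \mathbb{E}\left[\sum_i f\big(\alpha \, U_i + (1-\alpha) \, \mathbb{E}[U_i]\big)\right]$$

Here $\alpha = 1$ recovers ex post prioritarianism, and $\alpha = 0$ recovers ex ante prioritarianism (the inner term collapses to the constant $\mathbb{E}[U_i]$); the minimum version would just replace the weighted average with $\min(U_i, \mathbb{E}[U_i])$.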
Interesting. :)

> giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse.
I was thinking that it’s not just a matter of risk aversion, because regular utilitarianism would also favor helping the person with a terrible life if doing so were cheap enough. The perverse behavior of the ex ante view in my example comes from the concave f function.
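To spell that out with the notation from my sketch above: with a linear $f$, $f(\mathbb{E}[U]) = \mathbb{E}[f(U)]$, so the ex ante/ex post distinction vanishes and the cheap benefit keeps its full expected weight of $(1-p)\,b \approx b$.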