This is an interesting idea that sands off some of the unfortunate Pareto-suboptimal edges of prioritarianism. But it has some problems.
Ex-ante prioritarianism looks good in the example cases given, where it gives an answer that disagrees with regular prioritarianism but agrees with utilitarianism. However, the cases where ex-ante prioritarianism disagrees with both look less good.
For instance, consider an extension of your experiment:
Suppose there are two people who are equally well off, and you are considering benefiting exactly one of them by a fixed amount (the benefit would be the same regardless of who receives it).
Suppose there are two people, A and B, who are equally well off with utility 100, and we have the choice between two options. In Lottery 1, A gets a benefit of 100 with certainty, while B gets nothing. In Lottery 2, A gets 50 with probability 0.4, B gets 50 with probability 0.4, or no one gets anything (probability 0.2).
Prioritarianism prefers Lottery 1 to Lottery 2, since the certain outcome (200, 100) is preferred to an 80% chance that one of them ends up at 150 while the other stays at 100, plus a 20% chance of (100, 100).
Utilitarianism of course prefers Lottery 1, with expected total utility 300, to Lottery 2, with expected total utility 240.
But a sufficiently concave ex-ante prioritarianism prefers Lottery 2, because B's lower expected utility in Lottery 1 (100, versus 120 for each of them in Lottery 2) is weighted more heavily.
It seems perverse to prefer a lottery whose every possible outcome is worse on both utilitarian and prioritarian grounds, just to give B a chance to be the one who comes out on top.
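For concreteness, here is a quick numerical check of the three verdicts above. This is a minimal sketch of my own, assuming one particular, and quite extreme, concave weighting function w(x) = -x^(-4); milder curvatures such as a square root do not flip the ex ante verdict for these numbers, so "sufficiently concave" is doing real work here.

```python
# A minimal sketch checking the three verdicts on Lottery 1 vs Lottery 2.
# Assumption: the weighting function w below is one arbitrary "sufficiently
# concave" choice (CRRA-style with rho = 5); it is not specified in the post.

def w(x):
    """A very concave prioritarian weighting."""
    return -(x ** -4)

# Each lottery is a list of (probability, (utility_A, utility_B)) outcomes.
lottery_1 = [(1.0, (200, 100))]
lottery_2 = [(0.4, (150, 100)), (0.4, (100, 150)), (0.2, (100, 100))]

def utilitarian(lottery):
    # Expected total utility.
    return sum(p * sum(outcome) for p, outcome in lottery)

def ex_post_prioritarian(lottery):
    # Expected value of the prioritarian value of each outcome.
    return sum(p * sum(w(u) for u in outcome) for p, outcome in lottery)

def ex_ante_prioritarian(lottery):
    # Prioritarian value of each person's expected utility.
    expected_a = sum(p * outcome[0] for p, outcome in lottery)
    expected_b = sum(p * outcome[1] for p, outcome in lottery)
    return w(expected_a) + w(expected_b)

for name, value in [("utilitarian", utilitarian),
                    ("ex post prioritarian", ex_post_prioritarian),
                    ("ex ante prioritarian", ex_ante_prioritarian)]:
    v1, v2 = value(lottery_1), value(lottery_2)
    preferred = "Lottery 1" if v1 > v2 else "Lottery 2"
    print(f"{name}: prefers {preferred} ({v1:.4g} vs {v2:.4g})")
# Prints: utilitarian and ex post prioritarian prefer Lottery 1,
# ex ante prioritarian prefers Lottery 2.
```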
I won’t say I’m convinced by my own responses here, but I’ll offer them anyway.
I think B could reasonably claim that Lottery 1 is less fair to them than Lottery 2, while A could not claim that Lottery 2 is less fair to them than Lottery 1 (it benefits them less in expectation, but this is not a matter of fairness). This seems a bit clearer with the understanding that von Neumann-Morgenstern rational agents maximize expected (ex ante) utility, so an individual’s ex ante utility could matter to that individual in itself, and an ex ante view respects this. (And I think the claim that ex post prioritarianism is Pareto-suboptimal may only be meaningful in the context of vNM-rational agents; the universe doesn’t give us a way to make tradeoffs between happiness and suffering (or other values) except through individual preferences. If we’re hedonistic consequentialists, then we can’t refer to preferences or the veil of ignorance to justify classical utilitarianism over hedonistic prioritarianism.)
Furthermore, if you imagine repeating the same lottery with the same individuals and independent probabilities over and over, you'd find that in the long run, Lottery 1 benefits A by 100 per round on average and B by 0, while Lottery 2 benefits each of A and B by 20 per round on average. On these grounds, a prioritarian could reasonably prefer Lottery 2 to Lottery 1. Of course, an ex post prioritarian would come to the same conclusion if they're allowed to consider the whole sequence of independent lotteries and aggregate each individual's utilities across rounds before aggregating over individuals.
(On the other hand, if you repeat Lottery 1 but swap the positions of A and B each time, then Lottery 1 benefits A by 50 on average and B by 50 on average, and this is better than Lottery 2. The utilitarian, ex ante prioritarian, and ex post prioritarian would all agree.)
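The arithmetic above is simple, but here is a small simulation sketch, purely my own illustration, that makes the long-run per-round averages explicit for all three repetition schemes:

```python
import random

# Long-run average benefit per round for A and B under each repetition scheme.
random.seed(0)
ROUNDS = 100_000

def lottery_2_round():
    """One independent play of Lottery 2, returning (benefit_A, benefit_B)."""
    r = random.random()
    if r < 0.4:
        return 50, 0
    elif r < 0.8:
        return 0, 50
    return 0, 0

# Lottery 1 with roles fixed is deterministic: A gets 100, B gets 0, every round.
print("Lottery 1, fixed roles:    A avg 100.0, B avg 0.0")

totals = [0, 0]
for _ in range(ROUNDS):
    a, b = lottery_2_round()
    totals[0] += a
    totals[1] += b
print(f"Lottery 2 repeated:        A avg {totals[0]/ROUNDS:.1f}, B avg {totals[1]/ROUNDS:.1f}")

# Lottery 1 with roles swapped each round: each person gets 100 every other round.
print("Lottery 1, roles swapped:  A avg 50.0, B avg 50.0")
# Long-run averages come out to roughly (100, 0), (20, 20), and (50, 50).
```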
A similar problem is illustrated in "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey & Alex Voorhoeve (I read parts of this after I wrote the post). You can check Table 1 on p. 6 and the surrounding discussion. I'm changing the numbers here. EDIT: I suppose the examples can be used to illustrate the same thing (except for the utilitarian preference for Lottery 1): ex post, you prefer Lottery 1 and would realize you'd made a mistake, and if you find out ahead of time exactly which outcome Lottery 2 would give, you'd also prefer Lottery 1 and want to change your mind.
Suppose there are two diseases, SEVERE and MILD. An individual with SEVERE will have utility 10, while an individual with MILD will have utility 100. If SEVERE is treated, the individual will instead have utility 20, a gain of 10. If MILD is treated, the individual will instead have utility 120, a gain of 20.
Now, suppose there are two individuals, A and B. One will have SEVERE, and the other will have MILD. You can treat either SEVERE or MILD, but not both. Which should you treat?
1. If you know who will have SEVERE with certainty, then with a sufficiently prioritarian view, you should treat SEVERE. To see why, suppose you know A has SEVERE. Then, by treating SEVERE, the utilities would be (20, 100) for A and B, respectively, but by treating MILD, they would be (10, 120). (20, 100) is better than (10, 120) if you’re sufficiently prioritarian. Symmetrically, if you know B has SEVERE, you get (100, 20) for treating SEVERE or (120, 10) for treating MILD, and again it’s better to treat SEVERE.
2. If you think each will have SEVERE or MILD with probability 0.5 each (and one will have SEVERE and the other, MILD), then you should treat MILD. This is because the expected utility if you treat MILD is (10+120)*0.5 = 65 for each individual, while the expected utility if you treat SEVERE is (20+100)*0.5 = 60 for each individual. Treating MILD is ex ante better than treating SEVERE for each of A and B. If neither of them knows who has which, they'd both want you to treat MILD. (A quick numerical check of both cases follows below.)
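Here is a minimal sketch of both cases using the numbers above. It is my own illustration; I'm assuming w = sqrt as the prioritarian weighting, and it turns out that even this mild concavity is "sufficiently prioritarian" for these particular numbers.

```python
from math import sqrt

# Utilities: untreated SEVERE = 10 (20 if treated), untreated MILD = 100 (120 if treated).
# Assumption: w = sqrt as the prioritarian weighting.
w = sqrt

# Case 1: you know A has SEVERE (the case where B has SEVERE is symmetric).
treat_severe = w(20) + w(100)   # outcome (20, 100)
treat_mild = w(10) + w(120)     # outcome (10, 120)
print("Known case: treat", "SEVERE" if treat_severe > treat_mild else "MILD")

# Case 2: each of A and B has SEVERE with probability 0.5.
# Ex ante, both have the same expected utility, so compare per person.
ea_treat_mild = 0.5 * 10 + 0.5 * 120     # = 65
ea_treat_severe = 0.5 * 20 + 0.5 * 100   # = 60
print("Unknown case: treat",
      "MILD" if 2 * w(ea_treat_mild) > 2 * w(ea_treat_severe) else "SEVERE")
# Prints: treat SEVERE when you know who has which, treat MILD when you don't.
```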
What’s the difference from your point of view between 1 and 2? Extra information in 1. In 1, whether you find out that A will have SEVERE or B will have SEVERE, it’s better to treat SEVERE. So, no matter which you learn is the case in reality, it’s better to treat SEVERE. But if you don’t know, it’s better to treat MILD.
So, in your ignorance, you would treat MILD, but if you found out who had SEVERE and who had MILD, no matter which way it went, you'd realize you had made a mistake. You also know that seeking out the information about who has which disease ahead of time, no matter which way it goes, will cause you to change your mind about which disease to treat. EDIT: I suppose both of these statements are true of your example. Ex post, you prefer Lottery 1 and would realize you'd made a mistake, and if you find out ahead of time exactly which outcome Lottery 2 would give, you'd also prefer Lottery 1.