Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests
Introduction
It seems that "ex ante" views (like ex ante prioritarianism) haven't been discussed much within the EA community. Basically, the approach is to aggregate the utility within each individual first, over their life and by taking the expectation, and then apply whatever social welfare function you like to the resulting individually aggregated utilities.
Furthermore, you could take these individual aggregations/expectations conditional on existence (past, current or future), and only include the terms for actual (past, current or future) individuals; so the set of individuals to aggregate over would be a random variable. You'd then take another expectation, this time of the social welfare function applied to these aggregated utilities over the set of existing individuals.
The main benefit here is to avoid objections of overriding individual interests while still being prioritarian or negative-leaning, since we can treat personal and interpersonal tradeoffs differently.
Math formalism
We define u_i to be the aggregated utility of individual i over all time (or just the future), in a given determined outcome (no expectations applied yet); in the outcomes in which they haven't existed and won't exist, u_i is left undefined. Then we define

v_i = E[u_i | individual i exists (past, current or future)],

and we apply our social welfare function S to the set

V = {v_i : individual i exists}.

E.g., S(V) = Σ_{i : v_i ∈ V} f(v_i) for some function f which is increasing (or non-decreasing) and concave. Some examples here. Total utilitarianism has f(v) = v for all v, and the ex ante view applied to it actually makes no difference. A fairly strong form of negative utilitarianism could be defined by f(v) = min(v, 0) for all v, i.e. f(v) = v if v ≤ 0 and f(v) = 0 otherwise; this means that as long as an individual is expected to have a good life (net positive value), what happens to them doesn't matter, or could be lexically dominated by concerns for those expected to have negative lives (i.e. only if we can't improve any negative lives can we look to improving positive ones).
Finally, we rank decisions based on the expectation of S(V) over the outcomes determining which individuals exist: E[S(V)].
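As a minimal sketch of the two evaluation orders (the function names and outcome representation are my own, not from any library), a lottery can be represented as a list of (probability, outcome) pairs, where an outcome maps each individual to a realized lifetime utility. For simplicity, this assumes every individual exists in every outcome; handling a random population would require the extra expectation over existing individuals described above.

```python
def ex_ante_value(lottery, f):
    """Aggregate within each individual first (v_i = E[u_i]),
    then apply the priority weighting f and sum: S(V) = sum_i f(v_i)."""
    individuals = lottery[0][1].keys()
    expected = {i: sum(p * out[i] for p, out in lottery) for i in individuals}
    return sum(f(v) for v in expected.values())

def ex_post_value(lottery, f):
    """Apply f within each realized outcome first, then take the
    expectation: E[ sum_i f(u_i) ]."""
    return sum(p * sum(f(u) for u in out.values()) for p, out in lottery)

# A 50/50 lottery over which of two people gets a benefit of 10:
lottery = [(0.5, {"A": 10, "B": 0}), (0.5, {"A": 0, "B": 10})]
```

With f the identity (total utilitarianism) the two values coincide, matching the remark above that the ex ante view makes no difference there; with a concave f they come apart.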
Consequences
We can be prioritarian or negative-leaning and still avoid overriding individual interests; we don't give greater weight to the bad over the good within any individual's life, but we do give greater weight to bad lives over good lives. Personal and interpersonal tradeoffs are treated differently. You would be permitted, under an ex ante prioritarian or negative-leaning view, to choose great suffering together with great bliss, or to risk great suffering for great bliss, but you can't impose great suffering on one person to give great bliss to another (depending on the exact form of the social welfare function).
Let's look at an illustrative example where the ex ante view disagrees with the usual one, taken from "Prioritarianism and the Separateness of Persons" by Michael Otsuka (2012):
Two-person case with risk and inversely correlated outcomes: There are two people, each of whom you know will develop either the very severe or the slight impairment and each of whom has an equal chance of developing either impairment. You also know that their risks are inversely correlated: i.e., whenever one of them would suffer the very severe impairment, then the other would suffer the slight impairment. You can either supply both with a treatment that will surely improve a recipient's situation if and only if he turns out to suffer the very severe impairment or supply both with a treatment that will surely improve a recipient's situation if and only if he turns out to suffer the slight impairment. An effective treatment for the slight impairment would provide a somewhat greater increase in utility than would an effective treatment for the very severe impairment.
An ex ante prioritarian would choose to treat the slight impairment, while the usual prioritarian who does not first aggregate or take expectations over the individual would choose to treat the very severe impairment. From the point of view of each individual, treating the slight impairment would be preferable.
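To make Otsuka's case concrete, here is a small numerical sketch. The specific utility levels (severe impairment = 10, slight = 90), the treatment effects (+10 for severe, +15 for slight, per the stipulation that treating the slight impairment yields a somewhat greater benefit), and the choice f = sqrt are all my own illustrative assumptions.

```python
import math

def outcomes(treat):
    """Two equally likely, inversely correlated outcomes: one person ends up
    severe, the other slight. Utility numbers are assumed for illustration."""
    severe, slight = 10, 90
    if treat == "severe":
        severe += 10   # treating the severe impairment adds 10
    else:
        slight += 15   # treating the slight impairment adds somewhat more
    return [(0.5, (severe, slight)), (0.5, (slight, severe))]

def ex_ante(treat, f):
    # aggregate within each person first: v_i = E[u_i], then sum f(v_i)
    outs = outcomes(treat)
    v = [sum(p * o[i] for p, o in outs) for i in range(2)]
    return sum(f(x) for x in v)

def ex_post(treat, f):
    # apply f within each realized outcome, then take the expectation
    return sum(p * (f(a) + f(b)) for p, (a, b) in outcomes(treat))

f = math.sqrt  # an illustrative concave priority weighting

best_ex_ante = max(["severe", "slight"], key=lambda t: ex_ante(t, f))
best_ex_post = max(["severe", "slight"], key=lambda t: ex_post(t, f))
```

With these numbers, each person's expected utility is 57.5 if the slight impairment is treated versus 55 if the severe one is, so any increasing f makes the ex ante view treat the slight impairment, while the ex post prioritarian value favors treating the severe one.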
For what it's worth, under empty individualism (the view that one physical person over time should really be treated as a sequence of distinct individuals from moment to moment, person-moments), applying this ex ante modification actually doesn't make any difference. It'll look like we're overriding preferences, but under empty individualism, there are only interpersonal tradeoffs, no personal tradeoffs. See also.
References and other reading
"Prioritarianism and the Separateness of Persons" by Michael Otsuka (2012) describes this approach, gives examples and raises some objections to it.
That issue of Utilitas is focused on prioritarianism, with a paper by Parfit which also discusses ex ante views (I have yet to read it).
Toby Ord's objections to prioritarianism and negative utilitarianism, which do not apply to the ex ante view.
This is an interesting idea that sands off some of the unfortunate Pareto-suboptimal edges of prioritarianism. But it has some problems.
Ex-ante prioritarianism looks good in the example cases given, where it gives an answer that disagrees with regular prioritarianism but agrees with utilitarianism. However, the cases where ex-ante prioritarianism disagrees with both regular prioritarianism and utilitarianism are less flattering for it.
For instance, consider an extension of your experiment:
Suppose there are two people who are equally well off, and you are considering benefitting exactly one of them by a fixed given amount (the amount of benefit would be the same regardless of who receives it).
Suppose there are two people, A and B, who are equally well off with utility 100. Suppose we have the choice between two options. In Lottery 1, A gets a benefit of 100 with certainty, while B gets nothing. In Lottery 2, either A gets 50 (probability 0.4), B gets 50 (probability 0.4), or no one gets anything (probability 0.2).
Prioritarianism prefers Lottery 1 to Lottery 2, since one person having a welfare of 100 and the other a welfare of 200 is preferred to an 80% chance of (150, 100) and a 20% chance of (100, 100).
Utilitarianism of course prefers the outcome with expected utility 300 to the outcome with expected utility 240.
But a sufficiently concave ex-ante prioritarianism prefers Lottery 2, because B's lower expected value in Lottery 1 is weighted more highly.
It seems perverse to prefer an outcome which is with certainty worse both on utilitarian and prioritarian grounds just to give B a chance to be the one who is on top.
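The three verdicts in this example can be checked numerically. The extremely concave f below is my own choice for illustration; mildly concave functions like sqrt or log turn out not to flip the ex ante ranking here, so "sufficiently concave" is doing real work.

```python
def ex_ante(lottery, f):
    # per-person expected utilities first, then the priority weighting f
    ev = [sum(p * out[i] for p, out in lottery) for i in range(2)]
    return sum(f(v) for v in ev)

def ex_post(lottery, f):
    # f applied within each realized outcome, then the expectation
    return sum(p * (f(a) + f(b)) for p, (a, b) in lottery)

# Base welfare 100 each; benefits added directly to the recipient.
lottery1 = [(1.0, (200, 100))]  # A gets 100 for sure
lottery2 = [(0.4, (150, 100)), (0.4, (100, 150)), (0.2, (100, 100))]

f = lambda x: -x ** -10  # an extremely concave f, close to leximin

util1 = ex_post(lottery1, lambda x: x)   # expected total utility 300
util2 = ex_post(lottery2, lambda x: x)   # expected total utility 240
prefers_l1_ex_post = ex_post(lottery1, f) > ex_post(lottery2, f)
prefers_l2_ex_ante = ex_ante(lottery2, f) > ex_ante(lottery1, f)
```

Ex ante, Lottery 1 gives expected utilities (200, 100) and Lottery 2 gives (120, 120), so a near-leximin f prefers Lottery 2 even though both utilitarianism and ex post prioritarianism prefer Lottery 1.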
I won't say I'm convinced by my own responses here, but I'll offer them anyway.
I think B could reasonably claim that Lottery 1 is less fair to them than Lottery 2, while A could not claim that Lottery 2 is less fair to them than Lottery 1 (it benefits them less in expectation, but this is not a matter of fairness). This seems a bit clearer with the understanding that von Neumann-Morgenstern rational agents maximize expected (ex ante) utility, so an individual's ex ante utility could matter to that individual in itself, and an ex ante view respects this. (And I think the claim that ex post prioritarianism is Pareto-suboptimal may only be meaningful in the context of vNM-rational agents; the universe doesn't give us a way to make tradeoffs between happiness and suffering (or other values) except through individual preferences. If we're hedonistic consequentialists, then we can't refer to preferences or the veil of ignorance to justify classical utilitarianism over hedonistic prioritarianism.)
Furthermore, if you imagine repeating the same lottery with the same individuals and independent probabilities over and over, you'd find that in the long run, in Lottery 1, A would benefit by 100 on average and B by 0 on average, while in Lottery 2, A would benefit by 20 on average and B by 20 on average. On these grounds, a prioritarian could reasonably prefer Lottery 2 to Lottery 1. Of course, an ex post prioritarian would come to the same conclusion if they're allowed to consider the whole sequence of independent lotteries and aggregate each individual's utilities within that individual before aggregating over individuals.
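The long-run averages for the repeated Lottery 2 can be sanity-checked with a quick simulation (illustrative only; the exact per-round averages are 0.4 × 50 = 20 for each person by linearity of expectation):

```python
import random

random.seed(0)
trials = 100_000
a_total = b_total = 0.0
for _ in range(trials):
    r = random.random()
    if r < 0.4:        # A wins the benefit of 50
        a_total += 50
    elif r < 0.8:      # B wins the benefit of 50
        b_total += 50
    # otherwise (probability 0.2) no one gets anything

a_avg = a_total / trials  # converges to about 20
b_avg = b_total / trials  # converges to about 20
```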
(On the other hand, if you repeat Lottery 1 but swap the positions of A and B each time, then Lottery 1 benefits A by 50 on average and B by 50 on average, and this is better than Lottery 2. The utilitarian, ex ante prioritarian and ex post prioritarian would all agree.)
A similar problem is illustrated in "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey & Alex Voorhoeve (I read parts of this after I wrote the post). You can check Table 1 on p. 6 and the surrounding discussion. I'm changing the numbers here. EDIT: I suppose the examples can be used to illustrate the same thing (except for the utilitarian preference for Lottery 1): ex post you prefer Lottery 1 and would realize you'd made a mistake, and if you found out ahead of time exactly which outcome Lottery 2 would give, you'd also prefer Lottery 1 and want to change your mind.
So, in your ignorance, you would treat MILD, but if you found out who had SEVERE and who had MILD, no matter which way it goes, you'd realize you had made a mistake. You also know that seeking out this information ahead of time, no matter which way it goes, will cause you to change your mind about which disease to treat.
Interesting ideas. :)
If I understand the view correctly, it would say that a world where everyone has a 49.99% chance of experiencing pain with utility of −10^1000 and a 50.01% chance of experiencing pleasure with utility of 10^1000 is fine, but as soon as anyone's probability of the pain goes above 50%, things start to become very worrisome (assuming the prioritarian weighting function cares a lot more about negative than positive values)? This is despite the fact that in terms of realized outcomes, the difference between one person having a 49.99% chance of the pain vs 50.01% is pretty minimal.
What probability distribution are the expectations taken with respect to? If you were God and knew everything that would happen, there would be no uncertainty (except maybe due to quantum randomness, depending on one's view about that). If there's no randomness, I think ex ante prioritarianism collapses to regular prioritarianism.
One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else start existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.
Here are some interesting examples I thought of. If I rearranged someone's brain cells (and maybe atoms) to basically make a (possibly) completely different brain structure with (possibly) completely different memories and personality, should we consider these different individuals? Consider the following cases:
1. What if all brain function stops, I rearrange their brain, and then brain function starts again?
2. What if all brain function stops, I rearrange their brain to have the same structure (and memories and personality), but with each atom/cell in a completely different area from where it started, and then brain function starts again?
3. What if all brain function stops, the cells and atoms move or change as they naturally would without my intervention, and then brain function starts again?
To me, 1 clearly brings about a completely different individual, and unless we're willing to say that two physically separate people with the same brain structure, memories and personality are actually one individual, I think 2 should also bring about a completely different individual. 3 differs from 1 and 2 only by degree of change, so I think it should bring about a completely different individual, too.
What this tells me is that if we're going to use some kind of continuity to track identity at all, it should also include continuity of conscious experiences. Then we have to ask:
Are there frequent (e.g. daily) discontinuities or breaks in a person's conscious experiences?
Whether there are or not, should our theory of identity even depend on this fact? If it happened to be the case that sleep involved such discontinuities/breaks and people woke up as completely different individuals, would our theory of identity be satisfactory?
Maybe a way around this is to claim that there are continuous degrees of identification between a person at different moments in their life, e.g. me now and me in a week are only 99% the same individual. I'm not sure how we could do ethical calculus with this, though.
Thought experiments like these are why I regard personal identity, and any moral theories that depend on it, as non-starters (including versions of prioritarianism that consider lifetime wellbeing collectively). I think it's best to think either in terms of empty individualism or open individualism. Empty individualism tends to favor suffering-focused views, because any given moment of unbearable suffering can't be compensated by other moments of pleasure even within what we normally call the same individual, because the pleasure is actually experienced by a different individual. Open individualism tends to undercut suffering-focused intuitions by saying that torturing one person for the happiness of a billion others is no different from one person experiencing pain for later pleasure.
As others have pointed out before, it is legitimate to try to salvage some ethical concern for personal identity despite the paradoxes. By analogy, the idea of consciousness has many paradoxes, but I still try to salvage it for my ethical reasoning. Neither personal identity nor consciousness "actually exists" in any deep ontological sense, but we can still care about them. It's just that I happen not to care ethically about personal identity.
Yes, although it's possible that even a single individual having a 100% probability of pain might not outweigh the pleasure of the others, if the number of other individuals is large enough and the social welfare function is sufficiently continuous and "additive", e.g. it takes the form S(V) = Σ_{i : v_i ∈ V} f(v_i) for f : ℝ → ℝ strictly increasing everywhere.
I intended for your own subjective probability distribution to be used, but what you say here leads to some more weird examples (besides collapsing to regular prioritarianism (possibly while aggregating actual utilities over each individual first before aggregating across them)):
I've played a board game where the player who gets to go first is the one who has the pointiest ears. The value of this outcome would be different if you knew ahead of time who this would be compared to if you didn't. In particular, if there were a morally significant tradeoff between utilities, then this rule could be better or worse than a more (subjectively) random choice, depending on whether the worse-off players are expected to benefit more or less. Of course, a random selection could be better or worse than one whose actual outcome you know in advance for utilitarians too, but there are some differences.
For ex ante prioritarianism, this is also the case before and after you realize the outcome of the rolls of dice or coin flips; once you know the outcome of the random selection, it's no longer random, and the value of following through with it changes. In particular, if each person had the same wellbeing before the rolls of the dice and stood to gain or lose the same amount if they won (regardless of the selection process), then random selection would be optimal and better than any fixed selection whose outcome you know in advance; but once you know the outcome of the random selection process, before you apply it, it reduces to using any particular rule whose outcome you know in advance.
Yes, I think it's basically the same issue. If we can use something like spatiotemporal continuity (I am doubtful that this can be made precise and coherent enough in a way that's very plausible), then we could start before a person is even conceived. Right before conception, the sperm cells and ova could be used to determine the identities of the potential future people. Before the sperm cell used in conception even exists, you could imagine two sperm cells with different physical (spatiotemporal) origins in different outcomes that happen to carry the same genetic information, and you might consider the outcomes in which one is used to involve a different person than the outcomes in which the other is used. Of course, you might have to divide up these two groups of outcomes further still. For example, you wouldn't want to treat identical twins as a single individual, even if they originated from some common group of cells.
Would that penalize people who hold optimistic beliefs? Their expected utilities would often be pretty high, so it'd be less important to help them. As an extreme example, someone who expects to spend eternity in heaven would already be so well off that it would be pointless to help him/her, relative to helping an atheist who expects to die at age 75. That's true even if the believer in heaven gets a terminal disease at age 20 and dies with no afterlife.
Sorry, that was unclear: I meant the subjective probabilities of the person using the ethical system ("you") applied to everyone, not each individual using their own subjective probabilities.
Allowing each individual to use their own subjective probabilities would be interesting, and would have problems like the one you point out. It could respect individual autonomy further, especially for von Neumann-Morgenstern rational agents with vNM utility as our measure of wellbeing; we would rank choices for them (ignoring other individuals) exactly as they would rank these choices for themselves. However, I'm doubtful that this would make up for such issues. Furthermore, many individuals don't have subjective probabilities about most things that would be important for ethical deliberation in practical cases, including, I suspect, most people and all nonhuman animals.
Another problematic example would be healthcare professionals (policy makers, doctors, etc.) using the subjective probabilities of patients instead of subjective probabilities informed by actual research (or even their own experience as professionals).
I see. :) It seems like we'd still have the same problem as I mentioned. For example, I might think that currently elderly people signed up for cryonics have very high expected lifetime utility relative to those who aren't signed up, because of the possibility of being revived (assuming positive revival futures outweigh negative ones), so helping currently elderly people signed up for cryonics is relatively unimportant. But then suppose it turns out that cryopreserved people are never actually revived.
(This example is overly simplistic, but the point is that you can get scenarios similar to my original one while still having "reasonable" beliefs about the world.)
Tbh, I find this fairly intuitive (under the assumption that something like closed individualism is true and cryonics would preserve identity). You can think about it like decreasing marginal value of expected utility (compare to the decreasing marginal value of income/wealth), so people who have higher expected utility for their lives should be given (slightly) less weight.
If they do eventually get revived, and we had spent significant resources on them, this could mean we prioritized the wrong people. We could be wrong either way.
Good point, but I feel like ex post prioritarianism does the allocation better, by being risk-averse (even though this is what Ord criticizes about it in the 2015 paper you cited in the OP). Imagine that someone has a 1/3^^^3 probability of 3^^^^3 utility. Ex ante prioritarianism says the expected utility is so enormous that there's no need to benefit this person at all, even if doing so would be almost costless. Suppose that with probability 1 − 1/3^^^3, this person has a painful congenital disease, grows up in poverty, is captured and tortured for years on end, and dies at age 25. Ex ante prioritarianism (say with a sqrt or log function for f) says that if we could spend $0.01 to prevent all of that suffering, we needn't bother because other uses of the money would be more cost-effective, even though it's basically guaranteed that this person's life will be nothing but horrible. Ex post prioritarianism gets what I consider the right answer because the reduction of torment is not buried into nothingness by the f function, since the expected-value calculation weighs two different scenarios, to each of which f is applied separately.
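A scaled-down numerical analogue of this example (all numbers, and the concave weighting f(x) = −exp(−x/100), are my own illustrative choices) shows the gap: preventing the near-certain suffering is worth almost nothing ex ante but a great deal ex post.

```python
import math

f = lambda x: -math.exp(-x / 100)  # illustrative concave priority weighting

# With tiny probability p the person gets an astronomically good life;
# otherwise they live a life of utility -100.
p, jackpot, bad = 1e-9, 1e12, -100.0
helped_bad = 0.0  # a cheap intervention prevents the suffering in the bad branch

def exp_u(bad_branch):
    return p * jackpot + (1 - p) * bad_branch

# Ex ante: f of the (huge) expected utility barely moves when we help.
ex_ante_gain = f(exp_u(helped_bad)) - f(exp_u(bad))

# Ex post: f is applied within each outcome, so the near-certain bad branch dominates.
ex_post_gain = ((p * f(jackpot) + (1 - p) * f(helped_bad))
                - (p * f(jackpot) + (1 - p) * f(bad)))
```

Here the tiny chance of the jackpot pushes the person's expected utility to roughly 900, so the ex ante gain from helping is on the order of 10^-4, while the ex post gain is about e − 1 ≈ 1.7, reflecting the near-certain relief of suffering.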
I guess an ex ante supporter could say that if someone chooses the 1/3^^^3 gamble and it doesn't work out, that's the price you pay for taking the risk. But that stance feels pretty harsh.
I agree that this feels too harsh. My first reaction to the extreme numbers would be to claim that expected values are actually not the right way to deal with uncertainty (without offering a better alternative). I think you could use a probability of 0.1 for an amazing life (even infinitely good), and I would arrive at the same conclusion: giving them little weight is too harsh. Because this remains true in my view no matter how great the value of the amazing life, I do think this is still a problem for expected values, or at least expected values applied directly to affective wellbeing.
I also do lean towards a preference-based account of wellbeing, which allows individuals to be risk-averse. Some people are just not that risk-averse, and (if something like closed individualism were true and their preferences never changed), giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse. However, I also suspect most people wouldn't value anything at values ≥ 3^^^^3 (or ≤ −3^^^^3, for that matter) if they were vNM-rational, and most of them are probably risk-averse to some degree.
Maybe ex ante prioritarianism makes more sense with a preference-based account of wellbeing?
Also, FWIW, it's possible to blend ex ante and ex post views. An individual's actual utility (treated as a random variable) and their expected utility could be combined in some way (weighted average, minimum of the two, etc.) before aggregating and taking the expected value. This seems very ad hoc, though.
Interesting. :)
I was thinking that it's not just a matter of risk aversion, because regular utilitarianism would also favor helping the person with a terrible life if doing so were cheap enough. The perverse behavior of the ex ante view in my example comes from the concave f function.