Harsanyi’s simple “proof” of utilitarianism
In 1955, John Harsanyi published a paper demonstrating that anyone who follows certain reasonable assumptions must be a total utilitarian. The paper is somewhat technical, but the result is relatively easy to understand. I’ve been unable to find a nontechnical summary of this result and so, because it is one of the more compelling arguments for utilitarianism, I decided to write one up.
Background
Suppose a group of friends are deciding where to eat. Each individual person has some preference (say, one person most prefers Chinese, then Italian, then Japanese; another prefers Italian, then Chinese, then Japanese) but there is no clear restaurant which everyone thinks is best. How should they choose a place?
One solution is to have each person attach a numeric score to how much they would enjoy a given restaurant. If you really like Chinese food, then maybe you give it 10 points; if you’re lukewarm then you give it 2, and if you really hate Chinese then maybe it’s −5.
Once each person has voted, you simply add up all the scores, and then the group goes to whichever restaurant had the highest total score.
This method is (a simplified form of) “total” utilitarianism, and Harsanyi demonstrated that it is the only “reasonable” way that groups can make a decision.
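The scoring-and-summing procedure can be sketched in a few lines of code (the diners’ names and scores below are invented for illustration):

```python
# Each person scores each restaurant; the group picks the highest total.
# Names and scores are hypothetical.
scores = {
    "Alice": {"Chinese": 10, "Italian": 4, "Japanese": -5},
    "Bob":   {"Chinese": 2,  "Italian": 7, "Japanese": 6},
}

restaurants = ["Chinese", "Italian", "Japanese"]
totals = {r: sum(person[r] for person in scores.values()) for r in restaurants}
best = max(totals, key=totals.get)  # restaurant with the highest total score
```

Here Chinese wins with a total of 12, even though Bob only mildly prefers it, because Alice’s strong preference outweighs his.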
Theorem
Concretely, assume:

Each individual in the group is rational (for a commonly used but technical definition of “rational”, hereafter referred to as “VNM-rational”)^{[1]}^{[2]}

The group as a whole is VNM-rational^{[3]}^{[4]}

If every individual in the group is indifferent between two options, then the group as a whole is indifferent between those two options
The theorem proves that total utilitarianism is the only method which satisfies these three assumptions.
Note that this theorem just demonstrates that, if there is some way of saying that certain things are better or worse for individuals, then the way to determine whether those things are better or worse for groups is to add up how good it is for the individuals in those groups. It doesn’t say anything about the way in which things can be better or worse for individuals. I.e. you could be adding up each individual’s happiness (hedonistic utilitarianism), something related to their preferences (preference utilitarianism), or something more exotic.
Example
The above is somewhat abstract, so here is a concrete example demonstrating why anything other than total utilitarianism fails these axioms. (This is my best attempt at creating a simple example; perhaps others in the comments can create even simpler ones.)
Consider a population consisting of 2 people. Because they are VNM-rational, they have utility functions, and therefore we can represent states of the world as a vector of numbers. E.g. the vector (5, 7) is a world in which the first person has utility 5 and the second has utility 7.
Let’s prove that the world (1, 1) must be as good as the world (2, 0).
Consider a lottery in which there is a one-half chance we end up with the world (2, 0) and a one-half chance that we end up with the world (0, 2). Because we are indifferent between who has the 2 and who has the 0,^{[5]} and the group is an expected utility maximizer, these are equally valuable:^{[6]}

½ · (2, 0) + ½ · (0, 2) ∼ (2, 0)
We can write this from the perspective of each individual in society: the lottery gives the first person a ½ chance of 2 and a ½ chance of 0, and the second person a ½ chance of 0 and a ½ chance of 2.
Because VNM-rational agents are expected utility maximizers we can just multiply the probabilities through:^{[7]}

½ · (2, 0) + ½ · (0, 2) ∼ (½ · 2 + ½ · 0, ½ · 0 + ½ · 2) = (1, 1)

Combining this with the previous equivalence gives (1, 1) ∼ (2, 0).
QED.
The key insight here is that each individual is indifferent between the “50% chance of 2, 50% chance of 0” lottery and a guaranteed 1 (on account of being VNM-rational). Because each individual is indifferent, the group is also forced to be indifferent (on account of the third assumption).
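The indifference argument above can be checked numerically. This is a minimal sketch using the worlds and probabilities from the example:

```python
# Lottery: 1/2 chance of world (2, 0), 1/2 chance of world (0, 2).
lottery = [(0.5, (2, 0)), (0.5, (0, 2))]

# Each VNM-rational individual values the lottery at its expected utility.
per_person = tuple(sum(p * world[i] for p, world in lottery) for i in range(2))
# per_person == (1.0, 1.0): every individual is indifferent between the
# lottery and the sure world (1, 1), so by the third assumption the group
# must be indifferent too.

# The group's total utility agrees: the lottery's expected total equals
# the total of (1, 1) and of (2, 0).
expected_total = sum(p * sum(world) for p, world in lottery)
```

Any group utility function that, say, penalized inequality would value (1, 1) above the lottery and so would have to break one of the assumptions.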
Conclusion
Total utilitarianism is a fairly controversial position. The above example, where (2, 0) is exactly as good as (1, 1), can be extended to show that utilitarianism is extremely demanding, potentially requiring extreme sacrifices and inequality.
It is therefore interesting that it is the only decision procedure which does not violate one of these seemingly reasonable assumptions.
While not conclusive, this theorem provides a compelling argument for total utilitarianism.
Appendix on Equality
Harsanyi’s original theorem allowed for weighted total utilitarianism. (I.e. everyone gets a vote, but some people’s votes count more than others.)
It’s easy enough to add an assumption like “also everyone is equal” to force true total utilitarianism, but interestingly Harsanyi didn’t think that was necessary:
This implies, however, without any additional ethical postulates that an individual’s impersonal preferences, if they are rational, must satisfy Marschak’s axioms [equivalent to VNMrationality] and consequently must define a cardinal social welfare function equal to the arithmetical mean of the utilities of all individuals in the society (the arithmetical mean of all individual utilities gives the actuarial value of his uncertain prospects, defined by an equal probability of being put in the place of any individual in the situation chosen). [Emphasis added]
In other words, he thinks it would be irrational to weight people unevenly, because equal weighting is the expected-utility-maximizing choice if you don’t know which person in society you will become.
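Harsanyi’s equal-probability argument is simple enough to sketch: with an equal chance of becoming each person, your expected utility in a world is just the arithmetic mean of the individual utilities (the utility numbers below are invented):

```python
# Utilities of each individual in some hypothetical world.
world = [5, 7, 0]
n = len(world)

# Expected utility behind the veil of ignorance: an equal chance 1/n of
# being put in the place of each person.
expected = sum((1 / n) * u for u in world)

# This is exactly the arithmetic mean, so for a fixed population size
# maximizing it ranks worlds the same way as the equal-weight total.
mean = sum(world) / n
```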
This idea of making decisions behind a veil of ignorance where you don’t know which person in society you will become was later popularized by John Rawls, who used it to argue for his maximin decision rule.
It is, in my humble opinion, unfortunate that the veil of ignorance has become associated with Rawls, when Harsanyi’s utilitarian formulation has a much more rigorous mathematical grounding. (And was also published earlier.)
Credits
I would like to thank Aaron Gertler, Sam Deere, Caitlin Elizondo and the CEA UK office staff for comments on drafts of this post and discussions about related ideas.
Harsanyi used Marschak’s axioms, which are mathematically equivalent to the VNM ones, but less popular. I’m using VNM here just because they seem better known. ↩︎
“Rational” is a somewhat unfortunate term, but I’m sticking with it because it’s standard. These axioms are intended to prevent things like “Ben likes apples more than bananas but also likes bananas more than apples.” It’s not intended to prevent “irrational” value judgments like enjoying Nickelback’s music. A better term might be something like “consistent”. ↩︎
It’s a well-known consequence of this assumption that the group must be “utilitarian” in the sense that it has a utility function. The surprising part of Harsanyi’s theorem is not that there is a utility function but rather that the utility function must be a linear addition of its constituents’ utility functions (as opposed to, say, their average or the sum of their logarithms or something completely disconnected from its constituents’ utility). ↩︎
An example of what it means for a group decision to be VNMrational: if the group somehow aggregates its preferences (through voting or reading entrails or whatever) and decides that Chinese is preferable to Italian, and also that Italian is preferable to Japanese, then the group must also conclude that Chinese is preferable to Japanese. We don’t care how it’s aggregating its preferences, but it must do so in a “rational” way. ↩︎
Note that this isn’t clearly implied by the assumptions – see the appendix on equality. Harsanyi’s original proof does not require any assumptions about equality, but this sort of assumption makes the proof much simpler and seems unlikely to be a point of controversy, so I’m including it. ↩︎
More precisely: U(2, 0) = U(0, 2) for some group utility function U (by the indifference over who has the 2). Because of the VNM axioms, U(½ · (2, 0) + ½ · (0, 2)) = ½ · U(2, 0) + ½ · U(0, 2). (Normalizing U(2, 0) = 1.) Therefore, U(½ · (2, 0) + ½ · (0, 2)) = ½ + ½ = 1 = U(2, 0). I’m still skipping some steps; people interested in a more rigorous proof should see his original paper. ↩︎
More precisely: each individual is indifferent between a lottery where they are guaranteed 1 utility versus having a 50% chance of 2, 50% chance of 0. Since each individual is indifferent between these, the group is also indifferent. ↩︎
Thanks for writing this up!
For those interested in more info:
Harsanyi had two different theorems like this (his aggregation theorem and his impartial observer theorem) which rely on slightly different assumptions.
The main arguments against Harsanyi’s theorems were made by prominent economist Amartya Sen in what has become known as the “Harsanyi–Sen debate” or “Harsanyi–Sen–Weymark debate” (searchable terms). The gist of the counterargument is that “while Harsanyi has perhaps shown that overall good is a linear sum of individuals’ von Neumann–Morgenstern utilities, he has done nothing to establish any connection between the notion of von Neumann–Morgenstern utility and that of well-being, and hence that utilitarianism does not follow.”
I think this last point essentially denies the third axiom above, which is what connects individual vNM utility and social/ethical preferences. (The original statement of the second axiom is just vNM rationality for social/ethical preferences, and has no relationship with the individuals’ preferences.)
Thanks for writing this!
I don’t think the theorem provides support for total utilitarianism, specifically, unless you add extra assumptions about how to deal with populations of different sizes or different populations generally. Average utilitarianism is still consistent with it, for example. Furthermore, if you don’t count the interests of people who exist until after they exist or unless they come to exist, it probably won’t look like total utilitarianism, although it gets more complicated.
You might be interested in Teruji Thomas’ paper “The Asymmetry, Uncertainty, and the Long Term” (EA Forum post here), which proves a similar result from slightly different premises, but is compatible with all of 1) ex post prioritarianism, 2) mere addition, 3) the procreation asymmetry, 4) avoiding the repugnant conclusion and 5) avoiding antinatalism, all five at the same time, because it sacrifices the independence of irrelevant alternatives (the claim that how you rank choices should not depend on what choices are available to you, not the vNM axiom). Thomas proposes beatpath voting to choose actions. Christopher Meacham’s “Person-affecting views and saturating counterpart relations” also provides an additive calculus which “solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox” and satisfies the asymmetry, also by giving up the independence of irrelevant alternatives, but hasn’t, as far as I know, been extended to deal with uncertainty.
I’ve also written about ex ante prioritarianism in the comments on the EA Forum post about Thomas’ paper, and in my own post here (with useful feedback in the comments).
Thanks!
Well, average utilitarianism is consistent with the result because it gives the same answer as total utilitarianism (for a fixed population size). The vast majority of utility functions one can imagine (including ones also based on the original position like maximin) are ruled out by the result. I agree that the technical result is “anything isomorphic to total utilitarianism” though.
I had not seen that, thanks!
I am not an expert in this topic but I believe this recent paper is relevant and may derive a result that is more general than Harsanyi-style utilitarianism: https://www.sciencedirect.com/science/article/pii/S0304406820300045
Perhaps I’m missing something, but where does this claim come from? It doesn’t seem to follow from the three starting assumptions.
Yeah, it doesn’t (obviously) follow. See the appendix on equality. It made the proof simpler and I thought most readers would not find it objectionable, but if you have a suggestion for an alternate simple proof I would love to hear it!
Thanks for writing this up! I agree that this result is interesting, but I find it unpersuasive as a normative argument. Why should morality be based on group decisionmaking principles? Why should I care about VNM rationality of the group?
Also, you suggest that this result lends support to common EA beliefs. I’m not so sure about that. First, it leads to preference utilitarianism, not hedonic utilitarianism. Second, EAs tend to value animals and future people, but they would arguably not count as part of the “group” in this framework(?). Third, I’m not sure what this tells you about the creation or noncreation of possible beings (cf. the asymmetry in population ethics).
Finally, it’s worth pointing out that you could also start with different assumptions and get very different results. For instance, rather than demanding that the group is VNM rational, one could consider rational individuals in a group who bargain over what to do, and then look at bargaining solutions. And it turns out that the utilitarian approach of adding up utilities is *not* a bargaining solution, because it violates Pareto optimality in some cases. Does that “disprove” total utilitarianism?
(Using e.g. the Nash bargaining solution with many participants probably leads to some form of prioritarianism or egalitarianism, because you’d have to ensure that everyone benefits.)
+1
I have a strongly negative bias against any attempt to ground normative theories in abstract mathematical theories, such as game theory and decision theory. The way I see it, the two central claims of utilitarianism are the axiological claim (well-being is what matters) and the maximizing claim (we should maximize what matters, i.e. well-being). This argument provides no reason to ground our axiology in well-being, and also provides no reason that we should be maximizers.
In general, there is a significant difference between normative claims, like total utilitarianism, and factual claims, like “As a group, VNM rational agents will do X.”
Thanks for the comment!
Hmm, I wasn’t trying to suggest that, but I might have accidentally implied something. I would be curious what you are pointing to?
I used preferences about restaurants as an example because that seemed like something people can relate to easily, but that’s just an example. The theorem is compatible with hedonic utilitarianism. (In that case, the theorem would just prove that the group’s utility function is the sum of each individual’s happiness.)
I don’t think that this theorem says much about who you aggregate. It’s just simply stating that if you aggregate some group of persons in a certain way, then that aggregation must take the form of addition.
I agree it doesn’t say much, see e.g. Michael’s comment.
In that case, it would IMO be better to change “total utilitarianism” to “utilitarianism” in the article. Utilitarianism is different from other forms of consequentialism in that it uses thoroughgoing aggregation. Isn’t that what Harsanyi’s theorem mainly shows? It doesn’t really add any intuitions about population ethics. Mentioning the repugnant conclusion in this context feels premature.
Hmm, it does show that it’s a linear addition of utilities (as opposed to, say, the sum of their logarithms). So I think it’s stronger than saying just “thoroughgoing aggregation”.
I’m not very familiar with the terminology here, but I remember that in this paper, Alastair Norcross used the term “thoroughgoing aggregation” for what seems to be linear addition of utilities in particular. That’s what I had in mind anyway, so I’m not sure I believe anything different from you. The reason I commented above was because I don’t understand the choice of “total utilitarianism” instead of just “utilitarianism.” Doesn’t every form of utilitarianism use linear addition of utilities in a case where population size remains fixed? But only total utilitarianism implies the repugnant conclusion. Your conclusion section IMO suggests that Harsanyi’s theorem (which takes a case where population size is indeed fixed) does something to help motivate total utilitarianism over other forms of utilitarianism, such as prior-existence utilitarianism, negative utilitarianism or average utilitarianism. You already acknowledged in your reply further above that it doesn’t do much of that. That’s why I suggested rephrasing your conclusion section. Alternatively, you could also explain in what ways you might think the utilitarian alternatives to total utilitarianism are contrived somehow or not in line with Harsanyi’s assumptions. And probably I’m missing something about how you think about all of this, because the rest of the article seemed really excellent and clear to me. I just find the conclusion section really jarring.
Ah, my mistake – I had heard this definition before, which seems slightly different.
Thanks for the suggestion – always tricky to figure out what a “straightforward” consequence is in philosophy.
I changed it to this – curious if you still find it jarring?
Probably I was wrong here. After reading this abstract, I realize that the way Norcross wrote about it is compatible with a weaker claim than linear aggregation of utility too. I think I just assumed that he must mean linear aggregation of utility, because everything else would seem weirdly arbitrary. :)
Less so! The “total” still indicates the same conclusion I thought would be jumping the gun a bit, but if that’s your takeaway it’s certainly fine to leave it. Personally I would just write “utilitarianism” instead of “total utilitarianism.”
In this case, I think it’s harder to argue that we should care about ex ante expected individual hedonistic utility and for the 1st and 3rd axioms, because we had rationality based on preferences and something like Pareto to support these axioms before, but we could now just be concerned with the distribution of hedonistic utility in the universe, which leaves room for prioritarianism and egalitarianism. I think the only “non-paternalistic” and possibly objective way to aggregate hedonistic utility within an individual (over their life and/or over uncertainty) would be to start from individual preferences/attitudes/desires but just ignore concerns not about hedonism and non-hedonistic preferences, i.e. an externalist account of hedonism. Roger Crisp defends internalism in “Hedonism Reconsidered”, and defines the two terms this way:
Otherwise, I don’t think there’s any reason to believe there’s an objective common cardinal scale for suffering and pleasure, even if there were a scale for suffering and a separate scale for pleasure. Suffering and pleasure don’t use exactly the same parts of the brain, and suffering isn’t just an “opposite” pattern to pleasure. Relying on mixed states, observing judgements when both suffering and pleasure are happening at the same time might seem promising, but these judgements happen at a higher level and probably wouldn’t be consistent between people, e.g. you could have two people with exactly the same suffering and pleasure subsystems, but with different aggregating systems.
I’m personally more sympathetic to externalism. With antifrustrationism (there are actually arguments for antifrustrationism; see also my comment here), externalism leads to a negative hedonistic view (which I discuss further here).
It doesn’t have to be the group, it can be an impartial observer with their own social welfare function, as long as it is increasing with individual expected utility, i.e. satisfies ex ante Pareto. Actually, that’s how it was originally stated.
EDIT: woops, condition 2 is weaker than ex ante Pareto; it’s just vNM rationality with respect to outcomes for social/ethical preferences/views. It’s condition 3 that connects individual vNM utility and social/ethical vNM utility.
I’m not sure this is right. As soon as you maximize a weighted sum with nonnegative coefficients your solution will be weakly Pareto optimal. As soon as all coefficients are strictly positive, it will be strongly Pareto optimal. The axioms mentioned above don’t imply nonnegative coefficients, so theoretically they are also satisfied by “anti-utilitarianism” which counts everyone’s utility negatively. But one can add stronger Pareto axioms to force all coefficients to be strictly positive.
The problem with the utilitarian bargaining solution is that it is not independent of affine transformations of utility functions. Just summing up utility functions is underspecified; one also needs to choose a scaling for the utility functions. A second criterion that might not be satisfied by the utilitarian solution (depending on the scaling chosen) is individual rationality, which means that everyone will be better off given the bargaining solution than some disagreement outcome.
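The point about positive weights can be illustrated concretely (the outcomes and weights below are invented): maximizing a weighted sum with strictly positive coefficients always selects a Pareto-optimal outcome.

```python
# Hypothetical utility profiles for two people; (2, 1) is Pareto-dominated
# by (3, 1), which is at least as good for both and strictly better for one.
outcomes = [(3, 1), (2, 2), (1, 3), (2, 1)]

def weighted_best(weights):
    """Outcome maximizing the weighted sum of individual utilities."""
    return max(outcomes, key=lambda o: sum(w * u for w, u in zip(weights, o)))

def is_pareto_optimal(o):
    """True if no distinct outcome is at least as good for everyone."""
    return not any(
        other != o and all(x >= y for x, y in zip(other, o))
        for other in outcomes
    )
```

With any strictly positive weights, e.g. (1, 1) or (0.9, 0.1), the maximizer comes out Pareto-optimal; the dominated profile (2, 1) is never chosen.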
You’re right; I meant to refer to the violation of individual rationality. Thanks!
I’ve retracted my previous reply. The original 2nd condition is different from ex ante Pareto; it’s just vNM rationality with respect to outcomes for social/ethical preferences/views and it says nothing about the relationship between individual preferences and social/ethical ones. It’s condition 3 that connects individual vNM utility and social/ethical vNM utility.
I think this math is interesting, and I appreciate the good pedagogy here. But I don’t think this type of reasoning is relevant to my effective altruism (defined as “figuring out how to do the most good”). In particular, I disagree that this is an “argument for utilitarianism” in the sense that it has the potential to convince me to donate to cause A instead of donating to cause B.
(I really do mean “me” and “my” in that sentence; other people may find that this argument can indeed convince them of this, and that’s a fact about them I have no quarrel with. I’m posting this because I just want to put a signpost saying “some people in EA believe this,” in case others feel the same way.)
Following Richard Ngo’s post https://forum.effectivealtruism.org/posts/TqCDCkp2ZosCiS3FB/arguments-for-moral-indefinability, I don’t think that human moral preferences can be made free of contradiction. Although I don’t like contradictions and I don’t want to have them, I also don’t like things like the repugnant conclusion, and I’m not sure why the distaste towards contradictions should be the one that always triumphs.
Since VNMrationality is based on transitive preferences, and I disagree that human preferences can or “should” be transitive, I interpret things like this as without normative weight.
I think this is an important point. People might want to start with additional or just different axioms, including, as you say, avoiding the repugnant conclusion, and if they can’t all together be consistent, then this theorem may unjustifiably privilege a specific subset of those axioms.
I do think this is an argument for utilitarianism, but more like in the sense of “This is a reason to be a utilitarian, but other reasons might outweigh it.” I think it does have some normative weight in this way.
Also, independence of irrelevant alternatives is safer to give up than transitivity, and might accomplish most of what you want. See my other comment.
Thanks for the pointer to “independence of irrelevant alternatives.”
I’m curious to know how you think about “some normative weight.” I think of these arguments as being about mathematical systems that do not describe humans, hence no normative weight. Do you think of them as being about mathematical systems that *somewhat* describe humans, hence *some* normative weight?
I think if you believe the conditions of the theorem are all plausible or desirable and so give them some weight, then you should give the conclusion some weight, too.
For example, it’s unlikely to be the case that anyone’s ethical rankings actually satisfy the vNM rationality conditions in practice, but if you give any weight to the claims that we should have ethical rankings that are complete, continuous with respect to probabilities (which are assumed to work in the standard way), satisfy the independence of irrelevant alternatives and avoid all theoretical (weak) Dutch books, and also give weight to the combination of these conditions at once*, then the Dutch book results give you reason to believe you should satisfy the vNM rationality axioms, since if you don’t, you can get (weakly) Dutch booked in theory. I think you should be at least as sympathetic to the conclusion of a theorem as you are to the combination of all of its assumptions, if you accept the kind of deductive logic used in the proofs.
*I might be missing more important conditions.
This makes sense, but the type of things that tend to convince me to believe in an ethical theory generally depend a lot on how much I resonate with the main claims of the theory. When I look at the premises in this theorem, none of them seem to be type of things that I care about.
On the other hand, pointing out that utilitarians care about people and animals, and they want them to be as happy as possible (and free, or with agency, desire satisfaction) that makes me happy to endorse the theory. When I think about all people and animals being happy and free from pain in a utilitarian world, I get a positive feeling. When I think about “Total utilitarians are the only ones that satisfy these three assumptions” I don’t get the same positive feeling.
When it comes to ethics, it’s the emotional arguments that really win me over.
If you want to deal with moral uncertainty with credences, you could assign each of the 3 major assumptions an independent credence of 50%, so this argument would tell you that you should be utilitarian with credence at least (1/2)³ = 1/8 = 12.5%. (Assigning independent credences might not actually make sense, in case you have to deal with contradictions with other assumptions.)
Makes sense. For what it’s worth, this seems basically compatible with any theory which satisfies the Pareto principle, and I’d imagine you’d also want it to be impartial (symmetry). If you also assume real-valued utilities, transitivity, independence of irrelevant alternatives, continuity and independence of unconcerned agents, you get something like utilitarianism again. In my view, independence of unconcerned agents is doing most of the work here, though.
Like other links between VNM and utilitarianism, this seems to sweep intersubjective utility comparison under the rug. The agents are likely using very different methods to convert their preferences to the given numbers, rendering the aggregate of them non-rigorous and subject to instability in iterated games.
I can’t tell whether you are denying assumption 1 or 2.
I don’t think Romeo even has to deny any of the assumptions. Harsanyi’s result, derived from the three assumptions, is not enough to determine how to do intersubjective utility comparisons. It merely states that social welfare will be some linear combination of individual utilities. While this already greatly restricts the way in which utilities are aggregated, it does not specify which weights to use for this sum.
Moreover, arguing that weights should be equal based on the veil of ignorance, as I believe Harsanyi does, is not sufficient, since utility functions are only determined up to affine transformations, which includes rescalings. (This point has been made in the literature as a criticism of preference utilitarianism, I believe.) So there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.
Of course, if you are not using a kind of preference utilitarianism but instead just aggregate some quantities you believe to have an absolute scale—such as happiness and suffering—then you could argue that utility functions should just correspond to this one absolute scale, with the same scaling for everyone. Though I think this is also not a trivial argument—there are potentially different ways to get from this absolute scale or Axiology to behavior towards risky gambles, which in turn determine the utility functions.
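The scaling problem described above can be made concrete with a toy example (all utility numbers invented): rescaling one person’s utility function by a positive factor is an affine transformation, so it represents exactly the same individual preferences, yet it can flip which outcome the “equal-weight” sum recommends.

```python
# Two outcomes, two people; raw[o][i] is person i's utility in outcome o.
raw = {"A": [1.0, 3.0], "B": [4.0, 1.0]}

# Multiply person 2's utilities by 10: same preferences, different scale.
rescaled = {o: [us[0], 10 * us[1]] for o, us in raw.items()}

def utilitarian_best(profile):
    """Outcome with the highest equal-weight sum of utilities."""
    return max(profile, key=lambda o: sum(profile[o]))
```

Under the raw numbers the sum favors B (5 vs. 4); after the rescaling it favors A (31 vs. 14). This is why a normalization convention has to be chosen before “equal weights” means anything.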
> there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.
yup, thanks. Also across time as well as across agents at a particular moment.
As a fan of Nickelback, I really appreciate fn2.
I want to point out that both assumptions 2, and 1 and 3 together have been objected to by academic philosophers.
Assumption 2 is ex post consequentialism: maximize the expected value of a social welfare function. Ex ante prioritarianism/egalitarianism means rejecting 2: we should be fair to individuals with respect to their expected utilities, even if this means overall worse expected outcomes. This is, of course, vNM irrational, but Diamond defended it (and see my other comment here). Essentially, even if two outcomes are equally valuable, a probabilistic mixture of them can be more valuable because it gives people fairer chances; this is equality of opportunity. This contradicts the independence axiom specifically for vNM rationality (and so does the Allais paradox).
Assumptions 1 and 3 together are basically a weaker version of ex ante Pareto, according to which it’s (also) better to increase the expected utility of any individual(s) if it comes at no expected cost to any other individuals. Ex post prioritarianism/egalitarianism means rejecting the conjunction of 1 and 3, and ex ante Pareto: we should be more fair to individuals ex post (we want more fair actual outcomes after they’re determined), even if this means worse individual expected outcomes.
There was a whole issue of Utilitas devoted to prioritarianism and egalitarianism in 2012, and, notably, Parfit defended prioritarianism in it, arguing against ex ante Pareto (and hence the conjunction of 1 and 3):
He claimed that we can reject ex ante Pareto (“Probabilistic Principle of Personal Good”), in favour of ex post prioritarianism/egalitarianism:
Here, by “worse off” in the second sentence, he meant in a prioritarian/egalitarian way. The act is actually better for them, because the worse off people under this act are better off than the worse off people under the other act. He continued:
Thanks for this.
Even if this argument is successful, there are debates over decision theory (evidential, causal, functional). Does an ideally rational agent intervene at the level of states, actions, or decision procedures?
If it’s decision procedures, or something similar, functional decision theory can get you views that look quite close to Kantianism.
Just as a side note, Harsanyi’s result is not directly applicable to a formal setup involving subjective uncertainty, such as Savage’s or the Jeffrey–Bolker framework underlying evidential and causal decision theory. Though there are results for the Savage setup too, e.g., https://www.jstor.org/stable/10.1086/421173, and Caspar Oesterheld and I are working on a similar result for the Jeffrey–Bolker framework. In this setup, to get useful results, the indifference axiom can only be applied to a restricted class of propositions where everyone agrees on beliefs.
Some discussion here, too.
One way of motivating 3 is by claiming (in the idealistic case where everyone’s subjective probabilities match, including the probabilities that go with the ethical ranking):
a. Individual vNM utilities track welfare and what’s better for individuals, and not having it do so is paternalistic. We should trust people’s preferences when they’re rational since they know what’s best for themselves.
b. When everyone’s preferences align, we should trust their preferences, and again, not doing so is paternalistic, since it would (in principle) lead to choices that are dispreferred by everyone, and so worse for everyone, according to a.*
As cole_haus mentioned, a could actually be false, and a motivates b, so we’d have no reason to believe b either if a were false. However, if we use some other realvalued conception of welfare and claim what’s good for individuals is maximizing its expectation, then we could make an argument similar to b (replacing “dispreferred by everyone” with “worse in expectation for each individual”) to defend the following condition, which recovers the theorem:
*As alluded to here, if your ethical ranking of choices broke one of these ties so A ≻ B, it would do so with a real-number-valued difference, and by the continuity axiom, you could probabilistically mix the choice A you broke the tie in favour of with any choice C that’s worse for everyone than the other choice B, and this could be made better than B according to your ethical ranking, i.e. pA + (1 − p)C ≻ B for any p ∈ (0, 1) close enough to 1, while everyone has the opposite preference over these two choices.
I would actually say that ½(2, 0) + ½(0, 2) being equivalent to (2, 0) and (0, 2) is in contradiction with equality of opportunity. In the first case, both individuals have an equal chance of being well-off (getting 2), but in the second and third, only one has any chance of being well-off, so the opportunities to be well-off are only equal in the first case (essentially the same objection to essentially the same case is made in “Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparison of Utility: Comment”, in which Peter Diamond writes “it seems reasonable for the individual to be concerned solely with final states while society is also interested in the process of choice”). This is what ex ante prioritarianism/egalitarianism is for, but it can lead to counterintuitive results. See the comments on that post, and “Decide As You Would With Full Information! An Argument Against Ex Ante Pareto” by Marc Fleurbaey & Alex Voorhoeve.
For literature on equality of outcomes and uncertainty, the terms to look for are “ex post egalitarianism” and “ex post prioritarianism” (or with the hyphen as “ex-post”, but I think Google isn’t sensitive to this).
Yeah, my point was that exante utility was valued equally, but I think that was confusing. I’m just going to remove that section. Thanks!