1. Each individual in the group is rational (for a commonly used but technical definition of “rational”, hereafter referred to as “vNM-rational”)[1][2]
3. If every individual in the group is indifferent between two options, then the group as a whole is indifferent between those two options
One way of motivating 3 is by claiming (in the idealistic case where everyone’s subjective probabilities match, including the probabilities that go with the ethical ranking):
a. Individual vNM utilities track welfare, i.e. what’s better for individuals, and denying this is paternalistic. We should trust people’s preferences when they’re rational, since they know what’s best for themselves.
b. When everyone’s preferences align, we should trust their preferences, and again, not doing so is paternalistic, since it would (in principle) lead to choices that are dispreferred by everyone, and so worse for everyone, according to a.*
As cole_haus mentioned, a could actually be false, and since a motivates b, we’d have no reason to believe b either if a were false. However, if we instead use some other real-valued conception of welfare and claim that what’s good for an individual is maximizing its expectation, then we can make an argument similar to b (replacing “dispreferred by everyone” with “worse in expectation for each individual”) to defend the following condition, which recovers the theorem:
3′. If each individual’s expected welfare is the same under two options, then we should be ethically indifferent between those options.
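As a toy illustration of 3′ (all numbers hypothetical): two options can induce quite different welfare lotteries for a two-person group while still giving each individual the same expected welfare, in which case 3′ demands ethical indifference between them.

```python
# Each option is a lottery: a list of (probability, per-individual welfare) pairs.
# Hypothetical numbers chosen so expected welfare matches across options.
option_X = [
    (0.5, [2.0, 0.0]),  # coin flip: individual 1 gets 2, individual 2 gets 0
    (0.5, [0.0, 2.0]),  # ... or vice versa
]
option_Y = [
    (1.0, [1.0, 1.0]),  # certainty: everyone gets welfare 1
]

def expected_welfare(option, n=2):
    """Each individual's expected welfare under a lottery over outcomes."""
    return [sum(p * w[i] for p, w in option) for i in range(n)]

# Both individuals have expected welfare 1.0 under either option,
# so condition 3' says the ethical ranking must be indifferent between X and Y.
assert expected_welfare(option_X) == expected_welfare(option_Y)
```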
*As alluded to here, if your ethical ranking of choices broke one of these ties so that A ≻ B, it would do so by a real number-valued difference, and by the continuity axiom you could probabilistically mix the choice A you broke the tie in favour of with any choice C that’s worse for everyone than the other choice B, and this mixture could be made better than B according to your ethical ranking, i.e. pA + (1−p)C ≻ B for any p ∈ (0,1) close enough to 1, while every individual has the opposite preference between these two choices.
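The footnote’s continuity argument can be checked with concrete (hypothetical) numbers: everyone is indifferent between A and B, C is worse than B for everyone, and an ethical ranking V breaks the tie in favour of A; then for p close enough to 1 the mixture pA + (1−p)C beats B ethically while every individual prefers B to the mixture.

```python
# Two individuals' vNM utilities over three choices (hypothetical numbers).
u = {
    "A": [1.0, 2.0],  # each individual is indifferent between A and B
    "B": [1.0, 2.0],
    "C": [0.0, 0.0],  # C is worse than B for everyone
}
# An ethical ranking that breaks the A~B tie by a real-valued margin: V(A) > V(B).
V = {"A": 3.1, "B": 3.0, "C": 0.0}

p = 0.99  # "close enough to 1" for this margin
mix_V = p * V["A"] + (1 - p) * V["C"]                      # ethical value of pA + (1-p)C
mix_u = [p * u["A"][i] + (1 - p) * u["C"][i] for i in range(2)]

# The ethical ranking prefers the mixture to B ...
assert mix_V > V["B"]
# ... while every individual prefers B to the mixture.
assert all(m < b for m, b in zip(mix_u, u["B"]))
```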