1. Each individual in the group is rational (for a commonly used but technical definition of "rational", hereafter referred to as "VNM-rational")[1][2]
2. The group as a whole is rational (VNM-rational)
3. If every individual in the group is indifferent between two options, then the group as a whole is indifferent between those two options
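For reference, the conclusion these conditions jointly deliver (Harsanyi's aggregation theorem) is that the group's vNM utility function is an affine combination of the individual vNM utility functions. A sketch of that conclusion (my own statement of the standard result, not quoted from this discussion):

```latex
% With u_1, ..., u_n the individuals' vNM utility functions, conditions
% 1-3 imply the group's vNM utility function U can be written as
\[
  U(X) \;=\; \sum_{i=1}^{n} w_i \, u_i(X) + c
\]
% for some constants w_i and c; Pareto conditions stronger than 3 are
% needed to guarantee that the weights w_i are positive.
```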
One way of motivating 3 is by claiming (in the idealistic case where everyone's subjective probabilities match, including the probabilities that go with the ethical ranking):
a. Individual vNM utilities track welfare and what's better for individuals, and assuming otherwise is paternalistic. We should trust people's preferences when they're rational, since they know what's best for themselves.
b. When everyone's preferences align, we should trust their preferences, and again, not doing so is paternalistic, since it would (in principle) lead to choices that are dispreferred by everyone, and so worse for everyone, according to a.*
As cole_haus mentioned, a could actually be false, and since a motivates b, we'd have no reason to believe b either if a were false. However, if we use some other real-valued conception of welfare and claim that what's good for individuals is maximizing its expectation, then we could make an argument similar to b (replacing "dispreferred by everyone" with "worse in expectation for each individual") to defend the following condition, which recovers the theorem:
3′. If, for each individual, expected welfare is the same under two options, then we should be ethically indifferent between those options.
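As a minimal sketch of what 3′ asks for, here is a small check in Python with made-up welfare numbers; the function names and the welfare representation are my own illustrative assumptions:

```python
# Illustrative sketch of condition 3': two different lotteries over outcomes
# that give every individual the same expected welfare, so 3' says the
# ethical ranking should be indifferent between them.

def expected_welfare(lottery, welfare_i):
    """Expected welfare of one individual under a lottery {outcome: prob}."""
    return sum(prob * welfare_i[outcome] for outcome, prob in lottery.items())

def indifferent_by_3prime(option_x, option_y, welfare, tol=1e-9):
    """True iff every individual's expected welfare matches across the options."""
    return all(
        abs(expected_welfare(option_x, w_i) - expected_welfare(option_y, w_i)) < tol
        for w_i in welfare
    )

# Hypothetical welfare levels for two individuals over three outcomes.
welfare = [
    {"o1": 1.0, "o2": 0.0, "o3": 0.5},  # individual 1
    {"o1": 0.0, "o2": 1.0, "o3": 0.5},  # individual 2
]
option_x = {"o1": 0.5, "o2": 0.5}  # 50/50 gamble between o1 and o2
option_y = {"o3": 1.0}             # o3 for certain

# Both individuals get expected welfare 0.5 either way, so 3' applies.
print(indifferent_by_3prime(option_x, option_y, welfare))  # True
```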
*As alluded to here, if your ethical ranking of choices broke one of these ties so that A ≻ B, it would do so by a real-valued difference, and by the continuity axiom, you could probabilistically mix the choice A you broke the tie in favour of with any choice C that's worse for everyone than the other choice B, and this mixture could be made better than B according to your ethical ranking, i.e. pA + (1−p)C ≻ B for any p ∈ (0,1) close enough to 1, while everyone has the opposite preference over these two choices.
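To make the footnote's mixing argument concrete, here is a numeric check with my own illustrative numbers (everyone indifferent between A and B, C worse than B for everyone, and an ethical ranking V that breaks the tie in favour of A):

```python
# Numeric illustration of the continuity/mixing argument in the footnote.
u = {  # hypothetical vNM utilities for two individuals
    "A": [1.0, 2.0],
    "B": [1.0, 2.0],  # identical to A: everyone is indifferent between A and B
    "C": [0.0, 0.0],  # worse than B for everyone
}
V = {"A": 3.5, "B": 3.0, "C": 0.0}  # ethical ranking breaking the tie: V(A) > V(B)

p = 0.95  # close enough to 1; here any p > V["B"] / V["A"] ~= 0.857 works

# The ethical ranking prefers the mixture pA + (1-p)C to B...
ethical_mix = p * V["A"] + (1 - p) * V["C"]
print(ethical_mix > V["B"])  # True (3.325 > 3.0)

# ...while every individual strictly prefers B to the mixture.
for i in range(2):
    individual_mix = p * u["A"][i] + (1 - p) * u["C"][i]
    print(individual_mix < u["B"][i])  # True for both individuals
```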