I’m very glad to see this, and I just want to add that there is a recent burst in the literature that is deepening or expanding the Harsanyi framework. There are a lot of powerful arguments for aggregation and separability, and it turns out that aggregation in one dimension generates aggregation in others. Broome’s Weighing Goods is an accessible(ish) place to start.
Fleurbaey (2009): https://www.sciencedirect.com/science/article/abs/pii/S0165176509002894
McCarthy et al (2020): https://www.sciencedirect.com/science/article/pii/S0304406820300045?via%3Dihub
A paper of mine with Zuber, about risk and variable population together: https://drive.google.com/file/d/1xwxAkZlOeqc4iMeXNhBFipB6bTmJR6UN/view
As you can see, this literature is pretty technical for now. But I am optimistic that in 10 years it will be the case both that the experts much better understand these arguments and that they are more widely known and appreciated.
Johan Gustafsson is also working in this space. This link isn’t about Harsanyi-style arguments, but is another nice path up the mountain: https://johanegustafsson.net/papers/utilitarianism-without-moral-aggregation.pdf
Thanks, this is appreciated!
It’s worth noting that while the rationality axioms on which Harsanyi’s theorem depends are typically justified by Dutch book arguments, money pumps, or the sure-thing principle, an expected utility maximizer with an unbounded utility function (e.g., risk-neutral total utilitarianism) and infinitely many possible outcomes is itself vulnerable to Dutch books and money pumps and violates the sure-thing principle. See, e.g., Paul Christiano’s comment involving St. Petersburg lotteries. To avoid the issue, you can commit to sticking with the ex ante better option, even though you know you’d later want to break the commitment. But such commitments can be used to escape other Dutch books, too: e.g., you can commit to never completing the last step of a Dutch book, or you can anticipate Dutch books and try to avoid them on a more ad hoc basis.
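As a rough illustration of the structure (a toy sketch with numbers of my own choosing, not Christiano’s exact example): a St. Petersburg lottery paying utility 2^n with probability 2^-n has divergent expected utility, so whatever finite payoff you actually realize, swapping it plus a small fee for a fresh copy of the lottery looks better in expectation, and you can be led to swap (and pay) forever.

```python
# Toy sketch (my own numbers, not Christiano's exact example): a St. Petersburg
# lottery pays utility 2**n with probability 2**-n, so its expected utility is
# 1 + 1 + 1 + ... and diverges.

def partial_expected_utility(terms: int) -> float:
    """Partial sum of the lottery's expected utility over its first `terms` outcomes."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1))  # each term equals 1

realized_utility = 2 ** 10  # suppose you actually won 1024 utility
fee = 1.0                   # small utility cost to swap for a fresh lottery

# A risk-neutral EU maximizer compares keeping 1024 with (fresh lottery minus the fee).
# The partial sums grow without bound, so the swap eventually looks better in
# expectation, and the same holds for *any* finite realized payoff: the money pump.
terms = 1
while partial_expected_utility(terms) - fee <= realized_utility:
    terms += 1
print(f"After {terms} terms, the fresh lottery minus the fee already beats keeping {realized_utility}.")
```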
The continuity axiom is also intuitive, and I’d probably accept continuity over ranges of individual utilities at least, anyway, but I wouldn’t take it to be a requirement of rationality; and without it, maximin/leximin/Rawls’s difference principle and lexical thresholds aren’t ruled out.
Hi Holden, thanks for this very nice overview of Harsanyi’s theorem!
One view that is worth mentioning on the topic of interpersonal comparisons is John Broome’s idea (in Weighing Goods) that the conclusion of the theorem itself tells us how to make interpersonal comparisons (though it presupposes that such comparisons can be made). Harsanyi’s premises imply that the social/ethical preference relation can be represented by the sum of individual utilities, given a suitable choice of utility function for each person. Broome’s view is that this provides the basis for making interpersonal comparisons of well-being:
And: “the quantity of people’s good acquires its meaning in such a way that the total of people’s good is equal to general good” (222).
I don’t think Broome is right (see my Aggregation Without Interpersonal Comparisons of Well-Being), but the view is worth considering if you aren’t satisfied with the other possibilities. I tend to prefer the view that there is some independent way of making interpersonal comparisons.
On another note: I think the argument for the existence of utility monsters and legions (note 17) requires something beyond Harsanyi’s premises (e.g., that utilities are unbounded). Otherwise I don’t see why “Once you have filled in all the variables except for U_M [or U_K], there is some value for U_M [or U_K] that makes the overall weighted sum come out to as big a number as you want.” Sorry if I’m missing something!
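To spell out the step that seems to need unbounded utilities (the notation below is mine, not the post’s): fixing everyone else’s utilities and positive weights w_i, the utility the monster would need in order to push the weighted sum to a target value T is

$$U_M \;=\; \frac{T \;-\; \sum_{i \neq M} w_i\, U_i}{w_M},$$

and such a value exists for every target T only if U_M can be arbitrarily large, i.e., only if utilities are unbounded above (and analogously unbounded below for arbitrarily bad totals).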
I haven’t gone through this whole post, but I generally like what I have seen.
I do want to advertise a recent paper I published on infinite ethics, suggesting that there are useful aggregative rules that can’t be represented by an overall numerical value, and yet take into account both the quantity of persons experiencing some good or bad and the probability of such outcomes: https://academic.oup.com/aristotelian/article-abstract/121/3/299/6367834
The resulting value scale is only a partial ordering, but I think it gets intuitive cases right, and is at least provably consistent, even if not complete. (I suspect that for infinite situations, we can’t get completeness in any interesting way without using the Axiom of Choice, and I think anything that needs the Axiom of Choice can’t give us any reason for why it rather than some alternative is the right one.)
It seems to me that no amount of argument in support of the individual assumptions, or the set of assumptions taken together, can make their repugnant conclusions more correct or palatable. It is as if Frege’s response to Russell’s paradox had been to write a book exalting the virtues of set theory. Utility monsters and utility legions show us that there is a problem either with human rationality or with human moral intuitions. If they don’t, then the repugnant conclusion does for sure, and it is an outcome of the same assumptions and the same reasoning. Personally, I refuse to bite the bullet here, which is why I am hesitant to call myself a utilitarian. If I had to bet, I would say the problem lies with assumption 2. People cannot be reduced to numbers, either when trying to describe their behavior or when trying to guide it. Appealing to an “ideal” doesn’t help, because the ideal is actually a deformed version. An ideal human might have no knowledge gaps, no biases, no calculation errors, etc., but why would their well-being be reducible to a function?
(note that I do not dispute that from these assumptions Harsanyi’s Aggregation Theorem can be proven)
It’s also worth mentioning that this Pareto efficiency assumption, applied to expected utilities over uncertain options and not just actual utilities over deterministic options, rules out (terminal) other-centered preferences for utility levels to be distributed more equally (or less equally) ex post, as well as ex post prioritarian and ex post egalitarian social welfare functions.
You would be indifferent between these two options over the utility levels of Alice and Bob (well, if you sweeten either by an arbitrarily small amount, you should prefer the sweetened one; continuity gives you indifference without sweetening):
1. 50% chance of 1 for Alice and 0 for Bob, and 50% chance of 0 for Alice and 1 for Bob.
2. 50% chance of 1 for each and 50% chance of 0 for each.
But an ex post egalitarian might prefer 2, since the utilities are more equal in each definite outcome.
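A quick numeric check of this example (the inequality penalty below is a toy rule of my own, not anything from the post): score each definite outcome by total utility minus a penalty on the gap between Alice and Bob, then take expectations.

```python
# Toy ex post egalitarian rule (my own, for illustration): value each outcome by
# total utility minus ALPHA times the Alice-Bob gap, then take the expectation.

ALPHA = 0.5  # assumed strength of the inequality penalty

def expected_total(lottery):
    """Expected sum of utilities: what HAT's premises evaluate."""
    return sum(p * (a + b) for p, a, b in lottery)

def ex_post_egalitarian(lottery, alpha=ALPHA):
    """Expectation of (total utility - alpha * |gap|) across definite outcomes."""
    return sum(p * ((a + b) - alpha * abs(a - b)) for p, a, b in lottery)

# Option 1: 50% (Alice 1, Bob 0), 50% (Alice 0, Bob 1)
option_1 = [(0.5, 1, 0), (0.5, 0, 1)]
# Option 2: 50% (both 1), 50% (both 0)
option_2 = [(0.5, 1, 1), (0.5, 0, 0)]

print(expected_total(option_1), expected_total(option_2))            # 1.0 1.0 (indifferent)
print(ex_post_egalitarian(option_1), ex_post_egalitarian(option_2))  # 0.5 1.0 (prefers option 2)
```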
Between the following two options, an ex post prioritarian or ex post egalitarian might strictly prefer the first, as well, even though the expected utilities are the same (the numbers shown are the utilities over which the expectations are taken):
1. 100% chance of 0 for Alice.
2. 50% chance of −1 for Alice and a 50% chance of 1 for Alice.
HAT’s assumptions together also rule out preferences for or against ex ante equality, and so rule out ex ante prioritarianism and ex ante egalitarianism; i.e., you should be indifferent between the following two options, even though the first seems more fair ex ante:
1. 50% chance of 1 for Alice and 0 for Bob, and 50% chance of 0 for Alice and 1 for Bob.
2. 100% chance of 1 for Alice and 0 for Bob.
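The ex ante analogue, again with a toy penalty of my own: compute each person’s expected utility first, then penalize inequality between those expectations. The coin-flip option comes out ahead, which is exactly the kind of preference HAT’s assumptions rule out.

```python
# Toy ex ante egalitarian rule (my own, for illustration): penalize inequality
# between Alice's and Bob's *ex ante* expected utilities.

ALPHA = 0.5  # assumed strength of the inequality penalty

def ex_ante_egalitarian(lottery, alpha=ALPHA):
    eu_alice = sum(p * a for p, a, b in lottery)
    eu_bob = sum(p * b for p, a, b in lottery)
    return (eu_alice + eu_bob) - alpha * abs(eu_alice - eu_bob)

option_1 = [(0.5, 1, 0), (0.5, 0, 1)]  # a fair coin decides who gets 1
option_2 = [(1.0, 1, 0)]               # Alice gets 1 for sure, Bob gets 0

print(ex_ante_egalitarian(option_1), ex_ante_egalitarian(option_2))  # 1.0 0.5 (prefers option 1)
```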
Since I first read your piece on future-proof ethics my views have evolved from “Not knowing about HAT” to “HAT is probably wrong/confused, even if I can’t find any dubious assumptions” and finally “HAT is probably correct, even though I do not quite understand all the consequences”.
I would probably not have engaged with HAT if not for this post, and now I consider it close in importance to the VNM and Cox theorems in terms of informing my worldview.
I particularly found the veil of ignorance framing very useful to help me understand and accept HAT.
I’ll probably be coming back to this post and mulling it over to understand better what the heck I have committed to.
I think you need “prefers that situation for themselves”. Otherwise, imagine person X who is a utilitarian—they’ll always prefer a better world, but most ways of making the world better don’t “benefit X”.
Then, unfortunately, we run into the problem that we’re unable to define what it means to prefer something “for yourself”, because we can no longer use (even idealised) choices between different options.
Good point, thanks! Edited.
This seems incorrect. Rather, it is your four assumptions that “lead naturally” to utilitarianism. It would not be hard for a deontologist to be other-focused simply by emphasizing the a priori normative duties that are directed towards others (I am thinking here of Kant’s matrix of duties: perfect/imperfect and towards self/towards others). The argument can even be made, and often is, that the duties one has towards oneself are meant to allow one to benefit others (e.g., skill development). If by other-focused you mean abstracting from one’s personal preferences, values, culture, and so forth, deontology might be the better choice, since its use of a priori reasoning places it behind the veil of ignorance by default.