Then, if you extend these comparisons to satisfy the independence of irrelevant alternatives (by stating that all permissible options in an option set are strictly better than all impermissible options, regardless of the option set, so that these rankings extend beyond any particular option set), the result is antifrustrationism. To show this, consider the following set of three options, which are identical except in the ways specified:
and since B is impermissible because of the presence of A, this means C>B, and so it’s always better for a preference to not exist than for it to exist and not be fully satisfied, all else equal.
The assumption that “under constant conditions and at equilibrium, the expected value of the average welfare in the wild is at most 0” was used for that last inference. I’ll update the intro to make this more explicit. Thanks!
This makes sense, but the type of things that tend to convince me to believe in an ethical theory generally depends a lot on how much I resonate with the main claims of the theory. When I look at the premises in this theorem, none of them seem to be the type of things that I care about.
If you want to deal with moral uncertainty with credences, you could assign each of the 3 major assumptions an independent credence of 50%, so this argument would tell you that you should be utilitarian with credence at least (1/2)^3 = 1/8 = 12.5%. (Assigning independent credences might not actually make sense, in case you have to deal with contradictions with other assumptions.)
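As a minimal sketch of the arithmetic, assuming the three major premises really are assigned independent credences of 50% each:

```python
# Toy credence calculation (assumes independence of the three premises).
credence_per_premise = 0.5
n_premises = 3

# Joint credence in all premises holding at once; a lower bound on
# credence in the conclusion, given the deductive argument.
joint = credence_per_premise ** n_premises
# joint == 0.125, i.e. 12.5%
```

Of course, if the premises are positively or negatively correlated, the joint credence could be higher or lower than this product.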
On the other hand, pointing out that utilitarians care about people and animals, and want them to be as happy as possible (and free, with agency and desire satisfaction), makes me happy to endorse the theory. When I think about all people and animals being happy and free from pain in a utilitarian world, I get a positive feeling.
Makes sense. For what it’s worth, this seems basically compatible with any theory which satisfies the Pareto principle, and I’d imagine you’d also want it to be impartial (symmetry). If you also assume real-valued utilities, transitivity, independence of irrelevant alternatives, continuity and independence of unconcerned agents, you get something like utilitarianism again. In my view, independence of unconcerned agents is doing most of the work here, though.
I want to point out that both assumptions 2, and 1 and 3 together have been objected to by academic philosophers.
Assumption 2 is ex post consequentialism: maximize the expected value of a social welfare function. Ex ante prioritarianism/egalitarianism means rejecting 2: we should be fair to individuals with respect to their expected utilities, even if this means overall worse expected outcomes. This is, of course, vNM irrational, but Diamond defended it (and see my other comment here). Essentially, even if two outcomes are equally valuable, a probabilistic mixture of them can be more valuable because it gives people fairer chances; this is equality of opportunity. This contradicts the independence axiom specifically for vNM rationality (and so does the Allais paradox).
Assumptions 1 and 3 together are basically a weaker version of ex ante Pareto, according to which it’s (also) better to increase the expected utility of any individual(s) if it comes at no expected cost to any other individuals. Ex post prioritarianism/egalitarianism means rejecting the conjunction of 1 and 3, and ex ante Pareto: we should be more fair to individuals ex post (we want more fair actual outcomes after they’re determined), even if this means worse individual expected outcomes.
There was a whole issue of Utilitas devoted to prioritarianism and egalitarianism in 2012, and, notably, Parfit defended prioritarianism in it, arguing against ex ante Pareto (and hence the conjunction of 1 and 3):
When Rawls and Harsanyi appeal to their versions of Veil of Ignorance Contractualism, they claim that the Equal Chance Formula supports the Utilitarian Average Principle, which requires us to act in ways that would maximize average utility, by producing the greatest sum of expectable benefits per person. This is the principle whose choice would be rational, in self-interested terms, for people who have equal chances of being in anyone’s position.
We can plausibly reject this argument, because we can reject this version of contractualism. As Rawls points out, Utilitarianism is, roughly, self-interested rationality plus impartiality. If we appeal to the choices that would be rational, in self-interested terms, if we were behind some veil of ignorance that made us impartial, we would expect to reach conclusions that are, or are close to being, Utilitarian. But this argument cannot do much to support Utilitarianism, because this argument’s premises are too close to these conclusions. Suppose that I act in a way that imposes some great burden on you, because this act would give small benefits to many other people who are much better off than you. If you object to my act, I might appeal to the Equal Chance Formula. I might claim that, if you had equal chances of being in anyone’s position, you could have rationally chosen that everyone follows the Utilitarian Principle, because this choice would have maximized your expectable benefits. As Scanlon and others argue, this would not be a good enough reply. You could object that, when we ask whether some act would be wrong, we are not asking a question about rational self-interested choice behind a veil of ignorance. Acts can be wrong in other ways, and for other reasons.
He claimed that we can reject ex ante Pareto (“Probabilistic Principle of Personal Good”), in favour of ex post prioritarianism/egalitarianism:
Even if one of two possible acts would be expectably worse for people, this act may actually be better for these people. We may also know that this act would be better for these people if they are worse off. This fact may be enough to make this act what we ought to do.
Here, by “worse off” in the second sentence, he meant in a prioritarian/egalitarian way. The act is actually better for them, because the worse off people under this act are better off than the worse off people under the other act. He continued:
We can now add that, like the Equal Chance Version of Veil of Ignorance Contractualism, this Probabilistic Principle has a built-in bias towards Utilitarian conclusions, and can therefore be rejected in similar ways. According to Prioritarians, we have reasons to benefit people which are stronger the worse off these people are. According to Egalitarians, we have reasons to reduce rather than increase inequality between people. The Probabilistic Principle assumes that we have no such reasons. If we appeal to what would be expectably better for people, that is like appealing to the choices that it would be rational for people to make, for self-interested reasons, if they had equal chances of being in anyone’s position. Since this principle appeals only to self-interested or prudential reasons, it ignores the possibility that we may have impartial reasons, such as reasons to reduce inequality, or reasons to benefit people which are stronger the worse off these people are. We can object that we do have such reasons.
When Rabinowicz pointed out that, in cases like Four, Prioritarians must reject the Probabilistic Principle of Personal Good, he did not regard this fact as counting against the Priority View. That, I believe, was the right response. Rabinowicz could have added that similar claims apply to Egalitarians, and to cases like Two and Three.
Another one for bees: information integration across or generalization between senses.
“Bumble bees display cross-modal object recognition between visual and tactile senses” by Cwyn Solvi, Selene Gutierrez Al-Khudhairy and Lars Chittka.
Humans excel at mental imagery, and we can transfer those images across senses. For example, an object out of view, but for which we have a mental image, can still be recognized by touch. Such cross-modal recognition is highly adaptive and has been recently identified in other mammals, but whether it is widespread has been debated. Solvi et al. tested for this behavior in bumble bees, which are increasingly recognized as having some relatively advanced cognitive skills (see the Perspective by von der Emde and Burt de Perera). They found that the bees could identify objects by shape in the dark if they had seen, but not touched, them in the light, and vice versa, demonstrating a clear ability to transmit recognition across senses.
Many animals can associate object shapes with incentives. However, such behavior is possible without storing images of shapes in memory that are accessible to more than one sensory modality. One way to explore whether there are modality-independent internal representations of object shapes is to investigate cross-modal recognition—experiencing an object in one sensory modality and later recognizing it in another. We show that bumble bees trained to discriminate two differently shaped objects (cubes and spheres) using only touch (in darkness) or vision (in light, but barred from touching the objects) could subsequently discriminate those same objects using only the other sensory information. Our experiments demonstrate that bumble bees possess the ability to integrate sensory information in a way that requires modality-independent internal representations.
I think if you believe the conditions of the theorem are all plausible or desirable and so give them some weight, then you should give the conclusion some weight, too.
For example, it’s unlikely that anyone’s ethical rankings actually satisfy the vNM rationality conditions in practice. But suppose you give some weight to each of the claims that we should have ethical rankings that are complete, continuous with respect to probabilities (which are assumed to work in the standard way), satisfy the independence of irrelevant alternatives, and avoid all theoretical (weak) Dutch books, and you also give weight to the combination of these conditions at once*. Then the Dutch book results give you reason to believe you should satisfy the vNM rationality axioms, since if you don’t, you can get (weakly) Dutch booked in theory. I think you should be at least as sympathetic to the conclusion of a theorem as you are to the combination of all of its assumptions, if you accept the kind of deductive logic used in the proofs.
*I might be missing more important conditions.
I think this is an important point. People might want to start with additional or just different axioms, including, as you say, avoiding the repugnant conclusion, and if they can’t all together be consistent, then this theorem may unjustifiably privilege a specific subset of those axioms.
I do think this is an argument for utilitarianism, but more like in the sense of “This is a reason to be a utilitarian, but other reasons might outweigh it.” I think it does have some normative weight in this way.
Also, independence of irrelevant alternatives is safer to give up than transitivity, and might accomplish most of what you want. See my other comment.
1. Each individual in the group is rational (for a commonly used but technical definition of “rational”, hereafter referred to as “VNM-rational”)
2. The group as a whole is VNM-rational
3. If every individual in the group is indifferent between two options, then the group as a whole is indifferent between those two options
One way of motivating 3 is by claiming (in the idealistic case where everyone’s subjective probabilities match, including the probabilities that go with the ethical ranking):
a. Individual vNM utilities track welfare and what’s better for individuals; denying this is paternalistic. We should trust people’s preferences when they’re rational, since they know what’s best for themselves.
b. When everyone’s preferences align, we should trust their preferences, and again, not doing so is paternalistic, since it would (in principle) lead to choices that are dispreferred by everyone, and so worse for everyone, according to a.*
As cole_haus mentioned, a could actually be false, and a motivates b, so we’d have no reason to believe b either if a were false. However, if we use some other real-valued conception of welfare and claim what’s good for individuals is maximizing its expectation, then we could make an argument similar to b (replacing “dispreferred by everyone” with “worse in expectation for each individual”) to defend the following condition, which recovers the theorem:
3′. If each individual’s expected welfare is the same in two options, then we should be ethically indifferent between the options.
*As alluded to here, if your ethical ranking of choices broke one of these ties so that A ≻ B, it would do so with a real-valued difference, and by the continuity axiom, you could probabilistically mix the choice A you broke the tie in favour of with any choice C that’s worse for everyone than the other choice B, and this mixture could be made better than B according to your ethical ranking, i.e. pA + (1−p)C ≻ B for any p ∈ (0,1) close enough to 1, while everyone has the opposite preference over these two choices.
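A numerical sketch of this continuity argument, with hypothetical utilities chosen for illustration (everyone is indifferent between A and B, C is worse than B for everyone, and the group’s ethical ranking breaks the tie in favour of A):

```python
# Hypothetical individual vNM utilities for two people, "alice" and "bob".
u_A = {"alice": 1.0, "bob": 2.0}
u_B = {"alice": 1.0, "bob": 2.0}   # individuals indifferent between A and B
u_C = {"alice": 0.0, "bob": 0.0}   # C is worse than B for everyone

# The group's (ethical) vNM values break the tie: A > B, and C is worst.
V_A, V_B, V_C = 1.0, 0.0, -1.0

p = 0.99                           # mixing probability close to 1
V_mix = p * V_A + (1 - p) * V_C    # group value of the mixture pA + (1-p)C
assert V_mix > V_B                 # the group prefers the mixture to B...

for i in u_A:
    u_mix = p * u_A[i] + (1 - p) * u_C[i]
    assert u_mix < u_B[i]          # ...while every individual prefers B
```

So the ethical ranking prefers a mixture that every individual disprefers to B, which is the sense in which breaking such ties is "paternalistic" here.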
Why should morality be based on group decision-making principles? Why should I care about VNM rationality of the group?
I’ve retracted my previous reply. The original 2nd condition is different from ex ante Pareto; it’s just vNM rationality with respect to outcomes for social/ethical preferences/views and it says nothing about the relationship between individual preferences and social/ethical ones. It’s condition 3 that connects individual vNM utility and social/ethical vNM utility.
I think this last point essentially denies the third axiom above, which is what connects individual vNM utility and social/ethical preferences. (The original statement of the second axiom is just vNM rationality for social/ethical preferences, and has no relationship with the individuals’ preferences.)
I used preferences about restaurants as an example because that seemed like something people can relate to easily, but that’s just an example. The theorem is compatible with hedonic utilitarianism. (In that case, the theorem would just prove that the group’s utility function is the sum of each individual’s happiness.)
In this case, I think it’s harder to argue that we should care about ex ante expected individual hedonistic utility and for the 1st and 3rd axioms, because we had rationality based on preferences and something like Pareto to support these axioms before, but we could now just be concerned with the distribution of hedonistic utility in the universe, which leaves room for prioritarianism and egalitarianism. I think the only “non-paternalistic” and possibly objective way to aggregate hedonistic utility within an individual (over their life and/or over uncertainty) would be to start from individual preferences/attitudes/desires but just ignore concerns not about hedonism and non-hedonistic preferences, i.e. an externalist account of hedonism. Roger Crisp defends internalism in “Hedonism Reconsidered”, and defines the two terms this way:
Two types of theory of enjoyment are outlined: internalism, according to which enjoyment has some special “feeling tone”, and externalism, according to which enjoyment is any kind of experience to which we take some special attitude, such as that of desire.
Otherwise, I don’t think there’s any reason to believe there’s an objective common cardinal scale for suffering and pleasure, even if there were a scale for suffering and a separate scale for pleasure. Suffering and pleasure don’t use exactly the same parts of the brain, and suffering isn’t just an “opposite” pattern to pleasure. Relying on mixed states, i.e. observing judgements when both suffering and pleasure are happening at the same time, might seem promising, but these judgements happen at a higher level and probably wouldn’t be consistent between people; e.g., you could have two people with exactly the same suffering and pleasure subsystems but with different aggregating systems.
I’m personally more sympathetic to externalism. With antifrustrationism (there are actually arguments for antifrustrationism; see also my comment here), externalism leads to a negative hedonistic view (which I discuss further here).
It doesn’t have to be the group, it can be an impartial observer with their own social welfare function, as long as it is increasing with individual expected utility, i.e. satisfies ex ante Pareto. Actually, that’s how it was originally stated.
EDIT: woops, condition 2 is weaker than ex ante Pareto; it’s just vNM rationality with respect to outcomes for social/ethical preferences/views. It’s condition 3 that connects individual vNM utility and social/ethical vNM utility.
I would actually say that (1/2)(2,0) + (1/2)(0,2) being equivalent to (2,0) and (0,2) is in contradiction with equality of opportunity. In the first case, both individuals have an equal chance of being well-off (getting 2), but in the second and third, only one has any chance of being well-off, so the opportunities to be well-off are only equal in the first case (essentially the same objection to essentially the same case is made in “Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparison of Utility: Comment”, in which Peter Diamond writes “it seems reasonable for the individual to be concerned solely with final states while society is also interested in the process of choice”). This is what ex ante prioritarianism/egalitarianism is for, but it can lead to counterintuitive results. See the comments on that post, and “Decide As You Would With Full Information! An Argument Against Ex Ante Pareto” by Marc Fleurbaey & Alex Voorhoeve.
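A sketch of this case: under expected-total evaluation the fair-coin lottery and the sure outcomes are all equally good, but an ex ante prioritarian, who applies a concave transform to each person’s *expected* utility, ranks the lottery strictly higher. The square-root transform here is just an illustrative choice of concave weighting, not anything from Diamond’s paper.

```python
import math

# Distributions over outcomes; each outcome is a utility pair (person 1, person 2).
lottery = [(0.5, (2, 0)), (0.5, (0, 2))]   # fair coin between (2,0) and (0,2)
sure = [(1.0, (2, 0))]                     # (2,0) for certain

def expected_total(dist):
    # Expected sum of utilities (ex post total / utilitarian evaluation).
    return sum(p * sum(outcome) for p, outcome in dist)

def ex_ante_prioritarian(dist, f=math.sqrt):
    # Concave transform applied to each person's expected utility.
    n = len(dist[0][1])
    exp_u = [sum(p * outcome[i] for p, outcome in dist) for i in range(n)]
    return sum(f(u) for u in exp_u)

# Both distributions have expected total 2.0, so a utilitarian is indifferent.
assert expected_total(lottery) == expected_total(sure) == 2.0

# The lottery gives each person expected utility 1, so it scores
# sqrt(1) + sqrt(1) = 2, versus sqrt(2) + 0 ≈ 1.41 for the sure outcome.
assert ex_ante_prioritarian(lottery) > ex_ante_prioritarian(sure)
```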
For literature on equality of outcomes and uncertainty, the terms to look for are “ex post egalitarianism” and “ex post prioritarianism” (or with the hyphen as “ex-post”, but I think Google isn’t sensitive to this).
Thanks for writing this!
I don’t think the theorem provides support for total utilitarianism, specifically, unless you add extra assumptions about how to deal with populations of different sizes or different populations generally. Average utilitarianism is still consistent with it, for example. Furthermore, if you don’t count the interests of people who exist until after they exist or unless they come to exist, it probably won’t look like total utilitarianism, although it gets more complicated.
You might be interested in Teruji Thomas’ paper “The Asymmetry, Uncertainty, and the Long Term” (EA Forum post here), which proves a similar result from slightly different premises, but is compatible with all of 1) ex post prioritarianism, 2) mere addition, 3) the procreation asymmetry, 4) avoiding the repugnant conclusion and 5) avoiding antinatalism, all five at the same time, because it sacrifices the independence of irrelevant alternatives (the claim that how you rank choices should not depend on what choices are available to you, not the vNM axiom). Thomas proposes beatpath voting to choose actions. Christopher Meacham’s “Person-affecting views and saturating counterpart relations” also provides an additive calculus which “solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox” and satisfies the asymmetry, also by giving up the independence of irrelevant alternatives, but hasn’t, as far as I know, been extended to deal with uncertainty.
I’ve also written about ex ante prioritarianism in the comments on the EA Forum post about Thomas’ paper, and in my own post here (with useful feedback in the comments).
Some discussion here, too.
Might not really matter now given her chances, but she did an interview with VegNews:
For me, deciding to be vegetarian is rooted in a very strong spiritual foundation as a practicing Hindu—and an awareness and a care and compassion for all living beings. So, more recently, in the last few years—just as I became more aware of the unethical treatment of animals in the dairy industry especially—it caused me to really think about some of the changes I could make to lessen that negative impact on animals as well as the environment.
VN: Switching gears, what changes would you want to see for animals legally? TG: Factory farms have to be a thing of the past. Throughout the time I’ve spent in Iowa, we’ve seen the horrifying ways animals are treated in these farms and the incredible, ravaging impact that it has on the communities where these farms are located. Supporting more ethical and organic farming has to be the place that we go when it comes to farming. Ending animal testing. Ending the inhumane treatment of animals, whether it is for cosmetic purposes or other purposes. Science is showing us that even for those kinds of testing that may be required, there’s absolutely no reason or justification for this to continue to occur in the use of animals. We need to ban puppy mills. These commercial breeding factories full of animals that don’t put an emphasis on animals’ well-being—and really is a purely profit-driven, greed-based business—is leading to more dogs who are just actually in need of homes, and filling up shelters and ending up in a very terrible situation. I think another one is a huge issue—but not maybe striking a chord with everyone because people are not aware of it—is ending the trophy hunting that’s happening, and making it so that it is not a cultural norm that we accept in this society. There’s a long list of things we need to do, but I think these are at the top of the list.
VN: What about culturally and societally? In what ways do you want to see our relationships to animals shift? TG: When people talk about their dogs as their best friends, or the cats in their house, or the horses that they have on their ranch … I would love to see that same kind of relationship that people have with their animals extended to all animals. That you’ve got to respect animals. That you know and understand that animals have incredible feelings and emotions and, just as our dogs are happy to see us when we come home, we need to understand and appreciate that relationship with all animals and respecting them as sentient beings that are like us. They are a very integral part of our ecosystem.
Some discussion here, too, in the context of introducing s-risks:
One important point I think worth highlighting about the numbers is their differential growth rates. That is, for instance, not only are there many more farmed fish than pigs or cows but the annual increase in the number of farmed fish is much greater than that for pigs or cows
Agreed that this is very important. The scale of a problem should be defined to include (your projections for) its total over time that you think your actions could influence. Relatively few animals could be used in a given country now, but because of expected growth, the scale could actually be huge, and our cost-effectiveness estimates should take such projections into account.
Monte Carlo simulations independently performed by Warren Smith and Jameson Quinn generally find that approval voting has higher VSE than instant-runoff voting, and that both approval voting and instant-runoff voting have much higher VSE than plurality voting.
A priori, I think this could end up being quite sensitive to the distributions of votes they used. Did they choose them based on surveys/polls of voter preferences?
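To make the sensitivity concern concrete, here is a minimal Monte Carlo VSE sketch under one particular (assumed) voter model, “impartial culture”: every voter’s utility for every candidate is drawn i.i.d. from a standard normal. The approval strategy assumed here (approve every candidate above your mean utility) is also just one modelling choice; the simulations by Smith and Quinn use richer voter and strategy models, which is exactly why results could depend on those distributions.

```python
import random

def vse(method, n_voters=100, n_cands=5, trials=2000, seed=0):
    """Voter satisfaction efficiency: how close the winner's total utility
    is to the best candidate's, relative to a randomly chosen winner."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(trials):
        # Impartial culture: i.i.d. standard normal utilities.
        u = [[rng.gauss(0, 1) for _ in range(n_cands)] for _ in range(n_voters)]
        totals = [sum(v[c] for v in u) for c in range(n_cands)]
        winner = method(u, n_cands)
        avg = sum(totals) / n_cands   # expected total utility of a random winner
        num += totals[winner] - avg
        den += max(totals) - avg
    return num / den

def plurality(u, n_cands):
    # Each voter votes for their single favourite candidate.
    votes = [0] * n_cands
    for v in u:
        votes[v.index(max(v))] += 1
    return votes.index(max(votes))

def approval(u, n_cands):
    # Assumed strategy: approve every candidate above your mean utility.
    votes = [0] * n_cands
    for v in u:
        mean = sum(v) / n_cands
        for c in range(n_cands):
            if v[c] > mean:
                votes[c] += 1
    return votes.index(max(votes))
```

Under this model, `vse(approval)` comes out above `vse(plurality)`, consistent with the simulations cited; the interesting question is how much that gap moves when the utility distributions are fit to real polling data instead.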