Partial Aggregation’s Utility Monster


Author’s Note: this is a somewhat older post of mine, but I’m interested in getting more feedback and interaction on it. Partly this is because it is fairly philosophically dense, which is closest to the kind of work I hope to do more of professionally (I would like to better understand both how strong my points are and how legible they are to someone who doesn’t read philosophy papers much), and partly because I think aggregation is one of the most important topics in ethics in general, one which has been too neglected by EAs as a rule, but which might get more discussion, for instance, in the wake of Andreas Mogensen’s recent 80k interview. Other similarly dense issues like population ethics seem to have gotten much more discussion, but I think population ethics is less fundamental to EA than aggregation itself, and frankly its bullets are, in my opinion, softer. This specific piece gets into one specific problem with one specific framing of one specific solution to aggregation, which makes it perhaps a little niche in interest, but it might at least serve as a small contribution to keeping the topic on EAs’ radar. That said, I think the views I discuss are perhaps the most popular broad approaches, both to what is wrong with pure aggregation and to the overall shape of how one should fix it.

It is hard for me to imagine many practical ethics parallels to this specific thought experiment, but I want to try a bit, and I can imagine at least one: a way someone might use similar reasoning to defend factory farms. I have debated at least one person who thinks that animal welfare should matter, but that humans gain enough pleasure from eating animal products that factory farming is still justified. This seems on its face like a ludicrous argument to me on any reasonable assumptions about the relevant interests: non-humans outnumber humans quite a bit at any given time, each animal’s life is extremely miserable in each moment, and vegan lives just don’t seem like they can possibly be that much worse than omnivore lives. One possible response, if one wanted to rescue an argument like this, is that the welfare humans get from animal products in each moment is on average quite a bit less significant than the torment of the animals, but that the humans live much longer lives than each individual animal. I find this dubious, but one could appeal to it to argue that the per-individual interest of each person in eating animal products is greater in aggregate than the per-individual interest of each animal in not being born into a factory farm. One might further argue, I think even more dubiously, that the difference between these interests is great enough that a partial aggregationist should tolerate any number of typical factory-farmed animal lives in exchange for any number of typical humans eating the products of these farms. Even if this math doesn’t work out on existing farms, if you could shorten the lives of each animal enough, even at some cost in both number of lives and severity of suffering in each moment, it would work out at some point. My post can be seen as a recreation of the basic logic behind this point, one that shows its extreme repugnance in human cases.

My point of greatest uncertainty with this write-up is the separateness of persons part. Although I have often seen appeals to the separateness of persons that look like the asymmetry I describe, I have not read much dedicated, article-length work on the separateness of persons, and am personally unsympathetic to it. Given this, I should concede both that there are appeals to the separateness of persons that are not framed as being about aggregation, but rather as being about more conventional side constraints, and that there might be interpretations of the separateness of persons in the context of aggregation that escape some element of my critique. I discuss some such interpretations later in the piece. For my purposes, you can translate “separateness of persons” in the context of my thought experiment as “we determine benefits to individuals using aggregation, but then can determine the distribution of these benefits between people in some non-aggregative way.”

Finally, as with my other recent pieces, I want to preface this with a summary of the article’s structure, in case it improves the reader’s experience:

Part I: I describe pure aggregation, partial aggregation, and the separateness of persons, along with some initial dilemmas related to each, in order to set up the thought experiment of the article.

Part II: I ratchet up to my main thought experiment by first describing the counterintuitive consequences of pure aggregation within a single life, then comparing two different lives with the same extremes of aggregation applied within each, and finally applying partial aggregation to this comparison to get my final thought experiment.

Part III: I describe some ways one might try to escape this implication while keeping most of the starting premises intact, explain why all of these seem inadequate to me, and conclude with my own thoughts on how to deal with such dilemmas.

Author’s Note: this post is based on my final paper, “Troubles in Grounding Principles of Partial Aggregation”.

I.

I recently wrote a paper for class on problems with the family of theories attempting to formalize “partial aggregation”, as described by philosophers like Alex Voorhoeve and Victor Tadros. I thought most of the paper was a bit of a mess: I had a hard time communicating and clarifying my points, and in the process of working on them I came up with many objections to specific points I made that I didn’t have time to flesh out responses to. There was one thought experiment in it, however, that got a positive reception from pretty much everyone I showed it to, I think because it presents an apparently devastating reductio against the set of assumptions it is derived from. I decided it warranted its own write-up, independent of the less successful elements of the paper.

So, first, to introduce the concepts involved in this thought experiment. I have written about pure aggregation and its issues before, but to briefly recap: aggregation is the principle that, in moral terms, there is something comparable between one person experiencing two units of pain and two people each experiencing one unit of pain. (You can also have an aggregative theory that places different significance on pain depending on its level, such as prioritarianism; in those cases it is better thought of as two people each experiencing one unit of intrinsic disvalue versus one person experiencing two units of intrinsic disvalue, even if that corresponds to something more like one unit of pain versus 1.5 units of pain.) This can get notoriously counterintuitive in cases in which there is aggregative equivalence between two pools you could help, and the difference in the benefits you could give each is very, very large, as for instance described in this famous post on torture versus dustspecks.
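
To make the distinction concrete, here is a minimal sketch in Python (purely my own illustration; the numbers and the exponent in the prioritarian weighting are arbitrary assumptions, not drawn from any particular theory):

```python
# Pure aggregation sums pain directly; a prioritarian view first applies a
# convex transform to each person's pain, so the same total pain counts as
# more intrinsic disvalue when it is concentrated in one person.

def total_disvalue(pains, weight=lambda p: p):
    """Sum per-person pains under a weighting (identity = pure aggregation)."""
    return sum(weight(p) for p in pains)

print(total_disvalue([1, 1]))  # 2: two people, one unit of pain each
print(total_disvalue([2]))     # 2: one person, two units -- a tie, purely aggregatively

prioritarian = lambda p: p ** 1.5            # an assumed convex weighting
print(total_disvalue([1, 1], prioritarian))  # 2.0
print(total_disvalue([2], prioritarian))     # ~2.83: concentrated pain counts for more
```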

Conventional deontology doesn’t fix this issue either, unless you subscribe to an incredibly extreme version of deontology under which beneficence has no moral importance and all side-constraint violations are equally bad. Otherwise, you will still run into situations in which you can choose to benefit one of two pools in a way that violates no side-constraints, and may have to choose aggregatively, or in which you can be judged to be at more or less fault depending on how severe a constraint you violated and for how many people. What is needed to “fix” aggregation, as a rule, is some distributive theory that competes with it in these cases directly.

Fully non-aggregative theories aren’t very appealing either, though. Pure leximin, for instance, has the apparently terrible implication that if person A is being tortured for a million years and person B is being tortured for 999,999 years, you should prefer to save A from one second of their torture over saving B from all of theirs. Versions of this that look at how much you can benefit someone, rather than how badly off someone is, avoid this implication, but still wind up being counterintuitive. For instance, if you have rented out heaven for a million years, and you can either send all of humanity there after one year of waiting, or send Bob, and only ever Bob, there immediately, such a theory says you ought to send Bob to heaven for a million years rather than wait and send everyone there for 999,999 years.
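
For concreteness, here is a toy rendering of that leximin comparison (my own formalization; welfare is measured as negative years of torture, and all numbers are stand-ins):

```python
# Leximin ranks outcomes by the worst-off position first, breaking ties by
# the next-worst, and so on. Comparing sorted welfare vectors with Python's
# lexicographic list comparison implements exactly this ordering.

def leximin_prefers(a, b):
    """True if outcome a is leximin-preferred to outcome b (lists of welfare levels)."""
    return sorted(a) > sorted(b)

one_second = 1 / (365.25 * 24 * 3600)  # one second, expressed in years

# A is tortured for 1,000,000 years, B for 999,999 (welfare = -years of torture).
spare_B_entirely   = [-1_000_000, 0]
spare_A_one_second = [-1_000_000 + one_second, -999_999]

print(leximin_prefers(spare_A_one_second, spare_B_entirely))  # True: leximin saves A
```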

And that is where partial aggregation comes in. Partially aggregative theories appear to give you everything you want in these situations. If you can benefit the members of each group a similar amount, you aggregate claims to decide which group to help. If the groups have claims that are too far apart, you always help the one with the stronger per-individual claim.
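
Structurally, the rule looks something like the following sketch (a toy formalization of my own; the relevance threshold of 100 and the claim strengths are arbitrary assumptions, since real accounts of partial aggregation leave these deliberately vague):

```python
# Claims within the relevance factor of the strongest competing claim all
# aggregate; claims falling outside it are "irrelevant" no matter how numerous.

RELEVANCE_FACTOR = 100  # assumed threshold

def choose_group(claims_a, claims_b):
    """Return which group of per-individual claim strengths to help."""
    strongest = max(claims_a + claims_b)
    relevant_sum = lambda cs: sum(c for c in cs if c * RELEVANCE_FACTOR >= strongest)
    return 'A' if relevant_sum(claims_a) >= relevant_sum(claims_b) else 'B'

# Comparable claims: ordinary aggregation decides (27 beats 20).
print(choose_group([10, 10], [9, 9, 9]))      # 'B'
# Torture (1,000) vs. a million dustspecks (0.001 each): the specks never add up.
print(choose_group([1000], [0.001] * 10**6))  # 'A'
```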

The other principle crucial to my thought experiment is the “separateness of persons”, to which non-consequentialist opposition to pure aggregation often appeals. The idea is that aggregation may make sense within a life, because any trade-off between parts of that life will be directly experienced by the same subject (you may suffer pain now for the benefit of some later point in your life, but that is alright because the you who experiences this pain will also get to experience that benefit). The supposed confusion of the utilitarian is to apply this between lives, which do not have the same subject of experience, and which only have one shot at a good existence. Where it is justified to trade off within a life, it is not necessarily justified to trade off between lives. I believe this was most influentially spelled out by John Rawls in chapter I, part 5 of “A Theory of Justice”, and it has been highly influential on other opponents of utilitarian aggregation with contractualist leanings; Voorhoeve’s account of partial aggregation, at least, explicitly appeals to the “separateness of persons” as part of its grounding.

Both of these positions, the separateness of persons and partial aggregation, have some basic issues in my opinion. Theories of partial aggregation, for instance, have straightforwardly appealing implications in cases where:

  1. You can choose between benefiting two different groups.

  2. The strength of the claims of each individual in a particular group is roughly the same as the others in that group.

  3. You can provide this benefit with relative certainty.

  4. You are only making this choice once, or at least you are focusing on the application of this principle to one decision in isolation.

Ambiguities and bizarre implications can be derived for pretty much any theory of partial aggregation I have seen so far by taking the theory outside one of these conditions in the right way. I won’t go through these arguments, but I think Derek Parfit (by tweaking condition 1) and Joe Horton (by tweaking condition 2) in particular have highlighted especially strong problems for partial aggregation. And yet, partial aggregationists like Voorhoeve and Tadros have seemed very willing to bite what I see as devastating bullets in each case highlighted by opponents. I take this, at least in part, to be because the answer partial aggregation gives in the cases it works best for (where conditions 1-4 hold) is so desirable compared to the alternatives.

The separateness of persons, likewise, seems to me to have serious problems. One of the clearest, as William MacAskill has mentioned, is that it is just not clearly true, and if it is false, it does seem to undermine many commonsense non-consequentialist principles. If it is true, on the other hand, it is not clear that it automatically rules out utilitarianism; it would just undermine utilitarianism’s competitors less. A somewhat more novel critique of mine is that it does not seem to actually account for our intuitions against aggregation. As I hope to show, by only partially addressing these intuitions, it gives a fingerhold for pulling it in some highly repugnant directions.

II.

I will allow that it feels more immediately intuitive that it is wrong to trade off a large number of irritated eyes for intense torture than that it is imprudent to cure a very long-lived person’s occasionally irritated eyes with a procedure that induces days of similar torment. And yet, I think it is very intuitive that the latter case is still quite repugnant. Insofar as people find it less repugnant, I think that is largely because non-consequentialists often feel you shouldn’t force someone to be prudent, so the practical relevance of this idea of prudence is fairly academic if the person in question does not actually want to undergo this torturous procedure. This is an area where I have found many people, including opponents of utilitarian aggregation, are ready to bite the bullet and say that aggregation, even extreme aggregation, can determine what is beneficial within a life. I think this is a mistake, or at least that it is a mistake to treat this as significantly less repugnant at the extremes than interpersonal aggregation.

To draw on a thought experiment I mentioned in an earlier talk, imagine we developed a “painkiller” that spread a given experience of pain out over a very, very long time, at a barely noticeable level of increased pain in each moment. I contend that to most people this would intuitively be an effective painkiller, one someone would be happy to use during a painful surgery, for instance. I further contend that if such a painkiller had somewhat less risk of complications than a more conventional anesthetic, it could become the standard prescription in such cases, considered better even if it doesn’t alter the aggregate pain at all. I still further contend that any philosopher who wrote a thinkpiece about this painkiller arguing that it was no good, that it wasn’t even really a painkiller, and that it was imprudent to choose it over the conventional anesthetic for these reasons, would be viewed by most people as laughably academic.

I want to emphasize here that I am not trying to prove that the aggregative answer is wrong in this case, but merely that we are uncomfortable with intrapersonal aggregation, and not only interpersonal aggregation. I think this on its own should cast some doubt on whether it is really the interpersonal aspect of the aggregation that bothers us in the situations partial aggregation seeks to address, and so on whether the “separateness of persons” is a good premise for anti-aggregation intuitions. But again, I think this is a bullet many could see themselves biting, especially since it really does feel like there is at least some difference.

So let’s keep pushing on this asymmetry a bit. Imagine combining the intrapersonal and interpersonal elements. Let’s say that there is some extremely long-lived being who has a minor eye problem, something like having slightly blurry vision for a few minutes after waking up each morning. You are a doctor, and you have a serum that can either cure this being’s condition for the rest of their life, or cure another patient of a condition that induces unceasing, torture-level suffering for days. There is some length of time this long-lived being could live for (to put a number on it, let’s just say a trillion years), such that you ought to give the serum to them rather than to the tortured patient.

This seems to me even more repugnant, and yet the conventional separateness of persons doesn’t seem to recognize it as a problem, and many who reject torture versus dustspecks have philosophical commitments that would imply you ought to give the serum to the long-lived being. If you can bite even this bullet, however, and are a principled enemy of aggregation, partial aggregation makes this case far, far worse. There is some factor of difference between the severity of cases such that, under partial aggregation, no number of cases of the lesser severity can outweigh the claim of the greater severity. Again, let’s put some arbitrary number on it, say a factor of 100. Those who have stuck with the separateness of persons and partial aggregation so far seem to be committed to the claim that, if this being lived for 100 trillion years, you ought to help them rather than infinitely many people with the torture condition.
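
To make the arithmetic explicit (using my stipulated numbers, which are of course arbitrary), write $C_{\text{being}}(y)$ for the being’s aggregate claim over a lifespan of $y$ years and $C_{\text{torture}}$ for one patient’s claim against the torture condition. The stipulations so far amount to:

$$C_{\text{being}}(10^{12}) \approx C_{\text{torture}}, \qquad C_{\text{being}}(10^{14}) \approx 100 \cdot C_{\text{torture}},$$

and since, with a relevance factor of 100, any claim weaker than $\tfrac{1}{100}$ of the strongest competing claim is discarded entirely, at $10^{14}$ years each torture claim simply drops out, no matter how many torture-ees there are.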

More speculatively, it might get even worse than this. In the paper I wrote for class, I spend some time thinking about how partial aggregation might relate to classic deontic side-constraints, like those related to act/omission and intention. I concluded that, at the very least, even a deontologist interested in partial aggregation should allow that the conclusions of partial aggregation should sometimes be preferred over the deontic constraints: it is highly intuitive not only that we should choose to help the torture-ee over the dustspeck-ees, but that we should be willing to personally inflict the dustspecks if it will stop the torture. I sketch out a possible way of doing this that I think a partial aggregationist should find appealing, which first applies partial aggregation to the violation of these side constraints, and second allows these constraints to be traded off in a partially aggregative way against benefits. For instance, there is some number of people you could personally maim that would be worse than killing one person, but it may be that there is no number of white lies you could tell that would be worse than killing one person, and likewise no number of white lies you shouldn’t tell if it will save someone’s life.

If we accept this, we might not accept a straightforward exchange rate, so let’s add in another factor of 1,000 to be safe, and extend the being’s life to 100 quadrillion years. If I am right about all of this, about what features partial aggregation is generally thought to have, along with which ones it seems highly intuitive that it ought to have, then it may turn out that grounding it on, or even just combining it with, the “separateness of persons” leads you to the following conclusion:

If it will spare a being with a long-but-finite lifespan from slightly blurry eyes in the morning, we ought to personally torture infinitely many people, unceasingly, for weeks.

I think anyone looking at this conclusion in isolation would agree that utilitarianism has no utility monster a fraction as horrible as this.

This conclusion is very repugnant, but the price of deriving it is that it does not fall out of a single popular theory, and so it leaves many angles for escape. I still think it shows something important: that the combination of some commonsense and popular anti-aggregation principles appears to allow for, or even outright imply, a conclusion that seems worse than standard objections to extreme aggregation. It goes beyond the strange structures and weird implications Parfit and Horton highlighted, and shows that there are bullets in this space that are unpleasant, not merely absurd, to bite. But I want to go through the ways someone could differ slightly from my version of these principles in order to save a good deal without being committed to my conclusions.

III.

The obvious thing that bears mentioning first is the deontic side-constraint principle. This is a principle that I only briefly defended, made up for the purposes of my paper, and then, in this same piece, showed can make a bad situation worse. Although I think there are important dilemmas for partial aggregation that it addresses, and so people hoping to develop the theory further will have to contend with how partial aggregation juggles helping others with not violating side constraints, I will admit this feature is very suspicious on the meta-level. So much so that, for the purposes of the rest of this post, I will concede it and only consider objections to the weaker form of the partial aggregation utility monster (PA monster from here on), in which one can choose between saving the long-lived being or the tortured people. Consider the stronger version a sort of sidenote, saying that the PA monster situation can plausibly get even worse for a non-consequentialist.

Another escape route, this one from the weaker version of the PA monster, is to somehow reorient how you treat these cases in terms of person-moments rather than persons. This seems to me to involve rejecting the “separateness of persons”, which would be highly revisionary for some, but there are some ways to do this structurally while preserving a version of the separateness of persons.

One possibility is to concede that the separateness of persons only works against the extremes of interpersonal aggregation, but that there is nevertheless something else wrong with the extremes of intrapersonal aggregation, which allows you to treat the claim of the long-lived being as weaker than a strong, briefer interest. It is true that the separateness of persons does not seem to commit you to allowing aggregation within a life, but neither does it commit you to denying it. Still, if you think what is wrong with pure aggregation in ethics is the separateness of persons, it seems suspicious to me that you need an entirely different principle to fix the repugnant implications of intrapersonal aggregation. In particular, if you take the framing of the separateness of persons to be a diagnosis of some mistake that utilitarians make, as Rawls at least frames it, this seems to me to directly imply that treating a group of people like one person’s life would lend support to aggregating across them, and so, by implication, that aggregating within a life makes much more sense than aggregating across lives.

The other way of trying to hang onto the separateness of persons is to interpret it a bit differently. Say that the point of the separateness of persons is just that there is a difference between how we can make prudential decisions (aggregation) and how we can make ethical decisions. It is true that prudential decisions are intrapersonal and ethical decisions are interpersonal, but that doesn’t mean that you can make intrapersonal judgements using the same assumptions as prudence when you are in an ethical situation; the context itself changes things. There is even some apparent precedent for this type of distinction. As I mentioned, it is commonsense that while it may be prudent to do something that will make you happy, like making friends or getting married, forcing someone to make friends and get married is not ethical when you are trying to help someone else (that is, when you are making decisions in the ethical context). Likewise, you might say that it is prudent for the long-lived being to undergo torture in order to stop their eye problems, but that does not mean it is ethical to help them with this eye problem rather than help the torture-ee.

Although it seems like there is at least some precedent for this type of interpretation, I don’t think it is actually a very promising route either. It is true that we consider forcing someone to be prudent to be unethical, but otherwise, when we are doing something a person wants, we generally take how good that something is for them to correspond roughly to how much ethical weight helping them has. If someone is undergoing five units of suffering they want to stop, it is prudent for them to escape the suffering, and it is ethically valuable to help them escape that suffering, and it seems as though this is for most of the same reasons. Some compelling, principled exception, like the issue of consent, seems to be needed to reformulate the separateness of persons such that it opposes both the interpersonal and the intrapersonal aggregation in the ethical decision.

A final plausible escape route would be to limit partial aggregation. That is, you could say that there is both a feature of “relevance” and a feature of “seriousness” in the partial aggregation equation. If all the benefits you can provide are sufficiently “serious”, then they all automatically become “relevant”, and you can return to pure aggregation. This would say, for example, that dustspecks are not a serious harm, and therefore a different harm that is sufficiently worse (like torture) can render the dustspeck claims irrelevant; but if a harm is very serious (like the torture), then no matter how much stronger the claim competing with it is, it still remains relevant. Structurally, this looks something like: there is no number of dustspecks that could outweigh torture, but there is some number of tortures that could outweigh supersupersupertorture.
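
In the toy formalization from earlier (again my own, with an assumed seriousness threshold), the modification amounts to adding a single clause:

```python
# The "seriousness" modification: claims above an absolute seriousness
# threshold stay relevant no matter how much stronger the competing claim is;
# only non-serious claims can be rendered irrelevant by the relevance factor.

RELEVANCE_FACTOR = 100        # assumed, as before
SERIOUSNESS_THRESHOLD = 500   # assumed: torture (1,000) counts as "serious"

def choose_group(claims_a, claims_b):
    strongest = max(claims_a + claims_b)
    def relevant_sum(cs):
        return sum(c for c in cs
                   if c >= SERIOUSNESS_THRESHOLD or c * RELEVANCE_FACTOR >= strongest)
    return 'A' if relevant_sum(claims_a) >= relevant_sum(claims_b) else 'B'

# Dustspecks (0.001) still never outweigh torture (1,000) ...
print(choose_group([1000], [0.001] * 10**6))  # 'A'
# ... but enough tortures now outweigh supersupersupertorture (1,000,000).
print(choose_group([10**6], [1000] * 10**4))  # 'B': 10,000,000 > 1,000,000
```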

This could solve the PA monster, because even if you concede that the long-lived being has a much, much stronger claim than each person undergoing torture, torture may be serious enough that it isn’t rendered irrelevant by partial aggregation. This modification has some intuitively appealing features, but there are a couple of reasons I don’t like it.

For one thing, although it doesn’t entail the PA monster’s implication that infinitely many people should be tortured, it still seems to concede a great deal. The previous version of the thought experiment, in which you are choosing between the torture-ee and the trillion-year lifespan of the being, still gives you the implication that you should save the long-lived being. Indeed, in the versions where I extended the life to reach the infinitely-many-tortures point, it still scales up the number of torture-ees: the 100-trillion-year lifespan corresponds to 100 people undergoing torture, and the 100-quadrillion-year lifespan corresponds to 100 thousand people undergoing torture. It’s true that in all of these situations the modified version of partial aggregation gives you the same answer as pure aggregation, while fixing some specific unpleasant cases like torture versus dustspecks, but this feels inconsistent.
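
The scaling here is just the ratio of lifespans to the original trillion-year case, in which the being’s claim matched a single torture (with my earlier stipulated numbers):

$$\frac{10^{14}\ \text{years}}{10^{12}\ \text{years}} = 100 \text{ tortures}, \qquad \frac{10^{17}\ \text{years}}{10^{12}\ \text{years}} = 10^{5} = 100{,}000 \text{ tortures}.$$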

If the PA monster relied on an asymmetry in aggregation being pushed over a cliff to crazyville, this version of PA still allows for the root asymmetry, and some of its strangeness. Imagine the strongest claim someone can have that does not cross this “seriousness” threshold where pure aggregation kicks in no matter what, say a broken leg or something. This modified version of partial aggregation will still have the fairly intuitive implication that there is some harm serious enough, say supersupersupertorture, that no number of broken legs matters more than it, while retaining the implication that you ought to save the trillion-year-old being from the fuzzy eyes rather than save infinitely many people from broken legs. It all feels like sandpapering the issue down rather than getting at its root.

The other problem with this approach is that, I think, it only seems like an appealing fix because we are incapable of imagining and forming intuitions about something so unspeakably terrible that it stands in relation to torture as torture stands in relation to dustspecks. If we were able to imagine this, not just something that adds up to it in aggregate, but something that is uncontroversially, in the moment, this bad, I think those attracted to partial aggregation might want their irrelevance criterion back.

In the end, I believe the thing to do with this thought experiment is to ditch the separateness of persons as the reason against aggregation, and then to restructure partial aggregation so that it concerns person-moments rather than persons. This is not the only thing I think partial aggregation should do (it is usually framed non-axiologically, mostly Norcross’ doing I think, but I think this too proves too little of the relevant intuition), but it is maybe the most revisionary.

I also tend to think that the more general critiques of partial aggregation are right, and that, per the title of Horton’s piece, we probably are just forced to “always aggregate”. As I have discussed before, this sometimes seems highly repugnant to me, but I see little way around it. It is also beyond the scope of this particular piece. My own takeaway, however, for the record, is something like this: the right moral principle is probably (with plurality credence) the one that chooses to prevent the dustspecks over the torture, but this principle nonetheless cannot convince me in this particular case. I would not, in fact, act on or even endorse the conclusion myself. But more generally, I think I do endorse pure aggregation. Make of that set of views what you will.
