Person-affecting theories. I find them unlikely, and I also don’t think they address the actual “repugnant conclusion” question. You can just change “you can create population X” to statements like, “Imagine that population X exists, and you are asked about killing them.”
Killing people would be bad for them under some person-affecting accounts, and probably the kind that most people with person-affecting views hold: their lifetime aggregate welfares will be lower than otherwise (if they would have otherwise had good futures), or you’ll frustrate preferences, and we can think either is bad even if they won’t experience the deprivation or frustration.
I’m curious about what you mean by not addressing the actual question. Person-affecting views avoid making each of the tradeoffs in your sequence. But there’s still a fundamental problem of aggregation, which doesn’t go away by taking a person-affecting view, e.g. you can fix a huge population, and consider
tiny benefits or harms, one each to a huge number of people vs
large benefits or harms each to a much smaller set of people.
E.g. Scanlon’s transmitter room,[1] Yudkowsky’s torture vs dust specks, and Spears and Budolfson (2021). To avoid this, I think you’d need to take a view that gives up full aggregation, and so gives up either transitivity or the independence of irrelevant alternatives. (Person-affecting views also typically give up transitivity, the independence of irrelevant alternatives, or completeness/full comparability.)
Transmitter Room. The World Cup final is currently being played. Jones, a technician in the room containing the equipment that is transmitting the game’s worldwide television broadcast, has inadvertently come into contact with some exposed wires that are causing him very painful electric shocks. He is unable to extricate himself from his situation, but you can help him by turning off the machine with the exposed wires. Unfortunately, if you do this, then the World Cup broadcast will be shut down, and it won’t be able to be restarted for 10 minutes.
These views seem quite strange to me. I’d be curious to understand who these people are that believe this. Are these views common among groups of Effective Altruists, or philosophers, or perhaps other groups?
I’d guess person-affecting intuitions are common (at least a substantial minority hold them), including among EAs, but I’d also guess most people with them don’t have very specific views worked out, and working out person-affecting theories that are intuitive even to those with person-affecting intuitions seems hard, e.g. my post here and this one (although see also responses in the comments). It’s probably easier for some than others, depending on their other intuitions.
Person-affecting intuitions and views are probably more common among people with more contractualist (e.g. Finneron-Burns, 2017) or Kantian leanings.
A couple posts defending person-affecting intuitions, mostly the procreation asymmetry,[1] have been well-received on the EA Forum:
Critique of MacAskill’s “Is It Good to Make Happy People?” by Magnus Vinding (high karma, many votes)
Population Ethics Without Axiology: A Framework by Lukas_Gloor (one of the top prize winners for the EA Criticism and Red Teaming Contest)
Also some discussion in Confused about “making people happy” vs. “making happy people” and the comments.
I would say I have asymmetric person-affecting views. This post kind of describes where I’m at, ignoring the asymmetry. (And I’m working on another post.)
Although the procreation asymmetry is compatible with negative utilitarianism, which isn’t really person-affecting at all, and doesn’t violate transitivity, IIA or completeness.
> I’m curious about what you mean by not addressing the actual question.
I just meant that my impression was that person-affecting views seem fairly orthogonal to the Repugnant Conclusion specifically. I imagine that many people with person-affecting views would agree with this. That is, I assume it’s very possible to hold any combination of [strongly caring about the repugnant conclusion] or [not caring about it] with [having person-affecting views] or [not having them].
The (very briefly explained) example I mentioned is meant as something like,
Say there’s a trolley problem. You could either accept scenario (A), where 100 people with happy lives are saved, or scenario (B), where 10,000 people with sort-of-decent lives are saved.
My guess was that this would still be an issue in many person-affecting views (I might well be wrong here though, feel free to correct me!). To me, this question is functionally equivalent to the Repugnant Conclusion.
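To spell out that equivalence under simple total aggregation, here’s a minimal worked comparison; the per-person welfare numbers are illustrative assumptions of mine, not anything from the thread:

```latex
% Illustrative (assumed) per-person welfare: 90 for a "happy" life,
% 5 for a "sort of decent" life.
\[
W_A = 100 \times 90 = 9000, \qquad W_B = 10000 \times 5 = 50000.
\]
% Since W_B > W_A, any view that ranks outcomes by total welfare
% must pick (B): the same tradeoff as the Repugnant Conclusion,
% in a life-saving rather than life-creating framing.
```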
Your examples with aggregation also seem very similar.
The repugnant trolley! Love it.
Just a guess, but I think many people who reject the repugnant conclusion in its original form would be happy to save far more people with less good but still positive lives, over fewer people with better lives. Recall the recent piece on bioethicists where lots of them don’t even think you have more reason to save the life of a 20-year-old than a 70-year-old. Or consider how offensive it is to say “let’s save the lives of people in rich countries, all things being equal, because their lives will likely contain less suffering”. In general, people seem to reject the idea that the size of the benefit conferred on someone by saving their life affects how strong the reason to save their life is, so long as their remaining life will be net positive, something like a “normal” human life, and not ludicrously short. (Note: I’m not defending this position; I think you should obviously save a 20-year-old over a 70-year-old because the benefit to them is so much larger.)
On the other hand, most of these people would probably save a few humans over many more animals, which is kind of like rejecting the repugnant conclusion in a life-saving rather than life-creating context.
I’m pretty confident you’re wrong about this. (Edit: I mean, you’re right if you call it “repugnant conclusion” whenever we talk about choosing between a small very happy population and a sufficiently larger less happy one; however, my point is that it’s no coincidence that people most often object to favoring the larger population over the smaller one in contexts of population ethics, i.e., when the populations are not already both in existence.)
I’ve talked to a lot of suffering-focused EAs. Of the people who feel strongly about rejecting the repugnant conclusion in population ethics, at best only half feel that aggregation is altogether questionable. More importantly, even for those who feel that aggregation is altogether questionable, I’m pretty sure that’s a separate intuition for them (and it’s only triggered when we compare something as mild as dust specks to extremes like torture). Meaning, they might feel weird about “torture vs dust specks,” but they’ll be perfectly okay with “there comes a point where letting a trolley run over a small paradise is better than letting it run over a sufficiently larger population of less happy (but still overall happy) people on the other track.” By contrast, the impetus of their reaction to the original repugnant conclusion comes from the following: when they hear a description of “small-ish population with very high happiness,” their intuition goes “hmm, that sounds pretty optimal,” so they’re not interested in adding costs just to add more happiness moments (or net happy lives) to the total.
To pass the Ideological Turing test for most people who don’t want to accept the repugnant conclusion, you IMO have to engage with the intuition that it isn’t morally important to create new happy people. (This is also what person-affecting views try to build on.)
I haven’t done explicit surveys of this, but I’m still really confident that I’m right about this being what non-totalists in population ethics base their views on, and I find it strange that pretty much* every time totalists discuss the repugnant conclusion, they don’t seem to see this.
(For instance, I’ve pointed this out here on the EA forum at least once to Gregory Lewis and Richard Yetter-Chappell (so you’re in good company, but what is going on?))
*For an exception, this post by Joe Carlsmith doesn’t mention the repugnant conclusion directly, but it engages with what I consider to be more crux-y arguments and viewpoints in relation to it.
Thanks for that explanation.
>I’ve talked to a lot of suffering-focused EAs. Of the people who feel strongly about rejecting the repugnant conclusion in population ethics, at best only half feel that aggregation is altogether questionable.
I think this is basically agreeing with my point on “person-affecting views seem fairly orthogonal to the Repugnant Conclusion specifically”, in that it’s possible to have any combination.
That said, you do make it sound like suffering-focused people have a lot of thoughtful and specific views on this topic.
My naive guess would have been that many suffering-focused total utilitarians would simply have a far higher bar for what the utility baseline is than, say, classical total utilitarians. So in some cases, perhaps they would consider most groups of “a few people living ‘positive’ lives” to still be net-suffering, and would therefore just straightforwardly prefer many options with fewer people. But I’d also assume that in this theory, the repugnant conclusion would basically not be an issue anyway.
I realize that this wasn’t clear in my post, but when I wrote it, it wasn’t with suffering-focused people in mind. My impression is that the vast majority of people worried about the Repugnant Conclusion are not suffering focused, and would have different thoughts on this topic and counterarguments. I think I’m fine not arguing against the suffering-focused people on this topic, like the ones you’ve mentioned, because it seems like they’re presenting different arguments than the main ones I disagree with.