To answer on the level of imagery and associations rather than trying to make a strong philosophical argument: the Repugnant Conclusion makes me think of the dire misery of extremely poor places, like Haiti or the Congo. People in extreme poverty are often malnourished, forced to put up with chronic health problems, and living in terrible conditions. On top of all those miseries, they have to get through it all with very limited education or access to information, and very limited freedom or agency in life. (But I agree with jackmalde that their lives are nevertheless worth living vs. nonexistence; I would still prefer to live if I were in their situation.)
Compared to an Earth with 10 billion people living at developed-world standards, it just seems crazy to me that anyone would prefer a world with, say, 1 trillion people eking out their lives in a trash-strewn Malthusian wasteland. The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change.
This image leads to various wacky political objections, which are not philosophically relevant, since nobody said the Repugnant Conclusion was supposed to apply to the actual situation of Earth in 2021 (as opposed to, say, a hypothetical comparison between 10 billion rich people vs. 3^^^3 lives barely worth living). But emotionally and ideologically, the Repugnant Conclusion brings to mind appropriately aversive images like:
That EA should pivot away from interventions like GiveDirectly or curing diseases, and instead become all about boosting birthrates in whatever way possible. (New cause area: “Family Disempowerment Media”?)
That things like the invention of the birth control pill, and the broader transition away from strict pro-fertility hierarchical gender norms (starting with the Industrial Revolution), were some of the worst events in history.
That almost all human values (art, love, etc.) should be sacrificed in favor of supporting a higher total carrying capacity of optimized pure replicators, à la the essay “Meditations on Moloch”.
So, in the practical world, the idea that humanity should aim to max out the Earth’s carrying capacity without regard to quality of life seems insane, and the Repugnant Conclusion will therefore always seem like a bizarre idea totally opposed to ordinary moral reasoning, even if it’s technically correct once you use sufficiently big numbers.
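(For a sense of scale on “sufficiently big numbers”: the 3^^^3 above is Knuth’s up-arrow notation, which grows absurdly fast:

$$3\uparrow 3 = 3^3 = 27$$
$$3\uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987$$
$$3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\,(3\uparrow\uparrow 3) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}$$

That is, a power tower of 3s roughly 7.6 trillion levels high.)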
Separately from all the above, I also feel that there would be an extreme “samey-ness” to all of these barely-worth-living lives. It seems far-fetched to me that you are still adding moral value linearly when you create the quadrillionth person to complete your low-quality-of-life population; how could their repetitive, overlapping experiences match up to the richness and diversity of qualia experienced by a smaller crew of less-deprived humans?
Thanks, this is one of my favourite responses here. I appreciated your sharing your mental imagery and listing out some consequences of that imagery. I think I am more inclined than you to say that many people alive today have lives not worth living, but you address confusion about that point in another comment. And while I’m more pro-hedonium than you I also wonder about “tiling” issues.
Do your intuitions about this stay consistent if you reverse the ordering? That is, as I think another comment on this post pointed out, if you start with a large population of just-barely-happy people and then replace them with a much smaller population of very happy people, does that seem like a good trade to you?
Yes, my intuition stays the same if the ordering is reversed; population A seems better than population Z, and that’s that. (For instance, suppose the population of an isolated valley had grown so much, and people had subdivided their farmland so many times, that each plot of land was barely enough for subsistence and the people regularly suffered conflict and famine. In most situations I would think it good if those people voluntarily made a cultural change towards having fewer children, such that over a few generations the population would shrink to, say, 1/3 of the original level, and everyone would have enough buffer to live in peace, with plenty to eat, and lead much happier lives. Of course, I would have trouble “wishing people into nonexistence”, depending on how much the metaphysical operation seemed to resemble snuffing out an existing life; I would always be inclined to let people live out their existing lives.)
Furthermore, I could even be tempted into accepting a trade of Population A (whose lives are already quite good, much better than barely worth living) for a utility-monster-style, even smaller population of extremely good lives. But at this point I should clarify that although I might be a utilitarian, I am not a “hedonic” utilitarian, and I find it weird that people always talk about the positive emotional valence of experience rather than a more complex basket of values. I already mentioned how I value diversity of experience. I also highly value something like intelligence or “developedness of consciousness”:
It seems silly to me that the ultimate goal becomes Superhappy states of incredible joy and ecstasy. Perhaps this is a failure of my imagination, since I am incapable of really picturing just how good Superhappy states would be. Or perhaps I have cultural blinders that try to ward me away from wireheading (via drug addiction, etc.) by indoctrinating me to believe statements like “life isn’t all about happiness; being connected to reality and other people is important, and having a deep understanding of the universe is better than just feeling joyful”.
Imagine the following choice: “Take the blue pill and you’ll experience unimaginable joy for the rest of your life (not just one-note, heroin-esque joy, but complex joy that cycles through the many different shades of positive feeling the human mind can experience). Take the red pill, and you’ll experience a massive increase in the clarity of your consciousness, together with a gigantic boost in IQ to superhuman levels, allowing you to have many complex experiences that are currently out of reach for you, just as rats are incapable of using language, understanding death, etc. But despite all those revelations, your net happiness level will remain roughly similar to that of your current life.” Obviously the joy has its appeal (both are great options!), but I would take the red pill.
Although I care about the suffering of animals like chimps and factory-farmed chickens, and would incorporate it into my utilitarian calculus, I also think there is a sense in which no number of literal rats on heroin could fully substitute for a human. If you offered to let me trade one human life for creating a planet with one quadrillion rats on heroin, I’d probably take that deal over and over for the first few thousand button-presses. But I wouldn’t just keep going until Earth ran out of people, because I’d never trade away the last complex, intelligent human life just to get one more planet of blissed-out lower life forms.
By contrast, I’d have far fewer qualms going the other way, and trading Earth’s billions of humans for a utopian super-civilization with mere millions of super-enhanced, godlike transhuman intelligences.
Even with my basket of Valence + Diversity-of-experience + Level-of-consciousness, I still expect that utilitarianism of any kind is more like a helpful guide for doing cost-benefit calculations than a final moral theory whose every assumption (for instance, that moral value scales linearly forever as you add more lives to the pile) can be expected to hold robustly in extreme situations. I think this belief is compatible with being very, very utilitarian compared to most ordinary people, just as I believe that GDP growth is an imperfect proxy for what we want from our civilization and yet am still far more pro-economic-growth than ordinary people.
“The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change.”
If you’re a total utilitarian, you don’t care about these things except insofar as they serve as tools for utility. By the structure of the Repugnant Conclusion, there is no amount of appreciating life that will make the total utility in the smaller world greater than the total utility in the bigger world.
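(To spell out that structure in the standard total-utilitarian formalization, writing $n$ and $N$ for the two population sizes and $\bar{u}$ and $\varepsilon$ for their average welfare levels:

$$U(A) = n\,\bar{u}, \qquad U(Z) = N\,\varepsilon,$$

so for any fixed $n$ and $\bar{u}$, and any positive $\varepsilon$ however tiny,

$$N > \frac{n\,\bar{u}}{\varepsilon} \;\Longrightarrow\; U(Z) > U(A).$$

The bigger world always wins simply by being big enough.)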
Certainly. Some of those values I mentioned might be counted as direct forms of utility, and some might be counted as necessary means to the end of greater total utility later. And the Repugnant Conclusion can always win by turning up the numbers a bit and making Population Z’s lives pretty decent compared to the smaller Population A.
Partly, I am just trying to describe the imagery that occurs to me when I look at the “Population A vs. Population Z” diagram.
I guess I am also using the Repugnant Conclusion to point out a complaint I have against varieties of utilitarianism that endorse things like “tiling the universe with rats on heroin”. To me, once you start talking about very large populations, diversity of experience is just as crucial as positive valence. That’s because, without lots of diversity, I start doubting that you can add up all the positive valence without double-counting. For example, if you showed me a planet filled with one million supercomputers all running the exact same emulation of a particular human mind thinking a happy thought, I would be inclined to say, “that’s more like one happy person than like a million happy people”.
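(One rough way to formalize that intuition, offered as a sketch rather than a claim about the right aggregation rule: sum welfare over distinct experiences rather than over copies. Writing $w(e)$ for the welfare of an experience $e$,

$$U = \sum_{e \,\in\, E_{\text{distinct}}} w(e) \quad \text{instead of} \quad U = \sum_{i=1}^{N} w(e_i),$$

so a million identical emulations of one happy mind contribute $w(e)$ exactly once, while a million genuinely different happy minds still count a million times.)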
I have the same feeling. I have an aversion to utility tiling as you describe it, but I can’t exactly pinpoint why, other than that I guess I am not a utilitarian. As consequentialists, perhaps we should focus more on the ends themselves, i.e. aesthetically how much we like the look of potential future universes, rather than looking at the expected utility of said universes. E.g., Star Wars is prettier to me than an expansive von Neumann probe network, so I should prefer that. Of course, this is just rejecting utilitarianism again.
You have misunderstood my comment. Perhaps I have not been clear enough. Feel free to have another read and I would be happy to answer any questions.
Yes, I guess it would’ve been more accurate to say “I’m one of those confused people jackmalde was referring to, who intellectually thinks that very deprived lives are still worth living but nevertheless feels uncomfortable and conflicted about the obvious logical implications of that.”
Potential sources of this conflictedness:
Maybe my mental picture of a deprived but barely-worth-it life is cartoonishly exaggerated in its badness. The poor people I have met IRL in rural India did not have the best lives, but most of them were basically happy, so much so that imagining creating trillions of others like them seems like a moral boon rather than something repugnant.
Maybe I am still having difficulty extricating myself from practical/political concerns. In the real world, if a new continent magically appeared full of many new barely-worth-living people, we would feel morally obligated to share with them and help improve their lives. This is a good instinct which is at the core of EA itself, but the inevitability of this empathetic response does mean that the appearance of new large barely-worth-it populations seems like a threat to the ongoing wellbeing of Population A. But of course in the thought experiment the populations are totally separate.
I am definitely (and understandably) uncertain about how to figure out what kind of life is barely worth living. I am strongly anti-death, to a greater extent than you are in your comment, but even I would not endorse things like “tortured forever” as being necessarily better than nothing, so I do want to set a threshold somewhere. (But again, maybe this is just political concerns and my own personal spoiledness? If I were God deciding whether to create the universe, and it was either going to be torture-hell or no universe whatsoever, maybe I’d create hell rather than have nothing at all. But if I got to create a normal happy universe first, I’d definitely stick with the happy universe plus nothing else, rather than happy + hell.) On the other hand, the “creation test” seems suspicious to me: wouldn’t everyone just benchmark off their own quality of life? I’d be happy to create educated rich-world citizens, but immortal cyberhumans from the 23rd century would probably say that life isn’t worth creating if you’re not immortal and at the very least 50% cyber.