One approach I was expecting someone to try here, but haven’t seen, is trying to motivate the intuition at a smaller scale – e.g. comparing a small number of very happy people to a large-but-easily-imaginable number of slightly happy people.
If the intuitions underlying aversion to the Repugnant Conclusion only kick in for extremely large populations, then I’m more confidently inclined to say they are a mistake arising from an inability to imagine at that scale. But given that the original argument for the RC is based on infinite regress, it seems like the issues that make people averse to it should start to kick in much sooner. Yet most commenters here have focused entirely on the vast-population case.
I thought my first answer already did what you’re asking for, and it has (right now) the most upvotes, which may reflect endorsement. Are you looking for something more concrete, or something that isn’t tied to people who would exist anyway being made worse off? I added another answer.
The ways to avoid the RC, AFAIK, should each fall under at least one of the following, so the motivating intuitions/thought experiments should match one of them:
Have some kind of threshold (a critical level, a sufficientarian threshold, or a lexical threshold), with marginally good lives falling below it and the very good lives above it. It could be a “vague” threshold.
Non-additive (aggregating in some other way, e.g. with decreasing marginal returns to additional people, average utilitarianism, maximin or softer versions like rank-discounted utilitarianism which strongly prioritize the worst off, or strongly prioritizing better lives, like geometrism).
Person-affecting.
Carry in other assumptions/values and appeal to them, e.g. more overall bad in the larger population.
See also:
https://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon
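To make the non-additive option above concrete, here is a toy numeric sketch (in Python) of how a few of these aggregation rules can diverge on an RC-style comparison. All welfare numbers and the discount factor are made up for illustration; this is not meant to settle anything, only to show that the rules come apart:

```python
# Two toy worlds: a few very good lives vs. many marginally good lives.
world_a = [90.0] * 10     # 10 people, each at welfare 90
world_b = [1.0] * 1000    # 1000 people, each at welfare 1

def total(welfares):
    # Classical total utilitarianism: just sum welfare.
    return sum(welfares)

def average(welfares):
    # Average utilitarianism: mean welfare.
    return sum(welfares) / len(welfares)

def rank_discounted(welfares, beta=0.99):
    # Rank-discounted utilitarianism (sketch): sort worst-off first and
    # give geometrically decreasing weight to each better-off person,
    # so the worst off are strongly prioritized.
    return sum(beta**i * u for i, u in enumerate(sorted(welfares)))

print(total(world_a), total(world_b))      # 900.0 1000.0 — B wins on totals
print(average(world_a), average(world_b))  # 90.0 1.0 — A wins on averages
print(rank_discounted(world_a), rank_discounted(world_b))  # A wins here too
```

So the total view prefers the large world of marginally good lives, while the average and rank-discounted views prefer the small world of very good lives, which is the basic shape of the disagreement over the RC.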
This is a fair point. For what it’s worth, I do honestly think a world of 10 people with utopian lives (of normal length) is better than a world of 10 billion people with lives like the ones I described in my answer. I guess it depends on the details of “utopian”: it seems plausible that, for me and many others to endorse this claim, such lives need not be so imaginably awesome that a classical utilitarian would agree the 10-billion-person world has lower total utility.