Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of ‘X conclusions’ out there). Trying to weigh these up comparatively is fraught.
In your comparison, it seems there’s a straightforward dominance argument if the ‘OC’ and ‘RC’ are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC, as a ‘large increase in suffering for one individual’ can be outweighed by a large enough number of smaller decreases in suffering for others—aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better, as the classical one has to bite one extra bullet.
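To make this concrete, here is a minimal sketch with purely illustrative numbers (none of them from the original post): since both views simply aggregate changes in suffering, both endorse an OC-style trade.

```python
# Toy OC-style trade: all numbers are made up purely for illustration.
one_big_increase_in_suffering = -1000   # welfare change for the single individual
small_relief_per_person = 0.02          # each of many others suffers slightly less
num_beneficiaries = 100_000

total_change = one_big_increase_in_suffering + small_relief_per_person * num_beneficiaries
print(total_change)  # 1000.0 > 0, so classical utilitarianism endorses the trade

# A purely negative view reaches the same verdict here, since every term is a
# change in suffering: neither view avoids this kind of 'OC' trade-off.
```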
There’s also the worry that in a pairwise comparison one might inadvertently pick a counterexample for one ‘side’ that turns the screws less than the counterexample for the other one. Most people find the ‘very repugnant conclusion’ (where not only Z > A, but also ‘a large enough Z plus some arbitrary number of people with awful lives’ > A) even more costly than the ‘standard’ RC. So using the more or less costly variant on one side of the scales may alter intuitive responses.
By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share counterintuitive features both theories endorse, but differ in that one tries to explore the worst case that can be constructed which the principle would avoid, whilst the other explores the worst case that can be constructed with its inclusion.
It seems the main engine of RC-like examples is the aggregation—it feels like one is being nickel-and-dimed when a lot of very small things are taken to outweigh one very large thing, even though the aggregate is much higher. The typical worry a negative view avoids is trading major suffering for sufficient amounts of minor happiness—most people think this is priced too cheaply, particularly at extremes. The typical worry of the (absolute) negative view itself is that it fails to price happiness at all—yet often we’re inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal at least at some extreme of ‘upside’.
So with this procedure the putative counterexample to the classical view would be the vRC. Although negative views may not give crisp recommendations against the RC (e.g. if we stipulate no one ever suffers in any of the worlds, but people are more or less happy), they clearly recommend against the vRC: the great suffering added there isn’t outweighed by the large amounts of relatively trivial happiness (but it would be on the classical view).
Yet with this procedure, we can construct a much worse counterexample to the negative view than the OC—by my lights, far more intuitively toxic than the already costly vRC. (Owed to Carl Shulman.) Suppose A is a vast but trivially-imperfect utopia—trillions (or googolplexes, or TREE(TREE(3))) of people live lives of all-but-perfect bliss, but each endures an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas. The negative view ranks Z > A: it only considers the pinpricks in this utopia, and sufficiently huge magnitudes of these can be worse than awful lives (the classical view, which wouldn’t discount all the upside in A, would not). In general, this negative view can countenance any amount of awful suffering if this is the price to pay to abolish a near-utopia of sufficient size.
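To make the magnitudes concrete, here is a minimal sketch with purely illustrative numbers, using a merely astronomical population to stand in for TREE(TREE(3)):

```python
# Toy version of A vs Z: populations and welfare levels are purely illustrative.
A_population = 10**20
A_bliss = 100.0        # near-perfect positive welfare per person in A
A_pinprick = -0.001    # the trivial episode of suffering each person in A endures

Z_population = 10**9
Z_welfare = -1000.0    # awful, Omelas-like lives in Z

def classical_value(pop, *welfare_components):
    """Sum all welfare, positive and negative alike."""
    return pop * sum(welfare_components)

def absolute_negative_value(pop, *welfare_components):
    """Count only negative welfare; positive welfare is ignored entirely."""
    return pop * sum(w for w in welfare_components if w < 0)

# Classical view: A is astronomically good, Z is terrible, so A > Z.
print(classical_value(A_population, A_bliss, A_pinprick))  # ~1e22
print(classical_value(Z_population, Z_welfare))            # -1e12

# Absolute negative view: only the pinpricks in A register, and there are so
# many of them that A comes out worse than Z, i.e. the view ranks Z > A.
print(absolute_negative_value(A_population, A_bliss, A_pinprick))  # -1e17
print(absolute_negative_value(Z_population, Z_welfare))            # -1e12
```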
(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) and—depending on how you litigate it—susceptible to a sadistic conclusion. If the axiology claims welfare is capped above by 0, then there’s never an option of adding positive-welfare lives, so nothing can be sadistic. If instead it discounts positive welfare, then it prefers (given half of A) adding half of Z (very negative welfare lives) to adding the other half of A (very positive lives).)
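Here is a toy version of that last comparison, assuming the ‘discounts positive welfare’ reading and reusing the illustrative numbers from the sketch above:

```python
# Toy version of the 'sadistic' comparison, on the reading where positive
# welfare is discounted to nothing (same illustrative numbers as above).
pinprick = -0.001    # per-person suffering in A (the bliss is discounted away)
awful = -1000.0      # per-person welfare in Z
half_A = 5 * 10**19  # half of A's population
half_Z = 5 * 10**8   # half of Z's population

# Starting from half of A, compare the two possible additions:
value_if_add_half_Z = half_A * pinprick + half_Z * awful           # ~ -5.00005e16
value_if_add_other_half_A = half_A * pinprick + half_A * pinprick  # = -1e17

# The view prefers adding the very negative lives to adding the very positive ones:
print(value_if_add_half_Z > value_if_add_other_half_A)  # True
```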
I take this to make absolute negative utilitarianism (similar to average utilitarianism) a non-starter. In the same way folks look for a better articulation of the egalitarian-esque commitments that make one (at least initially) sympathetic to average utilitarianism, so folks with negative-esque sympathies may look for better articulations of this commitment. One candidate could be that what one is really interested in is cases of severe rather than trivial suffering, so this, rather than suffering in general, should be the object of sole/lexically prior concern. (Obviously there are many other lines, and corresponding objections to each).
But note this is an anti-aggregation move. Analogous ones are available for classical utilitarians to avoid the RC/vRC (e.g. a critical-level view which discounts positive welfare below some threshold). So if one is trying to evaluate a particular principle out of a set, it would be wise to aim for ‘like-for-like’: e.g. perhaps a ‘negative plus a lexical threshold’ view is more palatable than classical util, yet CLU would fare even better than either.
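As a rough illustration of how such a critical-level move blocks the RC, here is a minimal sketch using one standard formalisation (subtracting a critical level c from each life’s welfare, which is close to, though not identical with, ‘discounting positive welfare below a threshold’); the numbers are purely illustrative:

```python
# Toy critical-level comparison: the numbers and the particular formalisation
# (subtract a critical level c from each life's welfare) are purely illustrative.
def critical_level_value(groups, c=1.0):
    """Sum (welfare - c) over everyone; lives below the critical level count
    against the total, so merely adding barely-positive lives is no improvement."""
    return sum(count * (welfare - c) for count, welfare in groups)

world_A = [(10**6, 100.0)]   # a smaller population of very good lives
world_Z = [(10**12, 0.01)]   # a vast population of barely-positive lives

# With c = 0 this is just the classical total, which gives Z > A (the RC):
print(critical_level_value(world_A, c=0), critical_level_value(world_Z, c=0))  # 1e8, 1e10
# With a positive critical level, A > Z and the RC-style ranking is blocked:
print(critical_level_value(world_A), critical_level_value(world_Z))  # 9.9e7, -9.9e11
```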
Thanks for such a detailed and insightful response, Gregory.
Your archetypal classical utilitarian is also committed to the OC, as a ‘large increase in suffering for one individual’ can be outweighed by a large enough number of smaller decreases in suffering for others—aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better, as the classical one has to bite one extra bullet.
Thanks for pointing this out. I think I only realised this extra bullet-biting after making the post.
There’s also the worry that in a pairwise comparison one might inadvertently pick a counterexample for one ‘side’ that turns the screws less than the counterexample for the other one.
This makes a lot of sense; it’s not something I’d considered at all, and it seems pretty important when playing counterexample-intuition-tennis.
By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share counterintuitive features both theories endorse, but differ in that one tries to explore the worst case that can be constructed which the principle would avoid, whilst the other explores the worst case that can be constructed with its inclusion.
Again, this feels really useful and something I want to think about further.
The typical worry of the (absolute) negative view itself is that it fails to price happiness at all—yet often we’re inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal at least at some extreme of ‘upside’.
I think my slight negative intuition comes from the fact that although I may be willing to endure some suffering for some upside, I wouldn’t endorse inflicting suffering (or risk of suffering) on person A for some upside for person B. I don’t know how much work the differences in fairness and personal identity (i.e. the being that suffered gets the upside) between the examples are doing, and in what direction my intuition is ‘less’ biased.
Yet with this procedure, we can construct a much worse counterexample to the negative view than the OC—by my lights, far more intuitively toxic than the already costly vRC. (Owed to Carl Shulman.) Suppose A is a vast but trivially-imperfect utopia—trillions (or googolplexes, or TREE(TREE(3))) of people live lives of all-but-perfect bliss, but each endures an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas.
I like this example a lot, and definitely lean A > Z.
Reframing the situation, my intuition becomes less clear: consider A’, in which TREE(TREE(3)) lives are in perfect bliss, but there are also TREE(TREE(3)) beings that momentarily experience a single pinprick before ceasing to exist. This is clearly equivalent to A in the axiology, but my intuition is less clear (if at all) that A’ > Z. As above, I’m unsure how much work personal identity is doing. I find population ethics easier to think about by considering ‘experienced moments’ rather than individuals.
(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) …
Thanks for pointing out the error. I think I’m right in saying that the ‘welfare capped by 0’ axiology is non-anti-egalitarian, which I conflated with absolute NU in my post (which is anti-egalitarian as you say). The axiologies are much more distinct than I originally thought.