If I understand right, the view you're proposing is sort of like the "average view" of utilitarianism. The objective is to minimize the average level of suffering across a population.
A common challenge to this view (shared with average util) is that it implies you can make a world better by adding lives that suffer, so long as they suffer less than the average. In some hypothetical hellscape where everyone is being tortured, adding further lives in which people are tortured slightly less severely should intuitively make the world worse, not better.
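To make the arithmetic behind that objection concrete, here is a minimal sketch (the suffering scores are made-up, purely illustrative assumptions; higher means more suffering):

```python
# Toy suffering scores (higher = more suffering); purely illustrative.
hellscape = [100, 100, 100]  # everyone is severely tortured

def average_suffering(world):
    return sum(world) / len(world)

print(average_suffering(hellscape))         # 100.0
print(average_suffering(hellscape + [90]))  # 97.5 -- adding a slightly
# less-tortured life lowers average suffering, so the average view counts
# the world as having improved.
```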
Pace the formidable challenges of infinitarian ethics, I generally lean towards total views. I think the intuition you point to (which I believe is widely shared) that larger degrees of suffering should "matter more" is perhaps better accommodated in something like prioritarianism, whereby improving the well-being of the least well off is given extra moral weight beyond its utilitarian "face value". (FWIW, I generally lean towards pretty flat-footed utilitarianism, as there are some technical challenges with prioritarianism, and it seems hard to distinguish the empirical from the moral matters: there are evolutionary reasons (H/T Carl Shulman) why there should be extremely severe pain, so maybe a proper utilitarian accounting makes relieving these extremes worth very large amounts of more minor suffering.)
Aside: in population ethics there's a well-worn problem of aggregation, as suggested by the repugnant conclusion: lots and lots of tiny numbers, when put together, can outweigh a big number. So total views face challenges such as: "Imagine A, where 7 billion people live lives of perfect bliss, versus B, where these people suffer horrendous torture but there are also TREE(4) people with lives that are only just barely worth living." On a total view B comes out far better than A, yet this seems repulsive. (The usual total-view move is to appeal to scope insensitivity and say our intuitions here are ill-suited to tracking vast numbers. I don't think the perhaps more natural replies (e.g. "discount positive wellbeing that is above zero but below some threshold close to it") come out well in the wash.)
Unfortunately, the "suffering only" view suggested as a potential candidate in the FAQ (i.e. discount "positive experiences", and only work to reduce suffering) seems to compound these problems: in essence one can concatenate these problems of population ethics with the counter-intuitiveness of discounting positive experience (virtually everyone's expressed and implied preferences indicate positive experiences have free-standing value, as they are willing to trade off between negative and positive).
The aggregation challenge akin to the repugnant conclusion (which I think I owe to Carl Shulman) goes like this. Consider A: 7 billion people suffering horrendous torture. Now consider B: TREE(4) people enjoying lifelong eudaimonic bliss, with the exception of each suffering a single pinprick. On a total suffering view A >>> B, since TREE(4) pinpricks add up to vastly more suffering than 7 billion tortures, yet this seems common-sensically crazy.
The view seems to violate two intuitions: first, the aggregation issue (i.e. that TREE(4) pinpricks are more morally important than 7 billion cases of torture); and second, the discounting of positive experience, since the "suffering only counts" view is indifferent to the TREE(4) instances of lifelong eudaimonic bliss that separate the scenarios. If we imagine a world C where no one exists, a total util view gets the intuitively "right" answer (i.e. B > C > A), whilst the suffering view gets most of the pairwise comparisons intuitively wrong (i.e. C > A > B).
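For concreteness, here is a minimal sketch of how those orderings fall out, using made-up welfare magnitudes for torture, pinprick, and bliss (illustrative assumptions; only their signs and rough relative sizes matter) and 10**100 as a stand-in for TREE(4):

```python
# Toy per-person welfare figures; purely illustrative assumptions.
TORTURE = -1_000   # lifelong horrendous torture
PINPRICK = -1      # a single pinprick
BLISS = 1_000      # lifelong eudaimonic bliss

# Each world is a list of (number_of_people, positive_welfare, negative_welfare).
A = [(7_000_000_000, 0, TORTURE)]   # 7 billion tortured people
B = [(10**100, BLISS, PINPRICK)]    # 10**100 stands in for TREE(4)
C = []                              # no one exists

def total_view(world):
    # Sum positive and negative welfare across everyone.
    return sum(n * (pos + neg) for n, pos, neg in world)

def suffering_only(world):
    # Count only negative welfare; positives are discounted entirely.
    return sum(n * neg for n, _pos, neg in world)

worlds = {"A": A, "B": B, "C": C}
for view in (total_view, suffering_only):
    ranking = sorted(worlds, key=lambda w: view(worlds[w]), reverse=True)
    print(view.__name__, " > ".join(ranking))
# total_view:     B > C > A
# suffering_only: C > A > B  (10**100 pinpricks swamp 7 billion tortures,
#                             and the bliss is ignored)
```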