I think welfare-based beneficence, impartiality, and at least limited aggregation do the most important work of thin utilitarianism. I don’t think you need additivity, or the claim that any harm can be outweighed by a large enough sum of tiny benefits, e.g. that we should allow someone to be electrocuted in an accident to avoid interrupting a show a very large crowd is enjoying.
Michael, this is kinda what I’m looking for. What does “limited aggregation” mean / do in your case?
Sorry I didn’t see this until now.
“Limited aggregation” allows you to say that two people suffering is worse than one, and to make some tradeoffs between numbers and severity, without very small changes in welfare aggregating across separate individuals to outweigh large changes. “Limited aggregation” is a term in the literature, and I think it usually requires giving up the independence of irrelevant alternatives.
Almost all social welfare functions that satisfy the independence of irrelevant alternatives allow small changes to outweigh large changes. That includes non-additive but aggregative social welfare functions. See Spears and Budolfson:
https://philpapers.org/rec/BUDWTR
http://www.stafforini.com/docs/Spears & Budolfson – Repugnant conclusions.pdf
It’s obvious that utilitarianism does this. Consider also maximin. Maximin requires you to focus entirely on the worst-off individual (or individuals, if there are ties). This might seem good because it means preventing the worst states, but it also means preferring to prevent even a tiny harm to the worst off (or a worse off) over preventing someone else from being brought down to nearly their level of welfare. E.g., one extra pin prick to someone being tortured anyway outweighs the (only very slightly less bad) torture of someone who wouldn’t otherwise have been tortured. More continuous versions of maximin, like the moderate tradeoff view/rank-discounted utilitarianism, have the same implications in some cases, depending on the numbers involved.
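The pin-prick case can be made concrete with a small sketch. This is illustrative only, not from any of the sources above, and the welfare numbers are made up; the rank-discounted function is the standard textbook form (sort from worst off to best off, discount each successive rank geometrically):

```python
def total(welfares):
    """Total (additive) utilitarianism: sum of welfares."""
    return sum(welfares)

def maximin(welfares):
    """Maximin: only the worst-off individual's welfare counts."""
    return min(welfares)

def rank_discounted(welfares, beta):
    """Rank-discounted utilitarianism: sort from worst off to best off,
    discounting each successive rank by beta (0 < beta < 1)."""
    return sum(beta**i * w for i, w in enumerate(sorted(welfares)))

# Made-up welfare numbers:
# Option A: prevent an extra pin prick to the person already being
# tortured; the second person is tortured (slightly less badly) anyway.
A = (-100, -99)
# Option B: prevent the second person's torture; the first person
# suffers the torture plus the pin prick.
B = (-101, 0)

assert total(A) < total(B)      # utilitarianism prefers B
assert maximin(A) > maximin(B)  # maximin prefers A: the pin prick wins
# Rank-discounting sides with maximin only when beta is small enough:
assert rank_discounted(A, 0.5) < rank_discounted(B, 0.5)    # prefers B
assert rank_discounted(A, 0.01) > rank_discounted(B, 0.01)  # prefers A
```

The last two assertions show what “depending on the numbers involved” means: as the rank discount gets steep enough, the rank-discounted view collapses into maximin’s verdict.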
Limited aggregation allows you to make some intuitive tradeoffs without extreme prioritization like maximin’s, and without allowing tiny harms to aggregate to outweigh large harms.
On the other hand, there are views that reject the independence of irrelevant alternatives but don’t allow any aggregation at all, and require you to minimize the greatest individual loss in welfare (not maximize the worst-off state or the welfare of the worst-off individual, like maximin). This doesn’t allow enough tradeoffs either, in my view. Scanlon the contractualist and Tom Regan the deontological animal rights theorist endorsed such a principle, as “the greater burden principle” and “the harm principle”, respectively. Maybe also the animal advocate Richard Ryder, with his “painism”, unless that is just a form of maximin.
Thanks, Michael. This is what I’ve been looking for. I’ll check out your links.
I tend to agree with Ryder, although I don’t know how thorough his framework is.
Thanks again.
PS: Hey Michael, those links were interesting. Do you have a good link to go into more about “limited aggregation”?
Thanks,
-Matthew Michael
I understand that you lose a lot (and I appreciate your blog posts). But that is not an argument that additivity is correct. As I’ve written for my upcoming book:
Imagine a universe that has only two worlds, World R and World FL. In World R, Ricky the Rooster is the only sentient being, and is suffering in an absolutely miserable life.
This is bad. But where is it bad? In Ricky’s consciousness. And nowhere else.
On World FL, Rooster Foghorn is living in one forest and Rooster Leghorn in a separate forest. They are World FL’s only sentient beings, and don’t know each other. Their lives are as bad as Ricky’s.
Our natural response is to think that World FL is twice as bad as World R. But where could it possibly be twice as bad? Foghorn’s life is bad in his consciousness and nowhere else. Leghorn’s life is bad in his consciousness and nowhere else.
Where is their world twice as bad as Ricky’s?
Nowhere.
Okay, yes, I admit it is twice as bad in your mind and my mind. But we are not part of that universe. Imagine that these worlds are unknown to any other sentient being. Then there is simply nowhere that World FL is worse than World R.
In this universe, there are really only three worlds of experience: one in each of their minds.
Tell me where I am factually wrong. Please, Iâm asking you. My life would be much easier and happier if you would.
Don’t say that the implications of this insight lead to absurd conclusions that offend our intuitions. I already know that! Just tell me where I am factually wrong.
I know (oh, yes, I know) that this seems like it can’t possibly be right. This is because we can’t help but be utilitarian in this regard, just like we can’t help but feel like we are in control of our consciousness and our decisions and our choices.
But I can see no way around this simple fact: morally relevant “badness” exists only in individual consciousnesses.