I think welfare-based beneficence, impartiality, and at least limited aggregation do the most important work of thin utilitarianism. I don’t think you need additivity, or the claim that any harm can be outweighed by a large enough sum of tiny benefits, so that, for example, we should allow someone to be electrocuted in an accident rather than interrupt a show a very large crowd is enjoying.
Michael, this is kinda what I’m looking for. What does “limited aggregation” mean and do in your case?
Sorry I didn’t see this until now.
“Limited aggregation” allows you to say that two people suffering is worse than one and to make some tradeoffs between numbers and severity, without letting very small changes in welfare, aggregated across separate individuals, outweigh large changes. “Limited aggregation” is a term in the literature, and I think it usually requires giving up the independence of irrelevant alternatives.
Almost all social welfare functions that satisfy the independence of irrelevant alternatives allow small changes to outweigh large changes. That includes non-additive but aggregative social welfare functions. See Spears and Budolfson:
https://philpapers.org/rec/BUDWTR
http://www.stafforini.com/docs/Spears & Budolfson—Repugnant conclusions.pdf
It’s obvious that utilitarianism does this. Consider also maximin. Maximin requires you to focus entirely on the worst off individual (or individuals, if there are ties). This might seem good because it means preventing the worst states, but it also means preferring to prevent a tiny harm to the worst off (or a worse off) individual over preventing someone else from being brought down to nearly their level of welfare. E.g., preventing one extra pinprick to someone being tortured anyway outweighs preventing the (only very slightly less bad) torture of someone who wouldn’t otherwise have been tortured. More continuous versions of maximin, like the moderate tradeoff view or rank-discounted utilitarianism, have the same implications in some cases, depending on the numbers involved.
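To make that concrete, here’s a toy sketch in Python. This is my own illustration, not from Spears and Budolfson, and the welfare numbers are invented:

```python
# Toy illustration of maximin's pinprick implication.
# All welfare numbers are invented for illustration.

def maximin_value(welfares):
    """Maximin evaluates an outcome by the welfare of its worst-off individual."""
    return min(welfares)

# Person 1 is tortured either way (welfare around -100).
# Option A: prevent an extra pinprick to person 1; person 2 is tortured
# at a very slightly less bad level.
option_a = [-100.0, -99.0]

# Option B: person 1 suffers the extra pinprick; person 2 is spared entirely.
option_b = [-100.5, 0.0]

options = {"A": option_a, "B": option_b}
best = max(options, key=lambda name: maximin_value(options[name]))
print(best)  # A: the worst-off level under A (-100.0) beats B's (-100.5),
             # so maximin prevents the pinprick rather than the torture.
```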
Limited aggregation allows you to make some intuitive tradeoffs without extreme prioritization like maximin’s and without allowing tiny harms to aggregate to outweigh large harms.
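Here’s a minimal sketch of one way limited aggregation is sometimes formalized, a crude “relevance” threshold. The threshold value is an arbitrary choice of mine, and the views in the literature are more careful than this:

```python
# Crude "relevance view" sketch of limited aggregation: a claim only counts
# toward the total if it is at least some fraction of the strongest competing
# claim. The 0.1 threshold is an arbitrary illustrative choice.

RELEVANCE_THRESHOLD = 0.1

def group_strength(harms_prevented, strongest_claim):
    """Sum only the harms that are 'relevant' relative to the strongest claim at stake."""
    return sum(h for h in harms_prevented if h >= RELEVANCE_THRESHOLD * strongest_claim)

def choose(group_1, group_2):
    strongest = max(max(group_1), max(group_2))
    s1 = group_strength(group_1, strongest)
    s2 = group_strength(group_2, strongest)
    return "group 1" if s1 >= s2 else "group 2"

# Two severe harms outweigh one severe harm:
print(choose([100, 100], [100]))           # group 1

# But a million trivial harms never outweigh one severe harm: they fall
# below the relevance threshold (0.1 * 100 = 10), so they don't aggregate.
print(choose([0.001] * 1_000_000, [100]))  # group 2
```

Notice that whether the trivial harms count at all depends on what else is at stake, which is exactly why views like this give up the independence of irrelevant alternatives.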
On the other hand, there are views that reject the independence of irrelevant alternatives but don’t allow any aggregation at all, and instead require you to minimize the greatest individual loss in welfare (not maximize the welfare of the worst-off individual or prevent the worst states, as maximin does). This doesn’t allow enough tradeoffs either, in my view. Scanlon the contractualist and Tom Regan the deontological animal rights theorist endorsed such a principle, as “the greater burden principle” and “the harm principle”, respectively. Maybe also the animal advocate Richard Ryder, with his “painism”, unless that is just a form of maximin.
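A toy sketch of the difference (numbers invented again): minimizing the greatest individual loss and maximin can disagree about the very same pair of options.

```python
# Toy contrast between maximin (maximize the worst-off welfare level) and a
# no-aggregation rule that minimizes the greatest individual *loss*, in the
# spirit of Scanlon's greater burden principle or Regan's harm principle.
# All numbers are invented for illustration.

def maximin_value(outcome):
    return min(outcome)

def greatest_loss(baseline, outcome):
    """The largest welfare loss any single individual suffers relative to baseline."""
    return max(b - o for b, o in zip(baseline, outcome))

baseline = [-50, 50]     # person 1 badly off, person 2 well off
options = {
    "A": [-55, 50],      # small loss (5) to the already badly-off person 1
    "B": [-50, 10],      # large loss (40) to the well-off person 2
}

# Maximin prefers B, since B's worst-off level (-50) beats A's (-55):
print(max(options, key=lambda n: maximin_value(options[n])))            # B

# Minimizing the greatest individual loss prefers A (a loss of 5 vs 40),
# regardless of who ends up worst off overall:
print(min(options, key=lambda n: greatest_loss(baseline, options[n])))  # A
```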
Thanks, Michael. This is what I’ve been looking for. I’ll check out your links.
I tend to agree with Ryder, although I don’t know how thorough his framework is.
Thanks again.
PS: Hey Michael, those links were interesting. Do you have a good link that goes into more detail about “limited aggregation”?
Thanks,
-Matthew Michael
I understand that you lose a lot (and I appreciate your blog posts). But that is not an argument that additivity is correct. As I’ve written for my upcoming book:
Imagine a universe that has only two worlds, World R and World FL. In World R, Ricky the Rooster is the only sentient being, and is suffering in an absolutely miserable life.
This is bad. But where is it bad? In Ricky’s consciousness. And nowhere else.
On World FL, Rooster Foghorn is living in one forest and Rooster Leghorn is living in a separate forest. They are World FL’s only sentient beings, and they don’t know each other. Their lives are as bad as Ricky’s.
Our natural response is to think that World FL is twice as bad as World R. But where could it possibly be twice as bad? Foghorn’s life is bad in his consciousness and nowhere else. Leghorn’s life is bad in his consciousness and nowhere else.
Where is their world twice as bad as Ricky’s?
Nowhere.
Okay, yes, I admit it is twice as bad in your mind and my mind. But we are not part of that universe. Imagine that these worlds are unknown to any other sentient being. Then there is simply nowhere that World FL is worse than World R.
In this universe, there are three worlds and only three worlds: one in each of the three roosters’ minds.
Tell me where I am factually wrong. Please, I’m asking you. My life would be much easier and happier if you would.
Don’t say that the implications of this insight lead to absurd conclusions that offend our intuitions. I already know that! Just tell me where I am factually wrong.
I know (oh, yes, I know) that this seems like it can’t possibly be right. This is because we can’t help but be utilitarian in this regard, just like we can’t help but feel like we are in control of our consciousness and our decisions and our choices.
But I can see no way around this simple fact: morally relevant “badness” exists only in individual consciousnesses.