I’m not fully sure to what extent this piece means to argue for (i) utilitarianism beats deontology, versus (ii) utilitarianism is the correct moral theory, or at least close to it. (At times it felt to me like the former, and at times the latter, though it’s very possible that I did not read closely enough.)
To the extent that (ii) is the intended conclusion, I think this is overconfident on a couple of counts. Firstly, one’s all-things-considered view should probably take into account that only 30% of academic philosophers are consequentialists (see Bourget & Chalmers, 2020, p. 8; note that consequentialism is a superset of utilitarianism). Secondly, there are reasons relating to infinite ethics. As Joe Carlsmith (2022) puts it:
I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure.
…
But I think infinite ethics changes this picture. As I mentioned above: in the land of the infinite, the bullet-biting utilitarian train runs out of track. You have to get out and wander blindly. The issue isn’t that you’ve become fanatical about infinities: that’s a bullet, like the others, that you’re willing to bite. The issue is that once you’ve resolved to be 100% obsessed with infinities, you don’t know how to do it. Your old thing (e.g., “just sum up the pleasure vs. pain”) doesn’t make sense in infinite contexts, so your old trick – just biting whatever bullets your old thing says to bite – doesn’t work (or it leads to horrific bullets, like trading Heaven + Speck for Hell + Lollypop, plus a tiny chance of the lizard). And when you start trying to craft a new version of your old thing, you run headlong into Pareto-violations, incompleteness, order-dependence, spatio-temporal sensitivities, appeals to persons as fundamental units of concern, and the rest. In this sense, you start having problems you thought you transcended – problems like the problems the other people [e.g., deontologists] had. You start having to rebuild yourself on new and jankier foundations.
…
All in all, I currently think of infinite ethics as a lesson in humility: humility about how far standard ethical theory extends; humility about [...] how little we might have seen or understood.
It was trying to argue for (ii). I think that if we give up any side constraints, which is what my piece argued for, we get something very near utilitarianism—at the very least consequentialism. Infinitarian ethics is everyone’s problem.
I respectfully disagree. Firstly, that is by no means the last word on infinite ethics (see papers by Manheim and Sandberg, and a more recent paper out of the Global Priorities Institute). Prematurely abandoning utilitarianism because of infinities is a bit like (obviously the analogy is not perfect) abandoning the general theory of relativity because it can’t deal with infinities.
Secondly, we should act as if we are in a finite world: it would be seen as terribly callous of someone not to have relieved the suffering of others if it turned out we were in a finite universe all along. It is telling that virtually no one has substantively changed their actions as a result of infinite ethics. This is sensible and prudent.
Thirdly, in an infinite world, we should understand that utilitarianism is not about maximising some abstract utility function or number in the sky, but about improving the conscious experiences of sentient beings. Infinities don’t change the fact that I can reduce the suffering of the person in front of me, or the sentient being on the other side of the world, or the fact that this is good for them. And there are good practical, utilitarian reasons not to spend one’s time focusing on other potential worlds.
Thank you for engaging. Respectfully, however, I’m not compelled by your response.
Prematurely abandoning utilitarianism because of infinities is a bit like [...].
I’m not saying that we should prematurely abandon utilitarianism (though perhaps I did not make this clear in my above comment). I’m saying that we do not have an “ultimate argument” for utilitarianism at present, and that there’s a good chance that further reflection on known unknowns such as infinite ethics will reveal that our current conception of utilitarianism—in so far as we’re putting it forward as a “correct moral theory” candidate—is non-trivially flawed.
Secondly, we should act as if we are in a finite world [...] This is sensible and prudent.
I disagree. I think we should act so as to do the most good, and this may involve, for example, evidentially cooperating with other civilizations across the potentially infinite universe/multiverse. Your sentence “it would be seen as terribly callous of someone not to have relieved the suffering of others if it turned out we were in a finite universe all along” seems to me to claim either that we should abandon expected value reasoning, or that we should set our credence on the universe/multiverse being infinite to zero, even though acting on a best-guess credence might allow us to reduce suffering by a greater amount. I view both options as incorrect.
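To put the point semi-formally (a toy sketch of my own; the symbols are purely illustrative, not from either of our arguments): let $p > 0$ be one’s credence that the universe/multiverse is infinite, and let $S$ be the finite amount of suffering an action relieves. Then the action’s expected value looks something like
$$
\mathbb{E}[V] \;=\; (1-p)\cdot S \;+\; p\cdot V_{\infty},
$$
where $V_{\infty}$ is the action’s value conditional on an infinite world, a quantity infinite ethics has not yet taught us how to evaluate. Acting “as if” the world is finite amounts to setting $p = 0$ by fiat, which discards the second term rather than grappling with it; acting on a best-guess $p$ keeps the first term’s reason to relieve suffering while honestly leaving the second term an open problem.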
Thirdly, [...] Infinities don’t change the fact that I can reduce the suffering of the person in front of me, or the sentient being on the other side of the world
I believe this claim falls foul of the Pareto improvement plus agent-neutrality impossibility result in infinite ethics, once you try to decide on whose suffering to reduce. (Another objection some—e.g., Bostrom—might make is that if there is infinite total suffering, then reducing suffering by a finite amount does nothing to reduce total suffering. But I’m personally less convinced by this flavor of objection.)
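For concreteness, here is the standard toy construction behind that impossibility result (a sketch, reading agent-neutrality as invariance under arbitrary permutations of who gets what). Consider an infinite population indexed $i = 1, 2, 3, \ldots$ and two worlds
$$
x = (0, 1, 0, 1, 0, 1, \ldots), \qquad y = (1, 1, 0, 1, 0, 1, \ldots).
$$
Each world contains infinitely many 0s and infinitely many 1s, so $y$ is a rearrangement of $x$, and agent-neutrality implies $x \sim y$. Yet $y_i \ge x_i$ for every $i$, with strict inequality at $i = 1$, so the Pareto principle implies $y \succ x$. The two principles are jointly inconsistent, which is why “just reduce this person’s suffering” stops yielding a stable ranking once we compare infinite worlds.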
Thanks for your response. It seems we disagree on much less than I had initially assumed. My response was mostly intended for someone who has prematurely become a nihilist (as apparently happened to one of Carlsmith’s friends), whereas you remain committed to doing the most good. And I was mainly addressing the last flavour of objection you mention.