Thanks for this fantastic post and your comments; I both enjoyed and learned from them.
Is the following a reasonable summary of your position? Is there anything you would change or add?
Utilitarianism is not a good theory of everything for morality. It's helpful and important in some situations, such as when we have to make trade-offs between costs and benefits that are relatively commensurate, when we deal with particular types of uncertainty, and when we want to generate insights. But it doesn't really work or help in other situations. There are several reasons for this, or at least ideas gesturing in this direction. For one, no theory or model is a theory of everything in any domain, so why should utilitarianism be any different for ethics? For another, utilitarianism doesn't help us when we have to trade off different kinds of values against each other. A third is that in some situations we inevitably have to exercise context-dependent judgment that cannot be captured by utilitarianism.
This is not an anti-intellectual argument that system construction is useless. Rather, it is a judgment about the limits of a particular model or theory. While such a judgment may not be justified from some first principle or more fundamental system, that doesn't mean the judgment is wrong or unjustified. Part of the fundamental critique is that it is impossible or unworkable to find a complete system that would guide our thinking in all situations; besides infinite-regress problems, it is inescapable that we have to make particular moral judgments in specific contexts. This problem cannot be solved by an advanced AI or by assuming that there must be a single theory of everything for morality. Abstract theorizing cannot solve everything.
Utilitarianism has been incredibly helpful, probably critical, for effective altruism, such as in the argument for donating to the most effective global health charities or interventions. But it can also lead to undesirable value dictatorship and fanaticism.
Still, this doesn't mean EA necessarily has a problem with fanaticism. It is possible to use utilitarianism in a wise and non-dogmatic manner. In practice, most EAs already do something like this, and their actions are influenced by judgment, restraint, and a pluralism of values, whatever their stated or endorsed beliefs might be.
The problem is that they don't really understand why or how they do this, beyond that it is desirable and perhaps necessary [is this right?]. People do get off the train to crazy town at some point, but they don't really know how to justify it within their professed or desired framework, besides some ad hoc patches like moral uncertainty. The desire for a complete system that would guide all actions still seems reasonable to EAs. EAs lack an understanding of the limits of systematic thinking.
EA should move away from thinking that utilitarianism and abstract moral theories can solve all problems of morality, and instead seek a better understanding of the world as it is. This may improve EA efforts in policy, politics, and other social contexts where game-theoretic considerations and judgment play critical roles, and where consequentialist reasoning can be detrimental.
Thank you very much for writing this; I have found it incredibly useful, and I wish all complex philosophical texts were followed by something like this. Just wanted to give you some extra feel-good for doing this, because it is well deserved: it helped me immensely to fit everything I read into place.
Thanks so much! I’m really glad this was helpful for you.
Thank you for your fantastic summary! Yes, I think that’s a great account of what I’m saying in this post.