Apologies if this is rude, but your response is a good example of what I was talking about. I don’t think I’ll be able to write a short response without simply restating what I was saying in a slightly different way, but if you want to read a longer version of what I was talking about, you might be interested in my comment summarizing a much longer piece critiquing the use of utilitarianism in EA:
Utilitarianism is not a good theory of everything for morality. It's helpful and important in some situations, such as when we have to make trade-offs between costs and benefits that are relatively commensurate, when we deal with particular types of uncertainty, and when we want to generate insights. But it doesn't really work or help in other situations. There are several reasons for this, or at least ideas gesturing in this direction. For one, no theory or model is a theory of everything in any domain, so why should utilitarianism be any different for ethics? For another, utilitarianism doesn't help us when we have to trade off different kinds of values against each other. And in some situations, we inevitably have to exercise context-dependent judgment that cannot be captured by utilitarianism.
This is not an anti-intellectualism argument that system construction is useless. Rather, this is a judgment about the limits of a particular model or theory. While such a judgment may not be justified from some kind of first principle or more fundamental system, this doesn’t mean the judgment is wrong or unjustified. Part of the fundamental critique is that it is impossible/unworkable to find some kind of complete system that would guide our thinking in all situations; besides infinite regress problems, it is inescapable that we have to make particular moral judgments in specific contexts. This problem cannot be solved by an advanced AI or by assuming that there must be a single theory of everything for morality. Abstract theorizing cannot solve everything.
Utilitarianism has been incredibly helpful, probably critical, for effective altruism, such as in the argument for donating to the most effective global health charities or interventions. It can also lead to undesirable value dictatorship and fanaticism.
But this doesn’t mean EA necessarily has a problem with fanaticism either. It is possible to use utilitarianism in a wise and non-dogmatic manner. In practice most EAs already do something like this, and their actions are influenced by judgment, restraint, and pluralism of values, whatever their stated or endorsed beliefs might be.
The problem is that they don't really understand why or how they do this, beyond a sense that it is desirable and perhaps necessary [is this right?]. People do get off the train to crazy town at some point, but don't really know how to justify it within their professed/desired framework besides some ad-hoc patches like moral uncertainty. The desire for a complete system that would guide all actions seems reasonable to EAs. EAs lack an understanding of the limits of systemic thinking.
EA should move away from thinking that utilitarianism and abstract moral theories can solve all problems of morality, and instead seek to understand the world as it is better. This may lead to improvements to EA efforts in policy, politics, and other social contexts where game-theoretic considerations and judgment play critical roles, and where consequentialist reasoning can be detrimental.
No worries. It is interesting, though, that you think my comment is a great example when it was meant to be a rebuttal. What I'm trying to say is, I wouldn't really identify as a 'utilitarian' myself, so I don't think I have a vested interest in this debate. Nonetheless, I don't think utilitarianism 'breaks down' in this scenario, as you seem to be suggesting. I think very poorly-formulated versions do, but those are not commonly defended, and with some adjustments utilitarianism can accommodate most of our intuitions very well (including the ones that are relevant here). I'm also not sure what the basis of the suggestion is that utilitarianism works worse when a situation is more unique and there is more context to factor in.
To reiterate, I think the right move is (progressive) adjustments to a theory, plus moral uncertainty (where relevant), both of which seem significantly more rational than particularism. It's very unclear to me how we can know that it's 'impossible or unworkable' to find a system that would guide our thinking in all situations. Indeed, some versions of moral uncertainty already seem to do this pretty well. I would also object to classifying moral uncertainty as an 'ad-hoc patch'. It wasn't developed to better accommodate our intuitions, but simply because, as a matter of fact, we find ourselves uncertain about which moral theory is correct (or 'preferable'), just as with empirical uncertainty.
I think it was a good example (I changed the wording from 'great' to 'good') because my point was about the role of abstract and formal theories of ethics generally, rather than being restricted to utilitarianism itself, and your response was defending abstract theories as the ultimate foundation for ethics. The point (which I am likely communicating badly to someone with different beliefs) is that formal systems have limits and are imperfectly applied by flawed humans with limited time, information, etc. It is all well and good to talk about making adjustments to theories to refine them, and indeed philosophers should do so, but applying them to real life is necessarily an imperfect process.