If it were the case that belief in utilitarianism predictably causes the world to have less utility, then under basically any common moral system there’s no strong case for spreading utilitarianism[1]. In such a world, there is of course no longer a utilitarian case for spreading utilitarianism, and afaik the other common ethical systems would not endorse spreading utilitarianism, especially if it reduces net utility.
Now “historically utilitarianism has led to less utility” does not strictly imply that in the future “belief in utilitarianism predictably causes the world to have less utility.” But it is extremely suggestive, and more so if the harm looks overdetermined rather than attributable to a specific empirical miscalculation, error in judgement, or bad actor.
I’m personally pretty agnostic on whether utilitarianism has been net negative. The main case that it hasn’t been is that I think Bentham was unusually far-seeing and correct relative to his contemporaries. The strongest case that it has been probably comes from people in our cluster of ideas[2] accelerating AI capabilities (runners-up include FTX, some specific culty behaviors, and the well-poisoning of good ideas), though my guess is that there isn’t much evidence that the more utilitarian EAs are more responsible.
On a more theoretical level, Askell’s distinction between utilitarianism as a criterion of rightness vs. as a decision procedure is also relevant here.
And depending on the details, there might indeed be a moral obligation to reduce belief in utilitarianism.
Note that even if utilitarianism overall is a net positive moral system for people to have, if it were the case that EAs specifically would be destructive with it, there’s still a local case against it.