I think we are just using two different definitions of utilitarian. I am talking about maximizing well-being… If that means adding more ice cream or art into agents’ lives, then utilitarianism demands ice cream and art. Utilitarianism regards the goal… Maximization of net value of experience.
A more apt comparison than a specific political system such as communism, capitalism, or mercantilism would be a political philosophy that defined the goal of governmental systems as “advancing the welfare of people within a state.” Then, different political systems could be evaluated by how well they achieve that goal.
Similarly, utilitarianism is agnostic as to whether one should drink Huel, produce and enjoy art, work X hours per week, etc. All of these questions come down to whether the agent is producing better outcomes for the world.
So if you’re saying that the habits of EAs are not sustainable (and thus aren’t doing the greatest good, ultimately), you’re not criticizing utilitarianism. Rather, you’re saying they are not being the best utilitarians they can be. You can’t challenge utilitarianism by pointing out that utilitarians’ choices don’t produce the most good; that just challenges the choices they make within a utilitarian framework.
If it were the case that belief in utilitarianism predictably causes the world to have less utility, then under basically any common moral system there’s no strong case for spreading utilitarianism[1]. In such a world, there is of course no longer a utilitarian case for spreading utilitarianism, and afaik the other common ethical systems would not endorse spreading utilitarianism, especially if it reduces net utility.
Now “historically utilitarianism has led to less utility” does not strictly imply that in the future “belief in utilitarianism predictably causes the world to have less utility.” But it is extremely suggestive, and more so if it looks overdetermined rather than due to a specific empirical miscalculation, error in judgement, or bad actor.
I’m personally pretty neutral on whether utilitarianism has been net negative. The case against is that I think Bentham was unusually far-seeing and correct relative to his contemporaries. The strongest case for, in my opinion, probably comes from people in our cluster of ideas[2] accelerating AI capabilities (runners-up include FTX, some specific culty behaviors, and the well-poisoning of good ideas), though my guess is that there isn’t much evidence that the more utilitarian EAs are more responsible.
On a more theoretical level, Askell’s distinction between utilitarianism as a criterion of rightness vs decision procedure is also relevant here.
Note that even if utilitarianism overall is a net positive moral system for people to have, if it were the case that EAs specifically would be destructive with it, there’s still a local case against it.
And depending on details, there might indeed be a moral obligation to reduce utilitarianism.