I think we are just using two different definitions of utilitarian. I am talking about maximizing well-being… If that means adding more ice cream or art into agents' lives, then utilitarianism demands ice cream and art. Utilitarianism regards the goal… maximization of net value of experience.
A more apt comparison than a specific political system such as communism, capitalism, or mercantilism would be a political philosophy that defined the goal of governmental systems as "advancing the welfare of people within a state." Then, different political systems could be evaluated by how well they achieve that goal.
Similarly, utilitarianism is agnostic as to whether one should drink Huel, produce and enjoy art, work X hours per week, etc. All of these questions come down to whether the agent is producing better outcomes for the world.
So if you're saying that the habits of EAs are not sustainable (and thus aren't doing the greatest good, ultimately), you're not criticizing utilitarianism. Rather, you're saying they are not being the best utilitarians they can be. You can't challenge utilitarianism by saying that utilitarians' choices don't produce the most good. Then you're just challenging the choices they make within a utilitarian lens.
If it were the case that belief in utilitarianism predictably causes the world to have less utility, then under basically any common moral system there's no strong case for spreading utilitarianism[1]. In such a world, there is of course no longer a utilitarian case for spreading utilitarianism, and afaik the other common ethical systems would not endorse spreading utilitarianism, especially if it reduces net utility.
Now "historically utilitarianism has led to less utility" does not strictly imply that in the future "belief in utilitarianism predictably causes the world to have less utility." But it is extremely suggestive, and more so if it looks overdetermined rather than due to a specific empirical miscalculation, error in judgement, or bad actor.
I'm personally pretty neutral on whether utilitarianism has been net negative. The case against is that I think Bentham was unusually far-seeing and correct relative to his contemporaries. The strongest case for, in my opinion, probably comes from people in our cluster of ideas[2] accelerating AI capabilities (runners-up include FTX, some specific culty behaviors, and the well-poisoning of good ideas), though my guess is that there isn't much evidence that the more utilitarian EAs are more responsible.
On a more theoretical level, Askell's distinction between utilitarianism as a criterion of rightness vs. as a decision procedure is also relevant here.
And depending on the details, there might indeed be a moral obligation to reduce belief in utilitarianism.
Note that even if utilitarianism overall is a net positive moral system for people to have, if it were the case that EAs specifically would be destructive with it, there's still a local case against it.