Utilitarianism without accounting for long-term consequences straightforwardly says that you should be willing to steal from the middle class to send money to poor people (even if some of those people kill themselves) or kill 1 person to save 2.
Possibly the most visible element in EA utilitarianism is literally called “longtermism,” so I am not sure this objection is relevant to utilitarianism as practiced here.
But I understand your objection: conceivably, you could find yourself in a situation where, in your honest judgment, the very best thing you can do for the world is to commit a terrible crime.
The problem is that when people design these thought experiments, they often set them up in such a way that the crime can be rejected on utilitarian grounds anyway. For example, I’m sure you’ve heard the surgeon example: should a surgeon kill one healthy patient to harvest their organs and transplant them into 5 other patients to save their lives?
Most people feel this is repugnant. But the natural way to argue against it is with utilitarianism itself. If we did this, patients would flee from surgeons, even fight them. Sick people who didn’t want to have somebody murdered to save their own lives would die rather than seek medical treatment. We would probably get far more QALYs by leaving healthy people alive than by killing them to transplant their organs into people who probably have other underlying pathologies.
These are just natural, obvious consequences of trying to implement this rule. By contrast, deontological and virtue ethics objections to this practice sound weak. “Doctors SWORE AN OATH to do no harm!” “Medicine is about practicing the virtue of beneficence!” Those sound like slogans.
Utilitarianism may, in specific and (for all practical purposes) exceedingly rare circumstances, cause somebody to do something awful to achieve a good outcome. But at all other times, utilitarianism motivates you to work as hard as you can to avoid ever being put in such circumstances in the first place.
I think there are additional factors that make classical total utilitarians in EA more likely to severely violate rules:
x-risk mitigation has close to infinite expected value (a rough illustration is sketched below), and
short AI timelines mean that violating rules is unlikely to have harmful long-term effects.
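To make the first factor concrete, here is a rough, purely illustrative expected-value comparison. The numbers are hypothetical assumptions of mine, not figures anyone in this thread has endorsed: suppose there are $10^{16}$ potential future lives at stake and some rule-breaking intervention reduces extinction probability by 0.1%.

$$
\underbrace{10^{16}}_{\text{assumed future lives}} \times \underbrace{10^{-3}}_{\text{assumed drop in } P(\text{extinction})} = 10^{13} \ \text{expected lives saved.}
$$

Set against a number like that, the expected harm of almost any concrete rule violation looks negligible, which is why this style of reasoning can override ordinary constraints for a classical total utilitarian.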
Yes, I agree that believing the world may be about to end would tend to motivate more rule-breaking behavior in order to avoid that outcome. I’ll say that I’ve never heard anybody make the argument “Yes, AGI is about to paperclip the world, but we should not break any rules to prevent that from happening because that would be morally wrong.”
Usually, the argument seems to be “Yes, AGI is about to paperclip the world, but we still have time to do something about it and breaking rules will do more harm than good in expectation,” or else “No, AGI is not about to paperclip the world, so it provides no justification for breaking rules.”
I would be interested to see somebody bite the bullet and say:
The world is about to be destroyed.
There is one viable strategy for averting that outcome, but it requires a lot of rule-breaking.
We should not take that strategy, due to the rule-breaking, and let the world be destroyed instead.