I’m not sure the argument is specifically about fraud.
I think the argument is more that “when ends justify the means, you are far more likely to break norms / rules / laws”, which is a very old objection to utilitarianism and doesn’t rely on the FTX example.
No, the original argument is self-contradictory in a way that your version is not. If "the ends justify the means," then only those means that are, in fact, justified by the ends become justifiable; means the ends do not justify remain unjustifiable.
It would be fair to say “some forms of utilitarianism license fraudulent behavior in exchange for a sufficient altruistic outcome.”
Of course, we can also say “some forms of deontology advocate we allow the world to be destroyed before we break a rule.”
I don’t think either line of argument leads to productive moral debate.
Right, but utilitarianism has a lower bar for deciding that means are justifiable than other ethical views do (things just need to be overall net positive, even if means are extremely harmful).
I think these weaknesses of utilitarianism and deontology are useful to keep in view. Given that EA contains lots of utilitarians, is closer to utilitarianism than to common-sense ethics, and is arguably watered-down utilitarianism, I think it's good for EAs to keep the major weaknesses of utilitarianism at the front of their minds.
Claiming this as a “weakness” of utilitarianism needs justification, and I stridently disagree with characterizing EA utilitarianism as “watered down.” It is well-thought-through and nuanced.
A weakness in the sense that it severely contradicts our moral intuitions and conflicts with other moral systems: under classical total utilitarianism, this would justify not only fraud to fund donations to AI safety but also violence against AI companies.
(I understand that not everyone agrees that violating moral intuitions makes a moral system weaker, but I don’t want to debate that because I don’t think there’s much point in rehashing existing work on meta-ethics).
I mean that EA is watered-down classical utilitarianism.
I don’t think that’s bad because classical utilitarianism would support committing fraud to give more money to AI safety, especially with short AI timelines. And my understanding is that the consensus in EA is that we should not commit fraud.