The situation at FTX is illustrative of a central flaw in utilitarianism. When you start thinking the ends justify the means, anything becomes justifiable.
Trust is so important. Doing the right thing is so important.
I don’t really know what else to say.
There is a possibility SBF committed fraud motivated directly by his own utilitarian beliefs—a charitable Ponzi scheme.
But your argument is that utilitarianism systematically generates fraud in a way alternative moral systems do not. Finding one potential bad example is nowhere near enough to justify such a claim.
I’m not sure the argument is specifically about fraud.
I think the argument is more that “when ends justify the means, you are far more likely to break norms / rules / laws”, which is a very old objection to utilitarianism and doesn’t rely on the FTX example.
No, the argument is self-contradictory in a way that your version is not. “When the ends justify the means,” only those means that are, in fact, justified by the ends become justifiable. Means that are not justified by the ends do not become justifiable.
It would be fair to say “some forms of utilitarianism license fraudulent behavior in exchange for a sufficient altruistic outcome.”
Of course, we can also say “some forms of deontology advocate we allow the world to be destroyed before we break a rule.”
I don’t think either line of argument leads to productive moral debate.
Right, but utilitarianism has a lower bar for deciding that means are justifiable than other ethical views do (things just need to be overall net positive, even if means are extremely harmful).
I think these weaknesses of utilitarianism and deontology are useful to keep in view, and given that EA contains lots of utilitarians / is closer to utilitarianism than common-sense ethics / is watered-down utilitarianism, I think it’s good for EAs to keep the major weaknesses of utilitarianism at the front of their minds.
Claiming this as a “weakness” of utilitarianism needs justification, and I stridently disagree with characterizing EA utilitarianism as “watered down.” It is well-thought-through and nuanced.
A weakness in the sense that it severely contradicts our intuitions on morality and severely violates other moral systems, because under classical total utilitarianism this would not only justify fraud to donate to AI safety, it would justify violence against AI companies too.
(I understand that not everyone agrees that violating moral intuitions makes a moral system weaker, but I don’t want to debate that because I don’t think there’s much point in rehashing existing work on meta-ethics).
I mean that EA is watered-down classical utilitarianism.
I don’t think that’s bad because classical utilitarianism would support committing fraud to give more money to AI safety, especially with short AI timelines. And my understanding is that the consensus in EA is that we should not commit fraud.
I will try to read everyone’s comments and the related articles that have been shared. I haven’t yet, but I’m going on a trip today — I may have time on the way.
To be clear: I am against utilitarianism. It is not my personal value system. It seems like an SBF-type-figure could justify any action if the lives of trillions of future people are in the balance.
The utilitarians who aren’t taking radical actions to achieve their ends just have a failure of imagination and ambition relative to SBF.
This doesn’t seem specific to utilitarianism. I think most ethical views would suggest that many radical actions would be acceptable if billions of lives hung in the balance. The ethical views that wouldn’t allow such radical actions would have their own crazy implications. Utilitarianism does make it easier to justify such actions, but with numbers so large I don’t think it generally makes a difference.
Even if other views in fact have the same implications as utilitarianism here, it’s possible that the effects of believing in utilitarianism are particularly psychologically pernicious in this sort of context. (Though my guess is the psychologically important things are just taking high stakes seriously, lack of risk aversion, and being prepared to buck common sense, and that those are correlated with believing utilitarianism but mostly not caused by it. But that is just a guess.)
‘The utilitarians who aren’t taking radical actions to achieve their ends just have a failure of imagination and ambition relative to SBF.’ Quite clearly, though, this has blown up in SBF’s face. Maybe the expected value was still good, but it’s entirely possible that the (many) utilitarians who think bucking conventional morality and law to this degree nearly always does more harm than good are simply correct, in which case utilitarianism itself condemns doing so (at least absent very strong evidence that your case is one of the exceptions).