The common-sense American lens for viewing these sorts of outcomes is a framework of personal responsibility. If SBF committed fraud, that is indicative of a problem with his personal character, not with the moral philosophy he claims to subscribe to.
I think this reflects two facts:
American society-at-large doesn’t expect people to think hard about the moral implications of their views—certainly not to be systematic about it, and even less to subscribe to a named moral philosophy.
American society-at-large tends to have low moral standards and to embrace cynical explanations for behavior.
Indeed, this is a big part of why a lot of third parties view EA with skepticism and assume that since we’re weird and not obviously allied with their political coalition, we must have cynical motives and agendas. The same perspective that doubts EAs could have principled or non-selfish reasons to do beneficial things, also doubts that EAs could have principled or non-selfish reasons to do destructive things.
I wouldn’t conclude from this that EAs are uninfluenced by our philosophical commitments, in real life. I expect that SBF was influenced in various important ways, though personality, character, etc. surely played a large role too.
Yes, this. Most fraudsters don’t have such strongly held views on why the Kelly criterion for determining optimal bet size doesn’t apply to them! (SBF did a famous thread on this and Caroline’s tumblr has that line about how real EAs endorse high leverage and double-or-nothing flips).
I think it would be wrong to blame utilitarianism per se for what happened because the vast majority of utilitarians absolutely do care about the risk of ruin—as they should—but I think SBF’s own brand of EA-aligned thinking (I assume short AI/bioengineered pandemic timelines factored in here) played a huge role in why he took such insane risks.
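For readers unfamiliar with the Kelly reference above, here is a minimal illustrative sketch (my own, not from the thread) of why Kelly-style reasoning rejects repeated double-or-nothing bets even with a large positive edge. The 60% win probability and even-money payout are assumed numbers chosen purely for illustration.

```python
import math

def kelly_fraction(p: float, b: float = 1.0) -> float:
    """Kelly-optimal fraction of bankroll for a bet paying b:1 that wins with probability p."""
    return p - (1 - p) / b

def expected_log_growth(f: float, p: float, b: float = 1.0) -> float:
    """Expected log-growth of bankroll per bet when staking fraction f."""
    if f >= 1.0:
        # Betting the whole bankroll means one loss zeroes it out:
        # log(0) = -infinity, i.e. certain eventual ruin under repetition.
        return float("-inf")
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

p = 0.6                                   # hypothetical 60% edge on an even-money flip
f_star = kelly_fraction(p)                # Kelly says stake only 20% of bankroll
growth_kelly = expected_log_growth(f_star, p)   # positive long-run growth
growth_all_in = expected_log_growth(1.0, p)     # -inf: repeated all-in guarantees ruin
```

The point of the sketch: even with a strong edge, a Kelly bettor caps the stake well below the full bankroll, because repeated all-in (double-or-nothing) bets drive long-run log-growth to negative infinity. Rejecting that cap is exactly the unusual position the comment above attributes to SBF.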
Did you mean “low moral expectations” instead of “low moral standards?”
I mean both.