Sharing my reflections on the piece here (not a direct response to this particular post; these are thoughts I originally shared with a friend).
While I agree with lots of points the author makes and think he raises valuable critiques of EA, I don’t find his arguments related to SBF to be especially compelling. My run-through of the perceived problems within EA that the author describes and my reactions:
1. The dominance of philosophy. I personally find parts of long-termism kooky and I’m not strongly compelled by many of its claims, but the Vox author doesn’t explain how this relates to SBF (or his misdeeds)… it feels more like shoehorning a critique of EA into a piece on SBF?
2. Porous boundaries between billionaires and their giving. So yes, it sounds like SBF was very directly involved in the philanthropy his funds went toward, but I don’t think that caused (much? any?) incremental reputational harm to EA vs. a world where he created the “SBF family foundation” and had other people running the organization.
If I wanted to rescue this argument, maybe I could say SBF’s behavior here is representative of a common trait of his (at FTX and in his charity): SBF doesn’t even have the dignity to surround himself with yes-men; he insists on doing it all himself! And maybe that’s a red flag re: cult of personality/genius and/or fraud that EA should have caught on to.
I will say, though, that the FTX Future Fund had a board/team that was fairly star-studded and ran a big re-granting program (i.e., let others make grants with their money). Which is to say I’m not sure how directly involved SBF actually was in the giving. [As an aside, I think it’s fine for billionaires to direct their own giving and am a lot more suspicious of non-profit bloat and organizational incentives than the Vox author is.]
3. Utilitarianism free of guardrails. I agree a lack of guardrails is a problem, but:
a) On utilitarianism’s own account it seems to me you should recognize that if you commit massive fraud you’ll probably get caught and it will all be worthless (+ cause serious reputational harm to utilitarianism), so then committing the fraud is doing utilitarianism wrong. [I don’t think I’m no-true-Scotsman-ing here?]
b) More importantly… the author doesn’t explain how unabashed utilitarianism led to SBF’s actions—it’s vague hand-waving, making a point by association rather than via actual causal reasoning/proof, in the same vein as the dominance-of-philosophy point above. I guess the steelman is: SBF wanted to do the most good at any cost, and genuinely thought the best way to do so was to commit fraud(?). A bit tough for me to swallow.
4. Utilitarianism full of hubris. A rare reference to evidence (well, an unconfirmed account, but at least it’s something!). Comparing SBF’s reported “let’s double-or-nothing our way out of letting Alameda default” reasoning to the St. Petersburg paradox is an interesting point to make, but SBF’s take on this was so wild as to surprise other EA-ers. So it strikes me as a point in favor of “SBF has absurd viewpoints and his actions reflect that” vs. “EA enabled SBF.” Meanwhile the author moves directly from this anecdote to “This is not, I should say, the first time a consequentialist movement has made this kind of error” (emphasis added). SBF != the movement, and I think the consensus EA view is the opposite of SBF’s, so this feels misleading at best.
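As a toy illustration of why SBF’s reported St. Petersburg-style reasoning surprised people (my own sketch, not from the article): a linear-utility EV-maximizer keeps taking double-or-nothing bets with a small edge, because the expected value compounds with every bet, even though the probability of ever cashing out collapses toward zero. The parameters below (51% win probability, 20 rounds) are illustrative assumptions, not figures from the piece.

```python
import random

def simulate(p_win=0.51, rounds=20, trials=100_000, seed=0):
    """Repeatedly take a double-or-nothing bet with win probability p_win.

    Returns (mean final wealth across trials, fraction of trials that
    went bust). A single loss at any point wipes out all wealth.
    """
    rng = random.Random(seed)
    total_wealth, busts = 0.0, 0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            if rng.random() < p_win:
                wealth *= 2.0  # win: double up and keep betting
            else:
                wealth = 0.0   # lose: everything is gone
                break
        total_wealth += wealth
        if wealth == 0.0:
            busts += 1
    return total_wealth / trials, busts / trials

mean, bust_rate = simulate()
# Per-bet EV is 2 * 0.51 = 1.02 > 1, so expected wealth after 20 rounds
# is 1.02**20 ≈ 1.49 -- yet survival probability is 0.51**20 ≈ 1.4e-6,
# so virtually every trial ends bankrupt and the EV is carried entirely
# by astronomically rare runs of 20 straight wins.
```

The gap between the ever-growing expectation and the near-certain ruin is exactly the “take the St. Petersburg bet” failure mode, and the fact that other EA-ers found SBF’s embrace of it shocking is part of why I read the anecdote as evidence about SBF rather than about the movement.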
One EA critique in the piece that resonated with me—and that I’m not sure I’d seen put so succinctly elsewhere—is:
“The philosophy-based contrarian culture means participants are incentivized to produce ‘fucking insane and bad’ ideas, which in turn become what many commentators latch to when trying to grasp what’s distinctive about EA.”
While not about SBF, it’s a point I don’t see us talking about often enough with regard to EA perceptions / reputation and I appreciated the author making it.
TL;DR: I thought it was an interesting and thought-provoking piece with some good critiques of EA, but the author (or—perhaps more likely—editor who wrote the title / sub-headers) bit off more than they could chew in actually connecting EA to SBF’s actions.
“The philosophy-based contrarian culture means participants are incentivized to produce ‘fucking insane and bad’ ideas, which in turn become what many commentators latch to when trying to grasp what’s distinctive about EA.”
(Was that originally in the article? If so it’s been edited now)
Regardless, I’ve been concerned for years about the perverse incentives for (EA) academics both to produce weird ideas and to end the discussion of those ideas with ‘more research necessary’. While I also disagree with much of the article, I’m glad to finally see that sentiment in print. It needs to be discussed much more IMO.
Just seeing this, but yes it was a quote from the original piece! FWIW I appreciate your use of “weird” vs. the original author’s more colorful language (though no idea if that’s what your pre-edit comment was in reference to)