(I’m going to wrap up a few disparate threads together here, and will probably be my last comment on this post ~modulo a reply for clarification’s sake. happy to discuss further with you Rob or anyone via DMs/Forum Dialogue/whatever)
(to Rob & Oli—there is a lot of inferential distance between us and that’s ok, the world is wide enough to handle that! I don’t mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet… I can’t help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don’t like the EA Forum? Because they’re scared of retaliation? It feels like this would be useful and important information for the community to know.
I’m also not sure what to make of Habryka’s response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but it does come off to me as a “social experiment in radical honesty and perfect transparency”, which is a vibe I often get from the Lightcone-adjacent world. And, with all due respect, I’m not really interested in that whole scene. I’m more interested in questions like:
1. Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
2. What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
    a. If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
    b. Critically, if these warnings were accepted as true, who decided to keep this a secret, to suppress it from the community at large, and not act on it?
3. Why did SBF end up with such a dangerous set of beliefs about the world? (I think they’re best described as ‘risky beneficentrism’ - see my comment here and Ryan’s original post here)
4. Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
5. Do senior EAs have any plan to respond to the hit to EA morale from FTX and its aftermath, along with the intensely negative social reaction to EA, apart from ‘quietly hope it goes away’?
Writing it down, 2.b strikes me as what I mean by ‘naive consequentialism’, if it happened: people had information that SBF was a bad character who had done harm, but calculated (or assumed) that he’d do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn’t really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr: I think there’s a difference between a discussion about what norms EA ‘should’ have, or senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the ‘minimal viable information-sharing’ that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming the latter, and while I applaud you and Oli for pushing on it, sometimes I wish you would both be less vague too. Some of us don’t have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post-FTX. Wounds might have faded, but they haven’t healed 😞
[1] Not that people should have guessed the scale of his wrongdoing ex ante, but was there enough to start downplaying and disassociating?