(I'm going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification's sake. Happy to discuss further with you Rob or anyone via DMs/Forum Dialogue/whatever.)
(to Rob & Oli - there is a lot of inferential distance between us and that's ok, the world is wide enough to handle that! I don't mean to come off as rude/hostile, and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet... I can't help but still feel some of the frustrations of my original comment. Why does this person not want to share their thoughts publicly? Is it because they don't like the EA Forum? Because they're scared of retaliation? It feels like this would be useful and important information for the community to know.
I'm also not sure what to make of Habryka's response here and elsewhere. I think there is a lot of inferential distance between myself and Oli, but his approach does come off to me as a "social experiment in radical honesty and perfect transparency", which is a vibe I often get from the Lightcone-adjacent world. And, with all due respect, I'm not really interested in that whole scene. I'm more interested in questions like:
1. Were any senior EAs directly involved in the criminal actions at FTX/Alameda?
2. What warnings were given about SBF to senior EAs before the FTX blowup, particularly around the 2018 Alameda blowup, as recounted here?
    a. If these warnings were ignored, what prevented people from deducing that SBF was a bad actor?[1]
    b. Critically, if these warnings were accepted as true, who decided to keep this a secret, to suppress it from the community at large, and not to act on it?
3. Why did SBF end up with such a dangerous set of beliefs about the world? (I think they're best described as "risky beneficentrism" - see my comment here and Ryan's original post here.)
4. Why have the results of these investigations, or some legally-cleared version, not been shared with the community at large?
5. Do senior EAs have any plan to respond to the hit to EA morale as a result of FTX and the aftermath, along with the intensely negative social reaction to EA, apart from "quietly hope it goes away"?
Writing it down, 2.b. strikes me as what I mean by "naive consequentialism", if it happened. People had information that SBF was a bad character who had done harm, but calculated (or assumed) that he'd do more good being part of/tied to EA than otherwise. The kind of signalling you described as naive consequentialism doesn't really seem pertinent to me here, as interesting as the philosophical discussion can be.
tl;dr - I think there's a difference between a discussion about what norms EA "should" have, or senior EAs should act by, especially in the post-FTX and influencing-AI-policy world, and the "minimal viable information-sharing" that can help the community heal, hold people to account, and help make the world a better place. It does feel like the lack of communication is harming the latter, and I applaud you/Oli for pushing for it, but sometimes I wish you would both also be less vague. Some of us don't have the EA history and context that you both do!
epilogue: I hope Rebecca is doing well. But this post & all the comments make me feel more pessimistic about the state of EA (as a set of institutions/organisations, not ideas) post-FTX. Wounds might have faded, but they haven't healed.
[1] Not that people should have guessed the scale of his wrongdoing ex-ante, but was there enough to start downplaying and disassociating from him?