Holly --
I think the frustrating thing here, for you and me, is that, compared to its AI safety fiascos, EA did so much soul-searching after the Sam Bankman-Fried fiasco with the FTX fraud in 2022. We took the SBF/FTX debacle seriously as a failure of EA people, principles, judgment, mentorship, etc. We acknowledged that it hurt EA’s public reputation, and we tried to identify ways to avoid making the same catastrophic mistakes again.
But as far as I’ve seen, EA has done very little soul-searching for its complicity in helping to launch OpenAI, and then in helping to launch Anthropic—both of which have proven to be far, far less committed to serious AI safety ethics than they’d promised, and far less than we’d hoped.
In my view, accelerating the development of AGI, by giving the EA seal of approval first to OpenAI and then to Anthropic, has done far, far more damage to humanity’s likelihood of survival than the FTX fiasco ever did. But of course, so many EAs go on to get lucrative jobs at OpenAI and Anthropic, and 80,000 Hours is so delighted to host their job ads, that EA as a career-advancement movement is locked into the belief that ‘technical AI safety research’ within ‘frontier AI labs’ is a far more valuable use of bright young people’s talents than merely promoting grass-roots AI safety advocacy.
Let me know if that captures any of your frustration. It might help EAs understand why this double standard—taking huge responsibility for SBF/FTX turning reckless and evil, but taking virtually no responsibility for OpenAI/Anthropic turning reckless and evil—is so grating to you (and me).
I thought EA was too eager to accept fault for a few people committing financial crimes out of their sight. The average EA actually is complicit in the safetywashing of OpenAI and Anthropic! Maybe that’s why they don’t want to think about it…