As mentioned in that section, we clearly didn’t prioritise or eliminate this risk—we’re reflecting on how much that was a mistake, versus a good decision (to focus on other significant risks).
Hm, if you don’t mind the question — what are other significant risks you are thinking about? IMO, the FTX fraud scheme is a big deal if anything deserves to be called a big deal, from what I’ve read in the devastating legal documents about the situation, and given how SBF was framed as a poster child of EA and was a significant funder.
The only more significant risk I can think of is EA funding dangerous AI capabilities research via Anthropic, but even that isn't unrelated to FTX (since FTX/Alameda and their leadership were Anthropic's main funders). Also, my guess is that mitigating AI risk is not within the Community Health team's scope.