I feel less strongly that this is an “unusually important question” that needs an accurate / precise answer.
It seems like both A and B are bad scenarios that the EA movement should be more robust against, and it seems clear that regardless of which scenario (or some other possibility/combination) was true, the EA movement has room to improve when it comes to preventing / mitigating the harms from such risks.
I think, rather than over-indexing on the minutiae of SBF’s personal philosophy or psyche, it’s probably more useful for the EA movement to think about how it can strengthen itself against movement-related risks generally going forward. Those steering the EA movement could consider things like more transparent systems and better governance, find ways to reduce the risk of any one individual or small group of people taking actions that endanger the entire movement, and try to work out what else might lead to large gaps between the “EA ideal” and what “EA-in-practice” could end up looking like.
[written hastily, not very confident]