I agree the victim-perpetrator framing is an important lens through which to view this saga. But I also think an investor-investee framing is another important one; a framing that has different prescriptions for what lessons to take away and what to do next. The EA community staked easily a billion dollars' worth of its assets (in focus, time, reputation, etc.), and ended up losing it all. I think it’s crucial to reflect on whether the extent of our due diligence and risk management was commensurate with the size of EA’s bet.
As someone who has spent years spreading the message that humans are very prone to self-serving biases (hopefully this is an acceptable paraphrase of some complex ideas!), I’ve personally been surprised to see your many posts on the forum right now that seem to confidently assert that the outcome was both unforeseeable and unrelated to rationalist ideas (and therefore that EAs, including yourself, were purely victims rather than potentially also causal agents here).
To me, there seems to be a really plausible path from ideas about the extreme urgency of AI alignment research and the importance of exercising “extreme” personal agency (relative to existing social norms) to a group of people taking on extreme risks, with a lot of urgency and high personal agency, in order to raise funds for AI alignment research.
I have no connection to any of the people involved and no way to know whether that’s what happened in this case; I’m just saying that it seems like a plausible path to what happened here given the publicly available information, and I’m curious whether that’s something you’ve considered.
I think EAs could stand to learn something from non-EAs here, about how not to blame the victim even when the victim is you.