I think answers to this are highly downstream of object-level positions.
If you think timelines are short and scaled-up versions of current architectures will lead to AGI, then ‘what went wrong’ is contributing to a vastly greater chance of extinction.
If you don’t agree with the above, then ‘what went wrong’ is dragging EA’s culture and public perception too far toward a focus on AI Safety, and causing great damage to all of EA (even the non-AI-Safety parts) when the OpenAI board saga blew up in Toner and McCauley’s faces.
Lessons are probably downstream of this diagnosis.
My general lesson aligns with Bryan’s recent post—man, is EA bad at communicating what it is. Despite the OpenAI fiasco not being an attempted EA coup motivated by Pascal’s-mugging longtermist concerns, a great many people seem to have that as their ‘cached explanation’ of what went on. That feels like a big own goal, and an avoidable one.
Also on OpenAI: I think it’s bad that people like Joshua Achiam, who do good work there, seem to really dislike EA. That’s a really bad sign—it feels like the AI Safety community could perhaps have done more not to alienate people like him.