When will we learn? I feel we haven’t taken the lessons from SBF seriously, given what happened at OpenAI and the split in the community over support for Altman and his crazy projects.
Huh? What’s the lesson from FTX that would have improved the OpenAI situation?
Don’t trust loose-cannon individuals? Don’t revere a single individual and trust him to decide the fate of such an important org?
To the extent that EA can be considered a single agent that can learn and act, I feel like ‘we’ just made an extraordinary effort to remove a single revered individual, an effort that most people regard as extremely excessive. What more would you have had the board do? I can see arguments that it could have been done more skillfully (though these seem like Monday-morning quarterbacking, and are made on incomplete information), but the magnitude and direction seem like what you are looking for?
The board did great; I’m very happy we had Tasha and Helen on the board to make AI safety concerns prevail.
What I’ve been saying from the start is that this opinion isn’t what I’ve seen in Twitter threads within the EA/rationalist community (I don’t give much credit to tweets, but I can’t deny the role they play in shaping AI safety culture), or even on the EA Forum, Reddit, etc. Quite the opposite, actually: people advocating for Altman’s return and heavily criticizing the board for its decision (I don’t like the shadiness surrounding the board’s decision, but I nevertheless think it was a good one).