Reading MacAskill’s AMA from 4 years ago about what would kill EA, I can’t help but find his predictions chillingly realistic!
The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism) = The OpenAI reshuffle and the general focus on AI safety have made the mainstream public more wary of EA
A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate) = SBF debacle!
Fizzle—it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion = This one hasn’t happened yet, but it’s the obvious structural risk and could still occur
When will we learn? I feel that we haven’t taken the lessons from SBF seriously, given what happened at OpenAI and the split in the community over support for Altman and his crazy projects. Also, as a community builder who talks to a lot of people and does outreach, I hear a lot of harsh criticism of EA (‘self-obsessed tech bros wasting money’), and while it’s easy to assume these people speak out of ignorance, ignoring the criticism won’t make it go away.
I would love to see more worry and more action around this.
How specifically? Seems to me you could easily argue that SBF should make us more skeptical of charismatic leaders like Sam Altman.
Absolutely, and yet a big part of the EA community seems to be pro-Altman! That was my point; I might not have been clear enough. Thanks for drawing attention to this.
What makes you think a big part of EAs are pro-Altman? My impression is that this is not true, and I cannot come up with any concrete example.
It’s what I’ve seen. Happy to be wrong. It’s an impression—I didn’t keep a notebook recording every time someone supported Altman, but I’ve read it quite a lot; just like you, I can’t prove it.
I’m happy to be wrong—not sure downvoting me to hell will make the threats mentioned in my quick take go away though.
I don’t feel like they are pro-Altman in general, but I’m not sure. Maybe they were in the past, when OpenPhil funded OpenAI.
Huh? What’s the lesson from FTX that would have improved the OpenAI situation?
Don’t trust loose-cannon individuals? Don’t revere a single individual and trust him with deciding the fate of such an important org?
To the extent that EA can be considered a single agent that can learn and act, I feel like ‘we’ just made an extraordinary effort to remove a single revered individual, an effort that most people regard as extremely excessive. What more would you have had the board do? I can see arguments that it could have been done more skillfully (though these feel like Monday-morning quarterbacking, made on incomplete information), but the magnitude and direction seem like what you are looking for?
The board did great; I’m very glad we had Tasha and Helen on the board to make AI safety concerns prevail.
What I’ve been saying from the start is that this opinion isn’t what I’ve seen in Twitter threads within the EA/rationalist community (I don’t give much credence to tweets, but I can’t deny the role they play in the AI safety cultural landscape), or even on the EA Forum, Reddit, etc. Quite the opposite, actually: people advocating for Altman’s return and heavily criticizing the board for its decision (I don’t like the shadiness surrounding how the board reached its decision, but I nevertheless think it was a good one).