My general thoughts on this: I’m mostly of the opinion that EA will survive this, barring something seriously wrong like the board members willfully lying or outright fraud by EAs, primarily because most of the criticism is directed at the AI safety wing, and EA is more than AI safety, after all.
Nevertheless, I do think the AI safety wing may not fare so well, and it may have just hit a key limit to its power. In particular, depending on how this goes, I could foresee a reduction in AI safety’s power and influence, and IMO this was completely avoidable.
I think a lot will depend on the board’s justification. If Ilya can say “we’re pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn’t be trusted to manage this safely” with proof, that might work. But then why not say that?[1]
If it’s just “we decided to go in a different direction”, then it’s bizarre that they took such a drastic step in the way they did: firing him and demoting Brockman with little to no notice, and without informing their largest business partner and funder.
I was actually writing up my AI-risk-sceptical thoughts and what EA might want to take from them, but I think I’ll leave that to one side for now, until I can approach it with a more even mindset.
[1] This is putting aside that I feel both you and I are sceptical that a new capability jump has emerged, or that scaling LLMs is actually a route to existential doom.
If Ilya can say “we’re pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn’t be trusted to manage this safely” with proof, that might work. But then why not say that?
I suspect this is because, quite frankly, the concerns they had about Sam Altman being unsafe on AI were backed by essentially no evidence beyond speculation on the EA/LW forums, which is nowhere near enough evidence in the corporate or legal world. The EA/LW standard of evidence for deciding that an AI risk concern is a big enough deal to investigate is very low, sometimes non-existent, and that simply does not work once you have to deal with companies and the legal system.
More generally, EA/LW is shockingly loose, sometimes non-existent, in its standards of evidence for AI risk, and that does not play well with the corporate/legal world.
This is admittedly a less charitable take than, say, Lukas Gloor’s.
Haha, I was just going to say that I’d be very surprised if the people on the OpenAI board didn’t have access to a lot more info than the people on the EA Forum or LessWrong, who are speculating about the culture and leadership at AI labs from the sidelines.
TBH, if you put randomly selected EAs from a movement of thousands of people in charge of the OpenAI board, I would be very concerned that a non-trivial fraction of them would make decisions the way you describe. That’s something EA opinion leaders could maybe think about and address.
But I don’t think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I’m strongly disagreeing with the idea that it’s likely that the board “basically had no evidence except speculation from the EA/LW forum”. I think one thing EA is unusually good at – or maybe I should say “some/many parts of EA are unusually good at” – is hiring people for important roles who think for themselves and have generally good takes about things and acknowledge the possibility of being wrong about stuff. [Not to say that there isn’t any groupthink among EAs. Also, “unusually good” isn’t necessarily that high of a bar.])
I don’t know for sure what they did or didn’t consider, so this is just me going off of my general sense for people similar to Helen or Tasha. (I don’t know much about Tasha. I’ve briefly met Helen but either didn’t speak to her or only did small talk. I read some texts by her and probably listened to a talk or two.)
While I generally agree that they almost certainly have more information on what happened (which is why I’m not really certain of this theory), my main reason for it is that AI safety as a cause largely got away with incredibly weak standards of evidence for a long time, roughly until the deep learning era from 2019 onwards, especially with all the evolution analogies, and even now it still tends to have very low standards (though I do believe it’s slowly improving). This probably influenced a lot of EA safetyists like Ilya, who almost certainly absorbed the norms of the AI safety field, one of which is that only a very low standard of evidence is needed to make big claims, and that is going to conflict with corporate/legal standards of evidence.
But I don’t think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I’m strongly disagreeing with the idea that it’s likely that the board “basically had no evidence except speculation from the EA/LW forum”. I think one thing EA is unusually good at – or maybe I should say “some/many parts of EA are unusually good at” – is hiring people for important roles who think for themselves and have generally good takes about things and acknowledge the possibility of being wrong about stuff. [Not to say that there isn’t any groupthink among EAs. Also, “unusually good” isn’t necessarily that high of a bar.])
I agree with this weakly, in the sense that being high up in EA is at least a slight update towards someone actually thinking things through and being able to make a real case. My disagreement is that this effect is probably not strong enough to wash out the cultural effects of operating in a cause area where, for many reasons, the only standard of evidence people need to meet, and get rewarded for, is long-winded blog posts.
Also, the board second-guessed its decision, which is evidence for the theory that they couldn’t make a case that met the standard of evidence of a corporate/legal setting.
If it were any other cause, say GiveWell or some other causes in EA, I would trust them much more to have good reason. But AI safety has been so reliant on very low to non-existent standards of evidence and epistemics that they probably couldn’t explain themselves in a way that would hold up to the strictness of a corporate/legal standard of evidence.
Edit: The firing wasn’t because of safety-related concerns.
Why did you unendorse?
I unendorsed primarily because, apparently, the board didn’t fire him over safety concerns, though I’m not sure this is accurate.