I think a lot will depend on the board's justification. If Ilya can say "we're pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn't be trusted to manage this safely" with proof, that might work. But then why not say that?[1]
If it's just "we decided to go in a different direction", then it's bizarre that they took such a drastic step in the way they did: firing him and demoting Brockman with little to no notice, and without informing their largest business partner and funder.
I was actually writing up my AI-risk sceptical thoughts and what EA might want to take from that, but I think I might leave that to one side for now until I can approach it with a more even mindset.
Putting aside that I think both you and I are sceptical that a new capability jump has emerged, or that scaling LLMs is actually a route to existential doom.
If Ilya can say "we're pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn't be trusted to manage this safely" with proof, that might work. But then why not say that?
I suspect this is because, quite frankly, the concerns they had about Sam Altman being unsafe on AI had basically no evidence behind them except speculation from the EA/LW forums, which is nowhere near enough evidence in the corporate or legal world. The EA/LW standard of evidence for deciding an AI risk is a big enough deal to investigate is very low, sometimes non-existent, and that simply does not work once you have to deal with companies or the legal system.
More generally, EA/LW is shockingly loose, sometimes non-existent, in its standards of evidence for AI risk, which doesn't play well with the corporate/legal system.
This is admittedly a less charitable take than, say, Lukas Gloor's take.
This is admittedly a less charitable take than, say, Lukas Gloor's take.
Haha, I was just going to say that I'd be very surprised if the people on the OpenAI board didn't have access to a lot more info than the people on the EA Forum or LessWrong, who are speculating about the culture and leadership at AI labs from the sidelines.
TBH, if you put a randomly selected EA from a movement of thousands of people in charge of the OpenAI board, I would be very concerned that a non-trivial fraction of them would make decisions the way you describe. That's something that EA opinion leaders could maybe think about and address.
But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. In particular, I'm strongly disagreeing with the idea that the board likely "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually good at (or maybe I should say some/many parts of EA are unusually good at) is hiring people for important roles who think for themselves, have generally good takes about things, and acknowledge the possibility of being wrong about stuff. (Not to say that there isn't any groupthink among EAs. Also, "unusually good" isn't necessarily that high of a bar.)
I don't know for sure what they did or didn't consider, so this is just me going off of my general sense of people similar to Helen or Tasha. (I don't know much about Tasha. I've briefly met Helen but either didn't speak to her or only made small talk. I've read some texts by her and probably listened to a talk or two.)
While I generally agree that they almost certainly have more information on what happened, which is why I'm not really certain of this theory, my main reason here is that, for the most part, AI safety as a cause basically managed to get away with incredibly weak standards of evidence for a long time, until the deep learning era from 2019 onwards, especially with all the evolution analogies, and even now it still tends to have very low standards (though I do believe it's slowly improving). This probably influenced a lot of EA safetyists like Ilya, who almost certainly imbibed the norms of the AI safety field, one of which is that a very low standard of evidence suffices to claim big things, and that's going to conflict with corporate/legal standards of evidence.
But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. In particular, I'm strongly disagreeing with the idea that the board likely "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually good at (or maybe I should say some/many parts of EA are unusually good at) is hiring people for important roles who think for themselves, have generally good takes about things, and acknowledge the possibility of being wrong about stuff. (Not to say that there isn't any groupthink among EAs. Also, "unusually good" isn't necessarily that high of a bar.)
I agree with this weakly, in the sense that being high up in EA is at least a slight update towards them actually thinking things through and being able to make actual cases. My disagreement here is that this effect is probably not strong enough to wash away the cultural effects of operating in a cause area where, for many reasons, they get rewarded without needing to meet any standard of evidence beyond long-winded blog posts.
Also, the board second-guessed its decision, which would be evidence for the theory that they couldn't make a case that actually met the standard of evidence for a corporate/legal setting.
If it were any other cause, like say GiveWell or some other causes in EA, I would trust much more that they have good reason. But AI safety has been so reliant on very low to non-existent standards of evidence and epistemics that they probably couldn't explain themselves in a way that would meet the strictness of a corporate/legal standard of evidence.
Edit: The firing wasn't because of safety-related concerns.
Why did you unendorse?
I unendorsed primarily because, apparently, the board didn't fire him over safety concerns, though I'm not sure this is accurate.