The thing that stands out to me as clearly going wrong is the lack of communication from the board during the whole debacle. Given that the final best guess at the reasoning for their decision seems like something they could have explained[1], it does seem like an own goal that they didn't try to do so at the time.
They were getting clear pressure from the OpenAI employees to do this, for instance: it was one of the main complaints in the employee letter, and from talking to a couple of OAI employees I'm fairly convinced that this was sincere (i.e. they were just as in the dark as everyone else, and this was at least one of their main frustrations).
I've heard a few people draw a comparison to other CEO-stepping-down situations, where it's common for things to be relatively hush-hush and "taking time out to spend with their family". I think this isn't a like-for-like comparison, because in those cases it's usually a mutual agreement between the board and the CEO, for them both to save face and preserve the reputation of the company. In the case of a sudden unilateral firing it seems more important to have your reasoning ready to explain publicly (or even privately, to the employees).
It's possible, of course, that there are some secret details that explain this behaviour, but I don't think there's any reason to be overly charitable in assuming this. If there was some strategic tradeoff that the board members were making, it's hard to see what they were trading off against, because they don't seem to have ended up with anything in the deal[2]. I also don't find "safety-related secret" explanations that compelling, because I don't see why they couldn't have said this (that there was a secret, not what it was). Everyone involved was very familiar with the idea that AI safety infohazards might exist, so this would have been a comprehensible explanation.
If I put myself in the position of the board members, I can much more easily imagine feeling completely out of my depth in the situation that unfolded and ill-advisedly doubling down on this strategy of keeping quiet. It's also possible they were getting bad advice to this effect, as lawyers tend to tell you to keep quiet, and there is general advice out there to "not engage with the Twitter mob".
Several minor fibs from Sam, saying different things to different board members to try to manipulate them. This does technically fit with the "not consistently candid" explanation, but that was very cryptic without further clarification and examples.
To frame this the other way: if they had kept quiet and then been given some lesser advisory position in the company afterwards, you could more easily reason that some face-saving dealing had gone on.
I wish I could signal-boost this comment more. D'Angelo, Toner, and McCauley had a reason which was enough to persuade Ilya to take the drastic step of summarily removing Brockman from the board and then firing Sam, with no foresight and no communication whatsoever to the rest of the company, OpenAI stakeholders, or even Emmett (he threatened to resign unless they told him,[1] though I'm not sure where that ended up), which lost them legitimacy both internally and externally, and eventually lost them everything.
I honestly don't have a plausible reason for it, or a belief they might have had which would help square this circle, especially since I don't buy the "AGI has been achieved internally" stuff. I honestly think that your explanation, of not realising it was going to go so nuclear[2] and then just doing whatever the lawyers told them, is what happened. But if so, the lawyers' strategy of "say nothing, do nothing, just collect evidence and be quiet" was an absolute disaster, both for their own position and for EA's reputation and legitimacy as a whole.[3] It honestly just seems like staggering incompetence to me,[4] and the continued silence is the most perplexing part of the entire saga, and I'm still (obviously) exercised by it.
Toner and McCauley remain two of the five members of the advisory board of the Centre for the Governance of AI. Any implications are left as an exercise for the reader.
https://www.bloomberg.com/news/articles/2023-11-21/altman-openai-board-open-talks-to-negotiate-his-possible-return
But how could they not realise this???
The dangers of policy set by lawyers remind me of the heads of Harvard, UPenn, and MIT having a disaster in front of Congress at the Israel/Palestine/free-speech/genocide hearing, possibly because they were repeating legally cleared lines about what they could say to limit liability instead of saying "genocide is bad and advocating it is against our code of conduct".
Nothing has really changed my mind here.