The board of directors of OpenAI, Inc, the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.
A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.” [emphasis added]
Found this on Reddit: Anxious_Bandicoot126 comments on Sam Altman is leaving OpenAI (reddit.com)
Obviously just speculation for now, but seems plausible. The moment the GPT store was released I thought:
“wow that’s really good for business … wow that’s really bad for alignment”
I’m skeptical.
I’ve read their other comments. The initial comment sounded somewhat plausible, but their other comments sounded less like what I’d expect someone in that position to sound like.
This seems the most plausible speculation so far, though probably also wrong: https://twitter.com/dzhng/status/1725637133883547705. If you think it’s more plausible than misalignment with OpenAI’s mission, you could make some mana on Manifold.
Worth noting that of the 4 remaining board members, 2 are associated with EA: Helen Toner (CSET) and Tasha McCauley (EV UK board member)
This is a critically important point to hold in mind if the reason for the move seems to be due to safety concerns as opposed to personal malpractice/deceiving the board[1]
I don’t know what the hell happened. I guess further clarifications on the decision-making process and corporate landscape will be known tomorrow or, more likely, early next working week
I’ve voiced concerns before that EA is unaware that it can be drawn into ‘one-way fights’ sometimes, and this feels like another such moment. The Silicon Valley tech-twitter scene[2] has exploded over this, and so far EA is not coming out well in their eyes from what I can see. I think the days of “e/acc” being a meme movement are rapidly drawing to a close, and EA might find itself in a hostile atmosphere in what used to be one of the most EA-friendly places in the world.
Again, early speculations, but be careful out there Bay-Area EAs. Keep your wits about you.
Really strange that, while this looks like the most likely reason, it’s not really reflected in the language
Perhaps one of the few cases where Twitter might be an accurate representation of thoughts on the ground
Ironically, this particular set of comments is doing the rounds on Twitter with some banal commentary. https://twitter.com/tobi/status/1726132247227740623?t=Qu5UR4QKDz5anypwmuANwQ&s=19
🙄🙄
Yeah, this is one of the few times where I believe the EAs on the board likely overreached: they probably didn’t give enough evidence to justify their excoriating statement that Sam Altman was dishonest, and he might be coming back to lead the company.
I’m not sure how to react to all of this, though.
Edit: My reaction is just WTF happened, and why did they completely play themselves? Though honestly, I just believe that they were inexperienced.
Kudos for being uncertain, given the limited information available.
(Not something one can say about many of the other comments to this post, sadly.)
Yeah, the tech scene really seems to come down on the side of Sam Altman already. Let’s hope the board had good grounds and will be able to demonstrate evidence of dishonesty soon
I’ve shared very similar concerns for a while. The risk of successful narrow EA endeavors that lack transparency backfiring in this manner feels very predictable to me, but many seem to disagree.
There’s some related discussion here on LW.
Do these explanations seem at odds to you for some reason? The language used in the statement does not say anything about personal malpractice/deception, just that he was “not consistently candid in his communications with the board”. It seems entirely possible to me, and indeed probably most likely given what else we now know, that the board is alleging dishonesty re: safety-related commitments he made, or something like this.
Adam D’Angelo also worked at Facebook with Moskovitz from 2004 to 2008 (incl. as CTO 2006-2008) and is on the board of Asana
Twitter is full of people laying into EA for being behind Sam Altman’s firing. However, if it’s true that this happened because the board thought Altman was trying to take the company in an ‘unsafe’ direction then I’m glad they did this. And I’m glad that for the time being considerations other than ‘shareholder value’ are not the defining motivation behind AI development.
This is incredibly short-sighted. The board’s behavior was grossly unprofessional and the accompanying blog post was borderline defamatory. And Altman is one of the most highly-connected and competent people in the Bay Area tech scene. Altman can easily start another AI company; in fact, media outlets are now reporting that he’s considering doing just that, or might even return to OpenAI by pressuring the board to resign.
In fact, Manifold is at 50% that Altman will return as CEO, and at 38% that he’ll start another AI company. It seems that the board was unable to think even just two steps ahead if they thought this would end well.
Altman starting a new company could still slow things down a few months. Which could be critically important if AGI is imminent. In those few months perhaps government regulation with teeth could actually come in, and then shut the new company down before it ends the world.
You had no evidence to justify that claim back when you made it, and as new evidence is released, it looks increasingly likely that the claim was not only unjustified but also wrong (see e.g. this comment by Gwern).
Latest (48 hours in): OpenAI Board Stands by Decision to Force Sam Altman Out of C.E.O. Role
After 48 hours of furious negotiations, the A.I. company said Mr. Altman would not return to his job and that former Twitch C.E.O. Emmett Shear would be its interim boss.
Oh wow, that last paragraph seems like a good sign that they have good grounds for these statements they’re not walking back
It seems odd for them to say that given that there were relatively credible rumours that the board was negotiating with Sam about a potential return (which we can assume broke down as they looked for an alternative CEO).[I’ve retracted the above, as it seems inaccurate with the new hiring of Shear and reports that the board just went silent in response to pressure from investors and Microsoft]
Can they not share some of the reasoning though? Like, sure, some of it may involve corporate proprietary knowledge and NDAs, but part of the reason there was such a blowback to the decision was that it seemed to come out of nowhere. People assumed another shoe was going to drop because of the manner of the board’s decision, and then it just hasn’t?
The new CEO has literally just promised to:
So he’s accepted the position without even knowing why they did what they did at a high level.[seems false, see Joshua’s reply below]While the board probably have the right to do what they did via the OpenAI Charter, the fact they are not sharing the reasons for doing so, at either a high or low level, internally or externally, means that they have lost and are continuing to lose a lot of credibility and legitimacy, regardless of the legal facts of the case.
Why do you think that the rumors that the board was negotiating with Sam were “relatively credible”? At this point, they seem more likely than not to be false, eg either random fake news or a PR spin by pro-Altman VCs.
I mean I definitely agree that there’s a fog-of-war situation going on. Given some new updates here, I’ve retracted that paragraph.
Some original points were:
Things like this: https://nitter.net/emilychangtv/status/1726337590901796927#m. Yes, distrust the media and so on, but it seemed to be the main state of play.
Altman’s photo wearing the guest pass seems like an obvious “I’m coming back as CEO or not at all” implication. He was obviously in the OpenAI offices for some reason, and it seems weird for it not to be negotiations with the board over something, as opposed to collecting his belongings.
Roon had a now-deleted tweet along the lines of “crossed the rubicon, troops marching on rome”, which again implies there was an internal OpenAI move to get Sam back.
I still find the board’s silence pretty weird, and the big missing piece here.
I stand by my current belief that the radio-silence is currently damaging for the perception and support of the AI Safety cause
Update on point 2: https://nitter.net/ashleevance/status/1726457222169829838#m
It seems that the board wasn’t present when he visited. I guess what seemed to be going on were two different factions: 1) Mira Murati as interim CEO was trying to find some way to get Altman and Brockman back 2) The board was trying to find its own new CEO choice asap to foreclose any chance of Sam returning to the position
I think you are over-responding when we basically have no good information, as illustrated by the fact that you keep having to walk back claims you have made only a short time before
I take your point here John. There’s a lot that’s still to come out about the events of the weekend, and I’ve probably been a bit trigger-happy with responses. I’m going to step back from this thread and possibly the Forum as a whole for a little bit.
I do want to note that I picked up a somewhat hostile/adversarial tone to your comment (I’m not saying this was intentional). To ‘keep having to walk back claims’ seems a bit of an implied overclaim to me, especially as from my PoV it only happened twice: once after seeing Ashlee Vance’s updated reporting, and once after Joshua’s comment.
‘Walking back’ also seems more adversarial than just ‘corrected mistakes’ (compare ‘you keep having to walk back claims’ vs ‘you made corrections twice’). In any case, while the reporting has changed, a lot of my intuitions and feelings haven’t shifted much. I still find the board’s complete silence strange, and think this could be a precarious moment for AI Safety.
I don’t think this is correct, from the same statement:
Thanks for this, have retracted that sentence.
Feels like some version of the reasoning should be made available to investors/Microsoft/the public in some short-term timeframe though? I feel like that would do a fair amount to quell some of the reactions
I would like that, however, how much they care about external reactions is unclear to me
How on earth does one reconcile this with the fact that Ilya has now publicly tweeted that he deeply regrets his involvement in the board’s actions, and that he has signed the open letter threatening to quit unless the board resigns?
An open letter from 500 of ~700 OpenAI employees to the board, calling on them to resign (also on The Verge).
Suggests there’s an enormous amount of bad feeling about the decision internally. It also seems like a bad sign that the board was unwilling to provide any ‘written evidence’ of wrongdoing, though maybe something will appear in the coming days.
But all told it looks pretty bad for EA. Seems like there’s an enormous backlash online—initially against OpenAI for firing everyone’s favourite AI CEO, and now against “EA” “woke” “decelerationist” types.[1][2]
It’s also seemed to trigger a flurry of tweets from Nick Cammarata, saying that EAs are overwhelmingly self-flagellating and self-destructive and that EA caused him and his friends enormous harm. I think his claims are flatly wrong (though they may be true for him and his friends), and some of the replies seem to agree, but it has 500K views as I publish.
Seems like the whole episode (combined with at least one prominent EA seemingly saying it’s emblematic of something dreadful and toxic) has the potential to cause a lot of reputational damage, especially if the board chooses not to clarify its actions (although it’s possibly too late for that).
https://x.com/brian_armstrong/status/1725924114190536825?s=46
https://x.com/atroyn/status/1725937945444757720?s=46
It is a disaster for EA. We need the EAs on the board to explain themselves, and if they made a mistake, just admit that they made a mistake and step down.
“Effective altruism” depends on being effective. If EA is just putting people in charge of other peoples’ money, they make decisions that seem like bad decisions, they never explain why, refuse to change their mind whatever happens… that’s no better than existing charities! This is what EA was supposed to prevent! We are supposed to be effective. Not to fire the best employees and destroy a company that is putting an incredible amount of effort into doing responsible things.
I might as well give my money to the San Francisco Symphony. At least they won’t spend it ruining things that I care about.
Please, anyone who knows Helen or Tasha, ask them to reconsider.
I don’t think that they owe the EA community an explanation (it would be nice, but they don’t have to). The only people who have a right to demand that are the people who appointed them there and the OAI staff.
https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money
>I might as well give my money to the San Francisco Symphony. At least they won’t spend it ruining things that I care about.
It is your right, but I don’t know how this is related? How have they spent EA donors’ money? If you are referring to the Open Phil $30M grant, Open Phil doesn’t take donations, so they can donate to whoever they want and don’t need to explain themselves. It would have been different if OpenAI were spending GiveWell’s money.
I make this speculative comment with no inside information
There may be a world in which this is net positive. If EAs have been wrong the whole time about the best approach being the “narrow” or “inside” game, this might force EAs into being mostly adversarial vs. Tech accelerationists and many in silicon valley in general. This could be more effective at stopping or slowing doom in the medium to long term than trying to force safety from the inside against strong market forces.
It could even help the EA AI risk crowd come more alongside the sentiment of the general public, after the initial reputational loss simmers down.
I’m not saying this is even likely, it’s just a different take.
FYI — lots of relevant links collected here: OpenAI: The Battle of the Board and OpenAI: Facts from a Weekend
Very interested to find out some of the details here:
Why now? Was there some specific act of wrongdoing that the board discovered (if so, what was it?), or was now an opportune time to make a move that the board members had secretly been considering for a while, or etc?
Was this a pro-AI-safety move that EAs should ultimately be happy about (ie, initiated by the most EA-sympathetic board members, with the intent of bringing in more x-risk-conscious leadership)? Or is this a disaster that will end up installing someone much more focused on making money than on talking to governments and figuring out how to align superintelligence? Or is it relatively neutral from an EA / x-risk perspective? (Update: first speculation I’ve seen is this cautiously optimistic tweet from Eliezer Yudkowsky)
Greg Brockman, president of the board, is also stepping down. How might this be related, and what might this tell us about the politics of the board members and who supported/opposed this decision?
Side note: Greg held two roles: chair of the board, and president. It sounds like he was fired from the former and resigned from the latter role.
Regarding the second question, I made this prediction market: https://manifold.markets/JonasVollmer/in-a-year-will-we-think-that-sam-al?r=Sm9uYXNWb2xsbWVy
Nice! I like this a lot more than the chaotic multi-choice markets trying to figure out exactly why he was fired.
From this article:
If this is true, then I think the board has made a huge mess of things. They’ve taken a shot without any ammunition, and not realised that the other parties can shoot back. Now there are mass resignations, Microsoft is furious, seemingly all of silicon valley has turned against EA, and it’s even looking likely that Altman comes back.
It seems like they didn’t think they had to act like the boards of other billion dollar companies (notifying your partners of big decisions, being literal instead of euphemistic when discussing reasons for firing, selling your decisions with PR, etc). But often norms and customs happen for a reason, and corporate governance seems to be no exception.
I think it’s premature to judge things based on the little information that’s currently available. I would be surprised if there weren’t reasons for the board’s unconventional choices. (I’m not ruling it out though, that what you say ends up being right)
How much of this is “according to anonymous sources”?
The Board was deeply aware of intricate details of other parties’ will and ability to shoot back. Probably nobody was aware of all of the details, since webs of allies are formed behind closed doors and rearrange during major conflicts, and since investors have a wide variety of retaliatory capabilities that they might not have been open about during the investment process.
What is your current view given how things have developed? Why do you keep putting forward strong views that are based on very bad information?
The board must have thought things through in detail before pulling the trigger, so I’m still putting some credence on there being good reasons for their move and the subsequent radio silence, which might involve crucial info they have and we don’t.
If not, all of this indeed seems like a very questionable move.
I agree. At first I thought there must be a sex scandal or embezzlement or something. But if there’s no malfeasance here, the board has made a huge mess of things.
It’s embarrassing for the EA movement, too. It’s another SBF situation. Some EAs get control over billions of dollars, and act completely irresponsibly with that power.
Probably disagree? Hard to say for sure since we lack details, but it’s not obvious to me that the board acted irresponsibly, let alone to the degree that SBF did. I guess one, it seems fairly likely that Ilya Sutskever initiated the whole thing, not the EAs on the board. And two, the board members have fiduciary duties to further the OAI nonprofit’s mission, i.e., to ensure that AGI benefits all of humanity. (They do not have a duty to ensure OAI is valued at billions of dollars, except in so far as that helps further its mission.)
If the board members had reason to believe that Sam Altman was acting contrary to OAI’s mission of ensuring that AGI benefits all humanity, perhaps moving to fire him was the responsible thing to do (even if it turns out to be bad ex post), and what has been irresponsible are the efforts of investors and others to try to reinstate him. I guess we will know better within the next weeks, but I think it’s premature to say that the board acted irresponsibly right now.
This could end up also having really bad consequences for the goals of EA, so it’s perhaps similar to FTX in that way (but things are still developing and it might somehow turn out well).
Or maybe you feel like the board displayed inexperience and that they were in over their heads. I can probably get behind that based on how things look right now (but there’s a chance we learn more details later that put things into a different light).
Still, I feel like inexperience is only unforgivable if it comes combined with hubris. Many commenters seem to think that there must have been hubris involved on the board’s part. To me, that feels like it’s why people seem so mad about this. “Why else would the board have the audacity to oust such a successful and respected CEO, if they cannot point to any smoking-gun-type thing that justifies firing him to the world?”
But notice how that attitude – being risk averse and inclined to just let the experienced tech CEO do his thing without pushback (and possibly amass leverage over the rest of the company and the board by starting or investing in compute startups or stuff like that, as some of the rumors seem to indicate) – is also dangerous and potentially “irresponsible.” It’s not the by-default safe option, after forming concerns about his suitability, to let Sam Altman continue to cash in on the reputational benefits of running OpenAI with the seal of approval from this public-good, non-profit, beneficial-mission-focused board structure that OpenAI has installed.

This board structure has, from the very start, served as a kind of seal of approval that guarantees a significant amount of goodwill from people who would look at OpenAI skeptically and think “these tech people put the world at risk to attain power/money/the top spot in history.” EAs were arguably quite crucial (via getting Elon Musk to think about AI risk, as well as some other pathways) in helping to set up OpenAI with such a board structure, and with the reputational protection against scrutiny from a concerned public (especially now that AI risk is gaining traction after ChatGPT spooked a bunch of people) that comes with it.

So, I mainly want to point out that it’s not obviously “the responsible choice” to not step in when Sam Altman would otherwise de facto keep benefitting from this board structure’s seal of approval, especially if the board that was put in place no longer feels comfortable with his leadership.
To be clear, even if I’m right about the above, this isn’t saying that there wouldn’t have been better ways to handle this. Also, I want to flag that I don’t know what the board members were actually thinking – maybe they did think of this more as a coup and less as “if we put on our board-member hats and try to serve our role as well as possible, what should we do?” In that case, I would disapprove. I don’t know which one applies.
“OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.
Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.”
Kara Swisher also tweeted:
“More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side.”
“The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: [Sam will] have a new company up by Monday.”
Apparently Microsoft was also blindsided by this and didn’t find out until moments before the announcement.
“You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.
When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
https://twitter.com/AISafetyMemes/status/1725712642117898654
Not sure how important this is: Judging from the behavior of Satya Nadella during OpenAI’s dev day 12 days ago, Microsoft quite likely didn’t see that coming at that moment.
Thought this was a good article on Microsoft’s power: https://archive.li/soZMQ
It seems like the board did not fire Sam Altman for safety reasons, but for other reasons instead. Utterly confusing, and IMO it demolishes my previous theory, though a lot of other theories also lost out.
Sources below, with their archive versions included:
https://twitter.com/norabelrose/status/1726635769958478244
https://twitter.com/eshear/status/1726526112019382275
https://archive.is/dXRgA
https://archive.is/FhHUv
This is mere speculation, but another group I’m on posited this might be part of it:
Sam Altman’s sister, Annie Altman, claims Sam has severely abused her
This doesn’t seem impossible given the timing, but I’d still be very surprised if this was what the board’s decision was about. (I’m especially skeptical that it would be exclusively about this.) For one thing, the board announcement uses the wording “hindering [the board’s] ability to exercise its responsibilities.” This doesn’t seem like the wording someone would choose if their decision was prompted by investigating events that happened more than twenty years ago and which don’t directly relate to beneficial use of AI or running a company. (Even in the unlikely case where the board decided to open an investigation into abuse allegations and then caught Sam Altman lying about details related to that, it’s not apparent why they would describe these hypothetical lies as “hindering [the board’s] ability to exercise its responsibilities,” as opposed to using wording that’s more just about “lost the board’s trust.”)

Besides, I struggle to picture board members starting an investigation solely based on one accusation from when the person in question was still a teenager. I’m not saying that these accusations are for sure unimportant – in fact, I said the opposite on that LW comment thread. It’s just that… Despite the good advice here about how boards should keep a close eye on leadership, I don’t think it’s a board’s role or comparative advantage to focus on investigating stuff like that. Especially once they already have confirmed their standing CEO and in the absence of more direct red flags. (It would maybe be a bit different if this was a CEO selection process and Sam Altman was a new applicant that board members had only little information about.)

One option I can see is that, maybe if the board already had other reasons to be concerned, then learning about the accusations could give them further fuel for investigations. Alternatively, though, it seems much more likely to me that this was about other things entirely.
(Perhaps something related to publicly announcing that OpenAI “created AGI internally” and then backpedaling it, while also saying that short AI timelines are best for humanity even though an alignment solution is far from in sight?)
Wasn’t that just a throwaway joke on Reddit?
I very much doubt he was fired over the allegations. However, if the allegations are true, it would raise the likelihood that he engaged in other sketchy or unethical behaviour that we don’t know about.
“not consistently candid” seems to be an implication that he was deceptive to the board about something, at least. It could have just been about strategy, or it could have involved personal misbehaviour as well.
Yeah, now that more information has come to light, it seems to be clearly about disagreements about how to pursue the OpenAI mission. I wonder if the board can point to at least one objectively outrageous thing that Altman was deceptive about, or whether it was more subtle stuff that added up but is hard to convey to outsiders. For instance, I could imagine that they got “empty promises” vibes from Altman where he was placating the most safety-concerned voices at OpenAI by saying he’ll take such and such precautions later in the future, but then kept doing things that are at odds with taking safety seriously, until people had enough and felt deceived and like they could no longer trust his assurances. In this scenario, it’s going to be difficult for the board and for Sutskever to convey that their decision wasn’t some overreaction. (FWIW, I think it can be totally justifiable to fire someone over weasel-like assurances about mission alignment that never led to any visible actions – it’s just tricky that there’s always some plausible deniability where the CEO can say “I was going to take action later, like I said; it’s just that you people are insufficiently pragmatic and don’t have experience dealing with investors like Microsoft; and anyway, the tech isn’t risky enough yet and you all are freaking out.”)
It would seem like a bad move to openly say the “not consistently candid” and “hindering responsibilities” thing if there was no objective deception they could point to. Even if they don’t state what happened publicly, the board has to be able to defend its actions to its employees and to its partners at Microsoft.
My impression is that this type of public admonishment is rather rare for the ousting of a CEO, and it would be more typical to talk about a “difference of vision” or something similarly bland. I think either they have a clear cut case against him, or the board has mishandled the situation.
We are at a critical time as we stand: either we have the Board yielding to the pleas/threats of the workers, or we have inexperienced actors at the helm of the driving force in AI. What do you think organizations like EA can do in this regard? Should we just sit and watch, or should we regard the threat as non-existent? Because to me, having this sort of people managing the AI space is a ticking time bomb.
Interesting. The press release defines the board’s governance mission as “ensure that artificial general intelligence benefits all humanity,” and then asserts that Sam hindered that mission.
I suppose one could interpret that as a shift towards greater caution and governance in the name of AI safety, or a shift towards greater speed/open-sourcing if the board views their mission through a lens of accelerationism and accessibility.
Or something entirely different… we’re digging into talmudic nuance here, and all of these are near-wild guesses.
It could be noteworthy that they chose to highlight Mira’s governance experience.
The latter part of the press release (not quoted above, but visible in the original here) also points out that the majority of board members hold no OpenAI equity, which could be a nod towards this being a move that sacrifices profitability for the sake of the mission. Again though, only a guess, and even if true it would still leave open the question of how the board is interpreting the mission.
Not too long an unemployment period, at five days; and on the other hand, not a bad endorsement.
The reinstatement of Altman as head of OpenAI took place under truly remarkable circumstances. Reportedly, 650 employees threatened to leave immediately, and investors threatened legal action against the ChatGPT creator. Unsurprisingly, Microsoft, the largest investor, owning 49% of the shares and pumping huge amounts of money into the company, had the most at stake. It was the tech giant that first expressed great dissatisfaction with Altman’s dismissal, and it even offered to create an AI division for him within Microsoft should OpenAI’s board of directors refuse to relent.
Just saw this on Hacker News as a response to Sam Altman Exposes the Charade of AI Accountability. The damage to EA’s reputation is hard to estimate, but perhaps real.
Here’s a Bloomberg article with a few more details.
https://archive.ph/sv8SH
Wow, that article is seriously dishonest and misleading throughout. What a mess.
Apropos of nothing, I’m reminded of this old update from CEA.
Can someone who downvoted explain why they downvoted?
Seemed not relevant enough to the topic, and too apt to be highly inflammatory, to be worthwhile to bring up.
What’s the lore behind that update? This was before I followed EA community stuff
My understanding, though I’m not sure the board ever publicly confirmed this, was they decided that Larissa was acting on behalf of Leverage Research, and hence contrary to the best interests of CEA, and they wanted to stop the entryism.
IIRC the official reason (or at least the thing that caused stuff to come to a head) was that Larissa and Kerry had been dating for multiple months but had never told the rest of leadership or the board about it.
This isn’t true.
Larissa and I did start dating shortly before we left CEA but we were told repeatedly and in writing that this was not a factor in Larissa’s departure.
We believe we followed CEA’s policy around co-workers dating and have never received any indication to the contrary from CEA.
I was told this dozens of times by many different employees. None of them were board members, but they all seemed to agree it was the thing that caused the conflict to escalate.
I think the disagreement here is that we followed the CEA policy and were told explicitly and in writing at the time by the board that our dating had nothing to do with their decision. That doesn’t mean staff weren’t upset.
I don’t know what “nothing to do” means. I do now believe that it had nothing legally to do with the firing, but it still seems like the thing that “brought things to a head”.
So did you do something wrong then, even if that wasn’t why you left? How long did it take you to tell the organisation that you were dating?
No, we didn’t do anything wrong. Like I said, we followed the policy.
People were upset that we were dating but not because there was some coverup or anything. Some folks had strategic disagreements with me and us dating made that a larger problem.
how long did it take you to tell the organisation that you were dating?
It was a short timeline. I don’t remember exactly but we told senior leadership and the board quite soon after we decided to start dating.
Less than a month?
Yes
If both of you leaving was performance-related, it’s sort of weird for you both to leave at the same time. Was it performance-related? Or did you both leave at the same time because you were dating? Can you say more about why you both left together?
I don’t know that this requires further scrutiny—not wanting to continue working at an organisation that fired your girlfriend seems like the default response.
did you declare to the rest of CEA that you were dating as soon as you started dating? If not, how long was the gap?
I do enjoy the secret Leverage spy stories as it makes my life seem more exciting than it is but they don’t ever make me feel very optimistic about EA epistemics.
Thanks for the response; would you mind sharing the reason the board gave for firing you?
Regarding epistemics, is Leverage still operating according to this plan, with samples below:
Ugh, I’m sorry. I didn’t mean to bring up conversations about current Leverage in this thread, as it’s very off-topic. I just thought it’d be instructive to include a link for the only other case I remember in recent memory of a board very clearly firing a CEO, when the much more normal thing to do in that context is to pretend the CEO resigned, or to leave it ambiguous.
I thought there were interesting parallels, that’s all. Didn’t mean to draw so much heat.
If Holden or other folks in EA blew up OpenAI, that ain’t gonna be good for the movement… fr fr
Is this AI safety related?
No, Sam Altman and the members of the OpenAI board all don’t work at MIRI/SIAI or FHI, so it doesn’t seem to have anything to do with AI safety.
inb4 OpenAI board put their whole bankroll short on Microsoft stock, will sell on Monday for XX billion and build their own chip factory. 😁
Is Helen Toner CIA, or just the kind of person who talks to State Department employees in secure locations several times a quarter? She has a degree in CIA from CIA university and works at an institute that is hand in glove with USFEDGOV.
https://cset.georgetown.edu/staff/helen-toner/
https://en.m.wikipedia.org/wiki/Center_for_Security_and_Emerging_Technology