From the linked article:
We’re announcing three new members to our Board of Directors as a first step towards our commitment to expansion: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, former EVP and General Counsel at Sony Corporation and Fidji Simo, CEO and Chair of Instacart. Additionally, Sam Altman, CEO, will rejoin the OpenAI Board of Directors.
...
Dr. Sue Desmond-Hellmann is a non-profit leader and physician. Dr. Desmond-Hellmann currently serves on the Boards of Pfizer and the President’s Council of Advisors on Science and Technology. She previously was a Director at Procter & Gamble, Meta (Facebook), and the Bill & Melinda Gates Medical Research Institute. She served as the Chief Executive Officer of the Bill & Melinda Gates Foundation from 2014 to 2020. From 2009 to 2014 she was Professor and Chancellor of the University of California, San Francisco (UCSF), the first woman to hold the position. She also previously served as President of Product Development at Genentech, where she played a leadership role in the development of the first gene-targeted cancer drugs.
...
Nicole Seligman is a globally recognized corporate and civic leader and lawyer. She currently serves on three public company corporate boards—Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines, Inc. Seligman held several senior leadership positions at Sony entities, including EVP and General Counsel at Sony Corporation, where she oversaw functions including global legal and compliance matters. She also served as President of Sony Entertainment, Inc., and simultaneously served as President of Sony Corporation of America. Seligman also currently holds nonprofit leadership roles at the Schwarzman Animal Medical Center and The Doe Fund in New York City. Previously, Seligman was a partner in the litigation practice at Williams & Connolly LLP in Washington, D.C., working on complex civil and criminal matters and counseling a wide range of clients, including President William Jefferson Clinton and Hillary Clinton. She served as a law clerk to Justice Thurgood Marshall on the Supreme Court of the United States.
...
Fidji Simo is a consumer technology industry veteran, having spent more than 15 years leading the operations, strategy and product development for some of the world’s leading businesses. She is the Chief Executive Officer and Chair of Instacart. She also serves as a member of the Board of Directors at Shopify. Prior to joining Instacart, Simo was Vice President and Head of the Facebook App. Over the last decade at Facebook, she oversaw the Facebook App, including News Feed, Stories, Groups, Video, Marketplace, Gaming, News, Dating, Ads and more. Simo founded the Metrodora Institute, a multidisciplinary medical clinic and research foundation dedicated to the care and cure of neuroimmune axis disorders and serves as President of the Metrodora Foundation.
It looks like none of them have a significant EA connection, although Sue Desmond-Hellmann has said some positive things about effective altruism at least.
Impressions:
None of these seem to have the relevant AI governance expertise
Some nonprofit expertise
Mostly corporate expertise
I wonder what’s happening with the OpenPhil board seat
I’m pretty sure that’s gone now. I.e., the initial $30m-for-a-board-seat arrangement wasn’t actually legally binding with respect to future members of the board; it was just maintained by who the current members would allow on. So now that there are no EA-aligned board members, there is no pressure or obligation to add any.
I could be wrong about this but I’m reasonably confident
My understanding is that since Holden left there’s not been any formal “Open Phil board seat”
Just in case someone interested in this has not done so yet, I think Zvi’s post about it is worth reading.
https://thezvi.substack.com/p/openai-the-board-expands
Anyone with thoughts on what went wrong with EA’s involvement in OpenAI? It’s probably too late to apply any lessons to OpenAI itself, but maybe not too late elsewhere (e.g., Anthropic)?
At the risk of sounding contrarian, it’s really not clear to me that anything “went wrong”. From my outside perspective, it’s not like there was a clear mess-up on the part of EAs anywhere here, just a difficult situation managed to the best of people’s abilities.
That doesn’t mean it’s not worth pondering whether any aspect was handled badly, or more broadly what one can take away from this situation (although we should beware of over-updating on a single notable event). But, not knowing the counterfactuals, and absent a clear picture of what things “going right” would have looked like, it’s not evident that this should be chalked up as a failing on the part of EA.
The thing that stands out to me as clearly going wrong is the lack of communication from the board during the whole debacle. Given that the final best guess at the reasoning for their decision seems like something they could have explained[1], it does seem like an own goal that they didn’t try to do so at the time.
They were getting clear pressure from OpenAI employees to do this; it was one of the main complaints in the employee letter. And from talking to a couple of OAI employees, I’m fairly convinced that this was sincere (i.e. they were just as in the dark as everyone else, and this was at least one of their main frustrations).
I’ve heard a few people draw comparisons to other CEO-stepping-down situations, where it’s common for things to be relatively hush-hush and “taking time out to spend with their family”. I don’t think this is a like-for-like comparison, because in those cases it’s usually a mutual agreement between the board and the CEO, for both to save face and preserve the reputation of the company. In the case of a sudden unilateral firing, it seems more important to have your reasoning ready to explain publicly (or even privately, to the employees).
It’s possible, of course, that there are some secret details that explain this behaviour, but I don’t think there’s any reason to be overly charitable in assuming this. If there was some strategic tradeoff the board members were making, it’s hard to see what they were trading off against, because they don’t seem to have ended up with anything in the deal[2]. I also don’t find “safety-related secret” explanations that compelling, because I don’t see why they couldn’t have said this (that there was a secret, not what it was). Everyone involved was very familiar with the idea that AI safety infohazards might exist, so this would have been a comprehensible explanation.
If I put myself in the position of the board members I can much more easily imagine feeling completely out of my depth in the situation that happened and ill-advisedly doubling down on this strategy of keeping quiet. It’s also possible they were getting bad advice to this effect, as lawyers tend to tell you to keep quiet, and there is general advice out there to “not engage with the twitter mob”.
Several minor fibs from Sam, saying different things to different board members to try to manipulate them. This does technically fit with the “not consistently candid” explanation, but that phrase was very cryptic without further clarification and examples.
To frame this the other way: if they had kept quiet and then been given some lesser advisory position in the company afterwards, you could more easily infer that some face-saving dealing had gone on.
I wish I could signal-boost this comment more. D’Angelo, Toner, and McCauley had a reason which was enough to persuade Ilya to take the drastic move of summarily removing Brockman from the board and then firing Sam, with no foresight and no communication whatsoever to the rest of the company, OpenAI stakeholders, or even Emmett (he threatened to resign unless they told him,[1] though I’m not sure where that ended up), which lost them legitimacy both internally and externally, and eventually lost them everything.
I honestly don’t have a plausible reason for it, or a belief they might have had which would help square this circle, especially since I don’t buy the “AGI has been achieved internally” stuff. I honestly think that your explanation, of not realising it was going to go so nuclear,[2] and then just doing whatever the lawyers told them, is what happened. But if so, the lawyers’ strategy of “say nothing, do nothing, just collect evidence and be quiet” was an absolute disaster both for their own aims and for EA’s reputation and legitimacy as a whole.[3] It honestly just seems like staggering incompetence to me,[4] and the continued silence is the most perplexing part of the entire saga; I’m still (obviously) exercised by it.
Toner and McCauley remain as 2 members of the 5 person advisory board for The Centre for the Governance of AI. Any implications are left as an exercise for the reader.
https://www.bloomberg.com/news/articles/2023-11-21/altman-openai-board-open-talks-to-negotiate-his-possible-return
But how could they not realise this???
The dangers of policy set by lawyers remind me of the heads of Harvard, UPenn, and MIT having a disaster in front of Congress during the Israel/Palestine/free-speech/genocide hearing, possibly because they were repeating legally cleared lines about what they could say to limit liability, instead of saying “genocide is bad and advocating it is against our code of conduct”.
Nothing has really changed my mind here
I think answers to this are highly downstream of object-level positions.
If you think timelines are short and scaled-up versions of current architectures will lead to AGI, then ‘what went wrong’ is contributing to a vastly greater chance of extinction.
If you don’t agree with the above, then ‘what went wrong’ is dragging EA’s culture and perception too far toward a focus on AI safety, and causing great damage to all of EA (even the non-AI-safety parts) when the OpenAI board saga blew up in Toner and McCauley’s faces.
Lessons are probably downstream of this diagnosis.
My general lesson aligns with Bryan’s recent post: man, is EA bad about communicating what it is. Despite the OpenAI fiasco not being an attempted EA coup motivated by Pascal’s-mugging longtermist concerns, it seems so many people have that as a ‘cached explanation’ of what went on. Feels to me like that is a big own goal and was avoidable.
Also on OpenAI, I think it’s bad that people like Joshua Achiam who do good work at OpenAI seem to really dislike EA. That’s a really bad sign—feels like the AI Safety community could have done more not to alienate people like him maybe.
Also worth reading:
this article by the NY Times
[OpenAI’s press release](https://openai.com/blog/review-completed-altman-brockman-to-continue-to-lead-openai), where they summarize the report of the investigation, which contains very little information. Here’s their conclusion:
Sam Altman seems to have won the battle and gained a very strong grip on OpenAI, but his reputation has taken a large hit.
I recommend adding “Sam Altman” to the title; it can act as a TL;DR. The current phrasing has a bit of a “click here to know more” vibe for me, like an ad (probably unintentionally).
Personally I think the other members are actually the bigger news here, seeing as Sam being added back seemed like a foregone conclusion (or at least, the default outcome, and him not being added back would have been news).
But anyway, my goal was just to link to the post without editorialising too much so that people can discuss it on the forum. For this I think a policy of copying the exact title from the article is good in general.