[Edit: I no longer feel confident about this comment; see thread right below]
Hm, I don’t think Altman looks good here either.
We have to wait and see how the public or the media will react, but to me, this casts doubt on some things he said previously about his specific motivations for building AGI. It’s hard to square believing that working under Microsoft’s leadership (and their need to compete with other companies like Alphabet) is a good environment for making AGI breakthroughs with thinking that it’ll likely go really well.
Although, maybe he’s planning to only build specific apps for Microsoft rather than intending to build AGI there? That would seem like an atypical reduction of ambition/scope to me. Or maybe the plan is “amass more money and talent and then go back to OpenAI if possible, or otherwise start a new AGI thing with more independence from profit-driven structures.” That would be more understandable, but it also feels like he’d be being very agentic about this goal in a way that’s scary, and like I’d have to trust this one person’s judgment about pulling the brakes when it becomes necessary, even though there’s now evidence that many people think he hasn’t been cautious enough recently.
I guess we have to wait and see.
Perhaps some of his motivation was to keep OpenAI from imploding?
Hm, very good point! I now think that could be his most immediate motivation. It would feel sad to build something and then see it implode (and to see the team left in limbo). On reflection, that makes me think maybe Sam doesn’t necessarily look that bad here. I’m sure Microsoft tried to use their leverage to push for changes, and the OpenAI board stood its ground, so it can’t have been easy to find a solution that didn’t involve the company falling apart over these disagreements.
One thing I hadn’t realised is that Ilya Sutskever signed this open letter as well (and he’s on the board!).
Oh yes, that is weird. The impression I had was that Ilya might even have been behind Sam’s ousting (based on rumours from the internet). I also understood that sacking Sam needed 4 out of 6 board members, and since two of the board members were Sam A and Greg B, that meant everyone else had to have voted for him to leave, including Ilya. Most confusing.
Sutskever appears to have regrets:
https://twitter.com/ilyasut/status/1726590052392956028
How much credibility does he still have left after backtracking?
It’s bizarre, isn’t it?
Very much hoping the board makes public some of the reasons behind the decision.
His recent twitter post:
“I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
Either he wasn’t behind the push, or he was but subsequently decided it was a huge mistake.
Bizarrely, the OpenAI board proposed a merger with Anthropic.
Well, maybe. https://manifold.markets/DanielFilan/on-feb-1-2024-will-i-believe-that-o?r=RGFuaWVsRmlsYW4
There’s a decent history of the board changes in OpenAI here: https://loeber.substack.com/p/a-timeline-of-the-openai-board
I think the point that Toner & McCauley are conflicted because of OpenPhil/Holden’s connections to Anthropic is a pretty weak argument. But the facts are all verified & pretty basic.
A number of things stand out:
lots of turnover
some turnover for unclear reasons
Adam D’Angelo has a clear conflict of interest
I’m also very curious whether anyone knows more about how McCauley came to be on the board, and more generally about her background. I hadn’t heard of her before, and she’s apparently an important player now (also in EA, as an EV UK board member).
Is there still anything the EA community can do regarding AGI safety if a full-scale arms race toward AGI is coming soon, with OpenAI almost surely being absorbed by Microsoft?
Personally, I still think there is a lot of uncertainty around how governments will act. There are at least some promising signs (e.g., UK AI Safety Summit) that governments could intervene to end or substantially limit the race toward AGI. Relatedly, I think there’s a lot to be done in terms of communicating AI risks to the public & policymakers, drafting concrete policy proposals, and forming coalitions to get meaningful regulation through.
Some folks also have hope that internal governance (lab governance) could still be useful. I am not as optimistic here, but I don’t want to rule it out entirely.
There’s also some chance that we end up getting more concrete demonstrations of risks. I do not think we should wait for these, and I think there’s a sizable chance we do not get them in time, but I think “have good plans ready to go in case we get a sudden uptick in political will & global understanding of AI risks” is still important.
I think that trying to get safe concrete demonstrations of risk by doing research seems well worth pursuing (I don’t think you were saying it’s not).
Too soon to tell, I think. Probably better to wait for the dust to settle.
(Not that it matters much, but my own guess is that many of us who are x-risk focused should a) be as cooperative and honest with the public about our concerns with superhuman AI systems as possible, and hope that there’s enough time left for the balance of reason to win out, and b) focus on technical projects that don’t involve much internal politics.
Working on cause areas that are less fraught than x-risk also seems like a comparatively good idea, now.
Organizational politics is both corrupting and not really our (or at least my) strong suit, so best to leave it to others).
My guess is that the letter is largely a bluff. I don’t think these people want to work for Microsoft. I’m surprised Altman decided that was his best move vs. starting his own company; perhaps this implies that starting from scratch is not as easy as we think. Microsoft has the license to most (all?) of OpenAI’s tech, so they would be able to hit the ground running.
From OpenPhil’s $30m to firing Sam, EA helped to create and grow one of the most formidable AI research teams, then handed it over to Clippy!
I think this is an overly reductive view of the situation, to be honest.