Yeah, I asked my editor at TIME about adding an update. Will edit this piece as well.
Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit
Anthropic is Quietly Backpedalling on its Safety Commitments
What OpenAI Told California’s Attorney General
Four Predictions About OpenAI’s Plans To Retain Nonprofit Control
OpenAI Alums, Nobel Laureates Urge Regulators to Save Company’s Nonprofit Structure
A Deep Research response that doesn’t discuss Trump 2 at all is not very useful and could even mislead someone not currently paying attention.
Inside OpenAI’s Controversial Plan to Abandon its Nonprofit Roots
Top OpenAI Catastrophic Risk Official Steps Down Abruptly
I’m hiring a Research Assistant for a nonfiction book on AI!
What the Headlines Miss About the Latest Decision in the Musk vs. OpenAI Lawsuit
DeepSeek Made it Even Harder for US AI Companies to Ever Reach Profitability
Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?
Thanks for writing this, but apparently the waiver is not totally effective (I have this on good authority, but can’t really say more right now). See this paragraph from the NYT article: “The waiver, announced by Secretary of State Marco Rubio, seemed to allow for the distribution of H.I.V. medications, but whether the waiver extended to preventive drugs or other services offered by the program, the President’s Emergency Plan for AIDS Relief, was not immediately clear.”
Is AI Hitting a Wall or Moving Faster Than Ever?
We are in a New Paradigm of AI Progress—OpenAI’s o3 model makes huge gains on the toughest AI benchmarks in the world
Thanks Sarah!
Bengio and Hinton are the two most-cited researchers alive. Ilya Sutskever is the third most-cited AI researcher, and though he’s not on that paper, OpenAI’s superalignment intro blog post says this: “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.” LeCun is probably the top AI researcher who’s not worried about controlling a superintelligence (fourth in total citations, after Sutskever).
This is obviously a semantics disagreement, but I stand by the original claim. Note that I’m not saying that all the top AI researchers are worried about x-risk.
Regarding your overall point: it doesn’t rebut the idea that some people have been cynically exploiting AI fears for their own gain. I mean, remember that OpenAI was founded as an AI safety organisation. Sam Altman’s actions seem entirely consistent with someone hyping x-risk to win funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to everyone, or even the majority, but it does seem to have happened at least once.
I largely agree with this and alluded to this possibility here:
If AI companies ever needed to rely on doomsday fears to lure investors and engineers, they definitely don’t anymore.
I might write a separate piece on the best evidence for the hype argument; I think OpenAI has been its biggest winner. My guess is that Altman actually did believe what he was saying about AI risk back in 2015. Superintelligence came out the year before, and it’s not a surprising view for him to have held, given what else we know about him.
I’d also guess that Altman and Elon are two of the people most associated with the x-risk story, and that association has been the biggest driver of skepticism about it.
There’s also been more recent evidence of him ditching x-risk fears now that it seems convenient. From a recent Fox News interview:
Interviewer: “A lot of people who don’t understand AI, and I would put myself in that category, have got a basic understanding, but they worry about AI becoming sentient, about it making autonomous decisions, about it telling humans you’re no longer in charge?”
Altman: “It doesn’t seem to me to be where things are heading…is it conscious or not will not be the right question, it will be how complex of a task can it do on its own?”
Interviewer: “What about when the tool gets smarter than we are? Or the tool decides to take over?”
Altman: “I think tools in many senses are already smarter than we are. I think that the internet is smarter than you or I, the internet knows a lot of things. In fact, society itself is vastly smarter and more capable than any one person. I think we’re already good at working with tools, institutions, structures, whatever you want to call it, that are vastly more capable than one person and as long as we have a reasonably level playing field where [no] one person or one company has vastly more power than anybody else, I think we know how to deal with that.”
Yeah, I think this is a more significant walkback; I discussed it here: https://x.com/GarrisonLovely/status/1926095320997368319?t=vfuPigtomkOn5qc9Z8jCmQ&s=19
Good find.