Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy
Link post
If you enjoy this, please consider subscribing to my Substack.
Sam Altman has said he thinks that developing artificial general intelligence (AGI) could lead to human extinction, but OpenAI is trying to build it ASAP. Why?
The common story for how AI could overpower humanity involves an “intelligence explosion,” where an AI system becomes smart enough to further improve its capabilities, bootstrapping its way to superintelligence. Even without any kind of recursive self-improvement, some AI safety advocates argue that a large enough number of copies of a genuinely human-level AI system could pose serious problems for humanity. (I discuss this idea in more detail in my recent Jacobin cover story.)
Some people think the transition from human-level AI to superintelligence could happen in a matter of months, weeks, days, or even hours. The faster the takeoff, the more dangerous, the thinking goes.
Sam Altman, circa February 2023, agrees that a slower takeoff would be better. In an OpenAI blog post called “Planning for AGI and beyond,” he argues that “a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.”
So why does rushing to AGI help? Altman writes that “shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang.”
Let’s set aside the first claim, which is far from obvious to me.
Compute, shorthand for computational resources, is one of the key inputs into training AI models. Altman is basically arguing that the longer it takes to get to AGI, the cheaper and more abundant compute will be, which can then be plowed back into improving or scaling up the model.
The amount of compute used to train AI models has increased roughly one-hundred-millionfold since 2010. Compute supply has not kept pace with demand, driving up prices and rewarding the companies that have near-monopolies on chip design and manufacturing.
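To put that growth rate in perspective: a hundred-millionfold increase over roughly fourteen years works out to a doubling time of about six months, far faster than Moore’s law. Here’s the back-of-envelope arithmetic as a quick sketch (the fourteen-year window, 2010 to early 2024, is my own rounding):

```python
import math

# Back-of-envelope: if training compute grew ~100,000,000x between 2010
# and early 2024 (~14 years, my rounding), how often did it double?
growth_factor = 1e8
years = 14

doublings = math.log2(growth_factor)          # ~26.6 doublings
months_per_doubling = years * 12 / doublings  # ~6.3 months

print(f"~{doublings:.0f} doublings, one every ~{months_per_doubling:.1f} months")
```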
Last May, Elon Musk said that “GPUs at this point are considerably harder to get than drugs” (and he would know). One startup CEO said “It’s like toilet paper during the pandemic.”
Perhaps no one has benefited more from the deep learning revolution than the 31-year-old GPU designer Nvidia. GPUs, chips originally designed to process 3D video game graphics, were discovered to be the best hardware for training deep learning models. Nvidia, once little-known outside of PC gaming circles, reportedly accounts for 88 percent of the GPU market and has ridden the wave of AI investment. Since OpenAI’s founding in December 2015, Nvidia’s valuation has risen more than 9,940 percent, breaking $1 trillion last summer. CEO and cofounder Jensen Huang was worth $5 billion in 2020. His net worth is now $64 billion.
If, as seems likely, training a human-level AI system requires an unprecedented amount of computing power, close to economic and technological limits, and if additional compute is needed to increase the scale or capabilities of the system, then takeoff speed may be rate-limited by the availability of this key input. This kind of reasoning is probably why Altman thinks a smaller compute overhang will result in a slower takeoff.
Given all this, many in the AI safety community think that increasing the supply of compute will increase existential risk from AI, by both shortening timelines AND increasing takeoff speed — reducing the time we have to work on technical safety and AI governance and making loss of control more likely.
So why is Sam Altman reportedly trying to raise trillions of dollars to massively increase the supply of compute?
Last night, the Wall Street Journal reported that Altman was in talks with the UAE and other investors to raise up to $7 trillion to build more AI chips.
I’m going to boldly predict that Sam Altman will not raise $7 trillion to build more AI chips. But even one percent of that total would nearly double the amount of money spent on semiconductor manufacturing equipment last year.
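To make the scale concrete, here’s a rough sketch of the arithmetic. The 2023 figure for global spending on semiconductor manufacturing equipment is my own ballpark assumption of about $100 billion, not a number from the reporting above:

```python
# Back-of-envelope for the comparison above. The 2023 baseline for global
# spending on semiconductor manufacturing equipment is my ballpark
# assumption (~$100 billion); swap in a better figure if you have one.
proposed_raise = 7e12                 # $7 trillion, the reported upper bound
one_percent = 0.01 * proposed_raise   # $70 billion

equipment_spend_2023 = 100e9          # assumed ~$100B baseline

new_total = equipment_spend_2023 + one_percent
print(f"1% of $7T = ${one_percent / 1e9:.0f}B")
print(f"Added to the assumed baseline, annual spending would reach "
      f"${new_total / 1e9:.0f}B, about {new_total / equipment_spend_2023:.1f}x last year's total")
```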
Perhaps most importantly, Altman’s plan seems to fly in the face of the arguments he made not even one year ago. Increasing the supply of compute is probably the purest form of boosting AI capabilities and would increase the compute overhang that he claimed to worry about.
The AI safety community sometimes divides AI research into capabilities and safety, but some researchers push back on this dichotomy. A friend of mine who works as a machine learning academic once wrote to me that “in some sense, almost all [AI] researchers are safety researchers because the goal is to try to understand how things work.”
Altman makes a similar point in the blog post:

Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.

There are good reasons to doubt the numbers reported above (mostly because they’re absurdly, unprecedentedly big). But feasible or not, this effort to massively expand the supply of compute is hard to square with that argument. Making compute cheaper speeds things up without any necessary increase in understanding.
Early reports of Altman’s Middle East chip plans emerged in the wake of November’s board drama. It’s worth noting that Helen Toner and Tasha McCauley, two of the (now ex-) board members who voted to fire Altman, reviewed drafts of the February 2023 blog post. While I don’t think there was any single smoking gun that prompted the board to fire him, I’d be surprised if these plans didn’t increase tensions.
OpenAI deserves credit for publishing blog posts like “Planning for AGI and beyond.” Given the stakes of what they’re trying to do, it’s important to look at how OpenAI publicly reasons about these issues (of course, corporate blogs should be taken with a grain of salt and supplemented with independent reporting). And when the actions of company leaders seem to contradict these documents, it’s worth calling that out.
If Sam Altman has changed his mind about compute overhangs, it’d be great to hear about it from him.