Why I’m doing PauseAI

GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it’s hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict fairly well what the cross-entropy loss will be, but pretty much nothing else.
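To make the “predictable loss” claim concrete: pre-training loss follows empirical scaling laws. Here is a minimal sketch using the published Chinchilla parametric fit (Hoffmann et al., 2022); the constants and the example model size are illustrative assumptions, not anything OpenAI has said about GPT-5.

```python
# Sketch: why cross-entropy loss is the one predictable quantity.
# Chinchilla-style scaling law (Hoffmann et al., 2022):
#     L(N, D) = E + A / N**alpha + B / D**beta
# where N = parameter count and D = training tokens. The constants are
# the published Chinchilla fits; a lab forecasting GPT-5's loss would
# fit its own (proprietary) constants on smaller training runs.

E = 1.69                 # irreducible loss of natural text
A, alpha = 406.4, 0.34   # parameter-count term
B, beta = 410.7, 0.28    # data-size term

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Hypothetical example: a 1e12-parameter model trained on 2e13 tokens.
print(f"{predicted_loss(1e12, 2e13):.2f} nats/token")  # ~1.80
```

The equation pins down the loss and essentially nothing else: the capabilities that emerge at a given loss do not fall out of it.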
Maybe we will suddenly discover that the difference between GPT-4 and superhuman level is actually quite small. Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self-improve by rewriting its own weights.
Hopefully model evaluations can catch catastrophic risks before wide deployment, but again, it’s hard to be sure. GPT-5 could plausibly be devious enough to circumvent all of our black-box testing. Or it may be that it’s too late as soon as the model has been trained. These are small but real possibilities, and it’s a significant milestone of failure that we are now taking these kinds of gambles.
How do we do better for GPT-6?
Governance efforts are mostly focussed on relatively modest goals. Few people are directly aiming at the question: how do we stop GPT-6 from being created at all? It’s difficult to imagine a world where governments actually prevent Microsoft from building a $100 billion AI training data center by 2028.
In fact, OpenAI apparently fears governance so little that they just went and told the UK government that they won’t give it access to GPT-5 for pre-deployment testing [Edit − 17 May 2024: I now think this is probably false]. And the number of safety-focussed researchers employed by OpenAI is dropping rapidly.
Hopefully there will be more robust technical solutions for alignment available by the time GPT-6 training begins. But few alignment researchers actually expect this, so we need a backup plan.
Plan B: Mass protests against AI
In many ways AI is an easy thing to protest against. Climate protesters are asking to completely reform the energy system, even if it decimates the economy. Israel / Palestine protesters are trying to sway foreign policies on an issue where everyone already holds deeply entrenched views. Social justice protesters want to change people’s attitudes and upend the social system.
AI protesters are just asking to ban a technology that doesn’t exist yet. About 0% of the population deeply cares that future AI systems are built. Most people support pausing AI development. It doesn’t feel like we’re asking normal people to sacrifice anything. They may in fact be paying a large opportunity cost by forgoing the potential benefits of AI, but that’s not something many people will get worked up about. Policy-makers, CEOs and the other key decision makers that governance solutions have to persuade are among the only groups highly motivated to let AI development continue.
No innovation required
Protests are the most unoriginal way to prevent an AI catastrophe—we don’t have to do anything new. Previous successful protesters have written detailed instructions for how to build a protest movement.
This is the biggest advantage of protests compared to other solutions—it requires no new ideas (unlike technical alignment) and no one’s permission (unlike governance solutions). A sufficiently large number of people taking to the streets forces politicians to act. A sufficiently large and well organized special interest group can control an issue:
I walked into my office while this was going on and found a sugar lobbyist hanging around, trying to stay close to the action. I felt like being a smart-ass so I made some wise-crack about the sugar industry raping the taxpayers. Without another word, I walked into my private office and shut the door. I had no real plan to go after the sugar people. I was just screwing with the guy.
My phone did not stop ringing for the next five weeks…. I had no idea how many people in my district were connected to the sugar industry. People were calling all day, telling me they made pumps or plugs or boxes or some other such part used in sugar production and I was threatening their job. Mayors called to tell me about employers their towns depended on who would be hurt by a sugar downturn. It was the most organized effort I had ever seen.
And that’s why you don’t fuck with sugar.
The discomfort of doing something weird
If we are correct about the risk of AI, history will look kindly upon us (assuming we survive). People already broadly know about AI x-risk and understand that it is not a ridiculous conspiracy theory. But for now, protesting about AI is kind of odd. This doesn’t have to be a bad thing—PauseAI protests are a great way to meet interesting, unusual people. Talking about PauseAI is a conversation starter because it’s such a surprising thing to do.
When AI starts to have a large impact on the economy, it will naturally move up the priority list of the general population. But people react too late to exponentials. If AI continues to improve at the current rate, the popular reaction may come too late to avoid the danger. PauseAI’s aim is to bring that reaction forward.
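As a toy illustration of reacting late to an exponential (the doubling time and danger threshold here are invented numbers, not a forecast):

```python
# Toy model: a capability proxy doubles every year, and "danger" is
# the level reached in year 10. Both numbers are made up; the point
# is the shape of the curve, not the dates.
danger = 2 ** 10
for year in range(1, 11):
    level = 2 ** year
    print(f"year {year:2d}: {100 * level / danger:5.1f}% of danger level")
# Three years before the threshold, the proxy is at only 12.5% and
# looks comfortably far away; almost all of the growth is still to come.
```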
Some AI researchers think that they should not go to protests because it is not their comparative advantage. But this is wrong: the key skill required is the ability to do something weird—to take ideas seriously and to actually try to fix important problems. The protests are currently so small that the marginal impact of an extra person showing up for a couple of hours once every few months is very large.
Preparing for the moment
I think a lot about this post from just after ChatGPT came out, asking why the alignment community wasn’t more prepared to seize the moment when everyone suddenly noticed that AI was getting good. It’s a good question, and part of the answer is that most alignment researchers did not see the moment coming.
There will be another moment like that, when people realize that AI is coming for their jobs imminently and that AI is an important issue affecting their lives. We need to be prepared for that opportunity, and the small movement that PauseAI builds now will be the foundation that bootstraps this larger movement in the future.
To judge the value of AI protests by the current, small protests would be to judge the impact of AI by the current language models (a mistake which most of the world appears to be making). We need to build the mass movement. We need to become the Sugar Lobby.
PauseAI’s next protest is on Monday 13 May, in 8 cities around the world.