Pausing AI development is not a good policy to strive for. Nearly all regulation slows AI progress; that's what regulation does by default: it forces you to do other things before you can move forward. But a pause gets no additional benefit, whereas most other regulation does (a model registry, a chip registry, mandatory red teaming, dangerous-capability evals, model weights security standards, etc.). I don't know what the ideal policies are, but a "pause" with no other asks doesn't seem like the best one.
Pausing AI development for any meaningful amount of time is incredibly unlikely to occur. Pause advocates will claim they are shifting the Overton window, but frankly, they mainly seem to do a bunch of protesting where they do things like call Sam Altman and Dario Amodei evil.
Pause AI, the organization, frankly does juvenile stunts that make EA/AI safety advocates look less serious. Screaming that people are evil is unnuanced, juvenile, and very unlikely to build the bridges needed to actually accomplish things. It makes us look like idiots. I think EAs too often prefer to do research from their laptops rather than getting out into the real world and doing things; but doing things doesn't just mean protesting. It means crafting legislation like SB 1047. It means increasing the supply of mech interp researchers by training them. It means lobbying for safety standards on AI models.
Pause AI's premise is very "doomy" and only makes sense if you have extremely high AI extinction probabilities and believe the only way to prevent extinction is an indefinite pause in AI progress. Most people (including those inside EA) have far less confidence in how any particular AI path will play out, and far less confidence in what will or won't work and what good policies are. The Pause AI movement is very "soldier" mindset, not "scout" mindset.
This is assuming that the alignment/control problems are (a) solvable, and (b) solvable in time. I’m sceptical of (a), let alone (b).
None of the regulations you mention ("model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.") matter without at least a conditional pause when red lines are crossed (and arguably we've already crossed many previously stated red lines, with no consequences in terms of slowing down or pausing).
This and the following point are addressed by other commenters.
See above.
A lot of us have done our scouting (and continue to do so). The time for action is now (or never). Also, I don’t think your p(doom) has to be super high to conclude that the best course of action is pausing.
Hi Marcus, I’m in the mood for a bit of debate, so I’m going to take a stab at responding to all four of your points :)
LMK what you think!
1. This is an argument against a pause policy, not the Pause org or a Pause movement. I think discerning funders need to see the differences, especially if you're thinking on the margin.
2. “Pausing AI development for any meaningful amount of time is incredibly unlikely to occur.” I think anything other than AGI in less than 10 years is unlikely to occur, but that isn't a good argument not to work on safety. Scale and neglectedness matter, as well as tractability!
“they mainly seem to do a bunch of protesting where they do stuff like call Sam Altman and Dario Amodei evil.”
- Can you show evidence of this please?
3. “Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI safety advocates look less serious.”
- Samesies—can you provide evidence please?
In fact, this whole point seems pretty unjustified. It seems you’re basically arguing that advocacy doesn’t work? Is that correct?
4. “Pause AI’s premise … only makes sense if you have extremely high AI extinction probabilities”
Can you justify this point please? I think it is interesting but it isn’t really explained.
I don’t think there is a need for me to show the relationship here.
2/3. https://youtu.be/T-2IM9P6tOs?si=uDiJXEqq8UJ63Hy2 came up as the first search result when I searched “pause ai protest” on YouTube. In it, they chant things like “OpenAI sucks! Anthropic sucks! Mistral sucks!” and “Demis Hassabis, reckless! Dario Amodei, reckless!”
I agree that working on safety is a key moral priority. But working on safety looks a lot more like the things I linked to in #3. That’s what doing work looks like.
This seems to be what a typical protest looks like. I've seen videos of others. I consider these to be juvenile and unserious, and unlikely to build the necessary bridges to accomplish outcomes. I'll let others form their opinions.
The provided source doesn’t show PauseAI affiliated people calling Sam Altman and Dario Amodei evil.
Correct, I potentially misremembered. The things they definitely say, at least in this video, are “OpenAI sucks! Anthropic sucks! Mistral sucks!” and “Demis Hassabis, reckless! Dario Amodei, reckless!”
I would submit that I am at the very least directionally correct.
“Demis Hassabis, reckless!” honestly feels to me like a pretty tame protest chant. I did a Google search for “protest” and this was the first result. Signs are things like “one year of genocide funded by UT” which seems both substantially more extreme and less epistemically valid than calling Demis “reckless.”
My sense from your other points is that you just don't actually want Pause AI to accomplish their goals, so it's kind of over-determined for you. But if I wanted to tell a story about how a grassroots movement successfully got an international pause on AI, various people chanting that the current AI development process is reckless seems pretty fine to me?
Actually, I’m uncertain if pausing AI is a good idea and I wish the Pause AI people had a bit more uncertainty (on both their “p(doom)” and on whether pausing AI is a good policy) as well. I look at people who have 90%+ p(doom) as, at the very least, uncalibrated, the same way I look at the people who are dead certain that AI is going to go positively brilliant and that we should be racing ahead as fast as possible. It’s as if both of them aren’t doing any/enough reading of history. In the case of my tribe
I would submit that this kind of protesting, including/especially the example you posted, makes your cause seem dumb, unnuanced, and ridiculous to onlookers who are indifferent or know little.
Last, I was just responding to the prompt “What are some criticisms of PauseAI?”. It’s not exactly the place for a “fair and balanced view” but also, I think it is far more important to critique your own side than the opposite side since you speak the same language as your own team so they will actually listen to you.
What is a reasonable p(doom|ASI) to hold and still not conclude that pausing AI is a good idea? Or: what % chance of death are you personally willing to accept for a shot at immortality/utopia? Would your answer be the same if it were framed as a game of Russian Roulette?
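The Russian Roulette framing can be made concrete with a toy expected-value calculation. This is only a sketch: the utility scale and every probability below are illustrative assumptions, not estimates from anyone in this thread.

```python
# Toy expected-value sketch of the "Russian Roulette" framing.
# All numbers are illustrative assumptions, not anyone's actual estimates.

def expected_value(p_doom: float, u_utopia: float, u_doom: float) -> float:
    """Expected utility of racing ahead, given p(doom|ASI)."""
    return p_doom * u_doom + (1 - p_doom) * u_utopia

# Normalize: utopia = +1, extinction = -1, on an arbitrary symmetric scale.
# 1/6 is the Russian Roulette chamber odds; the rest are sample p(doom) values.
for p in (1 / 6, 0.10, 0.50, 0.90):
    ev = expected_value(p, u_utopia=1.0, u_doom=-1.0)
    print(f"p(doom|ASI) = {p:5.1%} -> EV = {ev:+.3f}")
```

On this symmetric, risk-neutral toy scale the expected value only goes negative once p(doom) exceeds 50%. The point of the Russian Roulette framing is precisely that most people are not risk-neutral: they would refuse a 1-in-6 chance of death even for a very large upside.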
Strong +1 on #3
I can try to answer 3 for Marcus. Imagine that AI policy is a soccer game for professional soccer players. You’ve put in a lot of practice, know the rules, and know how to work well with your teammates. You’re scoring some goals.
Then someone from a pick-up league who is just learning to play soccer comes along and tries to be on the team, or, in this case, isn't even aware there is a team. If we let them on, not only do we look bad to the other team, but since policy is a team sport, they drive our overall impact down: they become dead weight we have to guard against, doing things they think are helpful but aren't, depleting energy and resources better spent on scoring goals.
I think in terms of this analogy, there are no midfielders, let alone strikers, on the pitch amongst the professionals. No one is even really trying to score goals. Maybe they are going for corners at best. Many are even colluding with the other team and their supporters to make money throwing the match.
That’s just completely false. Sorry I can’t say more.