I love to see people coming to this simple and elegant case in their own way and from their own perspective; this is excellent for spreading the message and helps to keep it grounded. I was very happy to see this on the Forum :)
As for whether Pause is the right policy (I’m a founder of PauseAI and ED of PauseAI US): we can quibble about types of pauses or possible implementations, but I think “Pause NOW” is the strongest and clearest message. I think anything about delaying a pause or timing it perfectly is the unrealistic thing that makes it harder to achieve consensus and to have the effect we want, and Carl should know better. I’m still very surprised he said it given how much he seems to get the issue, but I think it comes down to trying to “balance the benefits and the risks”. Imo the best we can do for now is slam the brakes and not drive off the cliff, and we can worry about benefits after.
When we treat international cooperation or a moratorium as unrealistic, we weaken our position and make that more true. So, at least when you go to the bargaining table, if not here, we need to ask for all of what we want without pre-surrendering. “Pause AI!”, not “I know it’s not realistic to pause, but maybe you could tap the brakes?” What’s realistic is to some extent what the public says is realistic.
On my view, the OP’s text citing me left out the most important argument from the section they linked: the closer and tighter the international AI race is as the world reaches strong forms of AGI and ASI, the less slack there is for things like alignment. The US and Chinese governments have the power to prohibit their own AI companies from negligently (or willfully) racing to create AI that overthrows them, if they believed that was a serious risk and wanted to prioritize stopping it. That willingness will depend on scientific and political efforts, but even if those succeed enormously, international cooperation between the US and China will pose additional challenges. The level of conviction in the risks that governments would need would be much higher than what’s needed to rein in their own companies without outside competition, and there would be more political challenges.
Absent an agreement with enough backing to stick, a slowdown by the US tightens the international gap in AI and means less slack (and less ability to pause when it counts) and more risk of catastrophe in the transition to AGI and ASI. That’s a serious catastrophe-increasing effect of early unilateral pauses (which are ineffectual at reducing risk). You can support governments having the power to constrain AI companies from negligently destroying them, and international agreements between governments to use those powers in a coordinated fashion (taking steps to assure each other in doing so), while not supporting a unilateral pause that makes the AI race even tighter.
I think there are some important analogies with nuclear weapons. I am a big fan of international agreements to reduce nuclear arsenals, but I oppose the idea of NATO immediately destroying all its nuclear weapons and then suffering nuclear extortion from Russia and China (which would also still leave the risk of nuclear war between the remaining nuclear states). Unilateral reductions as a gesture of good faith that still leave a deterrent can be great, but that’s much less costly than evening up the AI race (minimal arsenals for deterrence are not that large).
“So, at least when you go to the bargaining table, if not here, we need to ask for all of what we want without pre-surrendering. “Pause AI!”, not “I know it’s not realistic to pause, but maybe you could tap the brakes?” What’s realistic is to some extent what the public says is realistic.”
I would think your full ask should be the international agreement between states, with companies regulated by states in accord with that, not a unilateral pause by the US (currently leading by a meaningful margin) until AI competition is neck-and-neck.
And people should consider both the possibility of ultimate success and the possibility of failure in your advocacy, and be wary of intermediate goals that make things much worse if you ultimately fail to get global arrangements but make them only slightly more likely to succeed. I think it is certainly possible some kind of inclusive (e.g. including all of the P-5) international deal winds up governing and delaying the AGI/ASI transition, but it is also extremely plausible that it doesn’t, and I wouldn’t write off the consequences in the latter case.
“Absent an agreement with enough backing to stick, a slowdown by the US tightens the international gap in AI and means less slack (and less ability to pause when it counts) and more risk of catastrophe in the transition to AGI and ASI.”

I agree this mechanism seems possible, but it seems far from certain to me. Three scenarios where it would fail:
1. One country pauses, which gives the other country a commanding lead with even more slack than anyone had before.
2. One country pauses, and the other country, facing reduced incentives for haste, also pauses.
3. One country pauses, which significantly slows down the other country as well, because it was acting as a fast-follower, copying the models and smuggling the chips from the leader.
An intuition pump I like here is to think about how good it would be if China credibly paused unilaterally, and then to see how many of these scenarios would also apply to the US.
Sure, these are possible. My view above was about expectations. #1 and #2 are possible, although they look less likely to me. There’s some truth to #3, but the net effect is still gap-closing, and the slowing tends to come earlier (when it is less impactful) rather than later.
(Makes much more sense if you were talking about unilateral pauses! The PauseAI pause is international, so that’s just how I think of Pause.)
Right, those comments were about the big pause letter, which, while nominally global, in fact applied at the time only to the leading US lab, and even if voluntarily complied with would not have affected the PRC’s efforts to catch up in semiconductor technology, nor Chinese labs catching up algorithmically (as they have partially done).
Thanks for the comments Holly! Two follow-ups:
The PauseAI website says “Individual countries can and should implement this measure right now.” Doesn’t that mean that they’re advocating for unilateral pausing, regardless of other actors’ choices, even if the high-level/ideal goal is a global pause?
If all of the important decision makers (globally) agreed on the premise that powerful AI/AGI/ASI is too risky, then I think there would still be a discussion around aligning on how close we are, how close we should be when we pause, how to enforce a pause, and when it would be safe to un-pause. But to even get to that point, you need to convince those people of the premise, so it seems premature to me to focus the messaging on the pause aspect if the underlying reasons for the pause aren’t agreed upon. So something more like “We should pause global AI development at some point/soon/now, but only if everyone stops at once, because if we get AGI right now we’re probably doomed, because [insert arguments here]”. But it sounds like you think that messaging is perhaps too complex and it’s beneficial to simplify it to just “Pause NOW”?
(FYI I’m the ED of PauseAI US and we have our own website pauseai-us.org)
1. It is morally incumbent on every actor to do the right thing by not advancing dangerous capabilities, regardless of whether everyone else does the same, even though everyone pausing and then agreeing to safe development standards is the ideal solution. That’s what that language refers to. I’m very careful about taking positions as an org, but, personally, I also think unilateral pauses would make the world safer compared to no pauses by slowing worldwide development. In particular, if the US were to pause capabilities development, our competitors wouldn’t have our frontier research to follow/imitate, and it would take other countries longer to generate those insights themselves.
2. “PauseAI NOW” is not just the simplest and best message to coordinate around, it’s also an assertion that we are ALREADY in too much danger. You pause FIRST, then sort out the technical details.
Thanks for the thoughtful feedback Carl, I appreciate it. This is one of my first posts here, so I’m unsure of the norms: is it acceptable/preferred that I edit the post to add that point to the bulleted list in that section (and if so, do I add an “edited to add” or similar tag), or just leave it to the comments for clarification?
I hope the bulk of the post made it clear that I agree with what you’re saying—a pause is only useful if it’s universal, and so what we need to do first is get universal agreement among the players that matter on why, when, and how to pause.