Right, those comments were about the big pause letter which, while nominally global, in fact applied at the time only to the leading US lab. Even if voluntarily complied with, it would not have affected the PRC’s efforts to catch up in semiconductor technology, nor Chinese labs catching up algorithmically (as they have partially done).
(Makes much more sense if you were talking about unilateral pauses! The PauseAI pause is international, so that’s just how I think of Pause.)
Thanks for the comments Holly! Two follow-ups:
1. The PauseAI website says “Individual countries can and should implement this measure right now.” Doesn’t that mean they’re advocating for unilateral pausing, regardless of other actors’ choices, even if the high-level/ideal goal is a global pause?
2. If all of the important decision-makers (globally) agreed on the premise that powerful AI/AGI/ASI is too risky, then I think there would still be a discussion about aligning on how close we are, how close we should be when we pause, how to enforce a pause, and when it would be safe to un-pause. But to even get to that point, you need to convince those people of the premise, so it seems premature to me to focus the messaging on the pause itself when the underlying reasons for the pause aren’t agreed upon. Something more like: “We should pause global AI development at some point/soon/now, but only if everyone stops at once, because if we get AGI right now we’re probably doomed, because [insert arguments here].” But it sounds like you think that messaging is perhaps too complex and it’s beneficial to simplify it to just “Pause NOW”?
(FYI I’m the ED of PauseAI US and we have our own website pauseai-us.org)
1. Every actor is morally obligated to do the right thing by not advancing dangerous capabilities, regardless of whether everyone else does the same, even though everyone pausing and then agreeing on safe development standards is the ideal solution. That’s what that language refers to. I’m very careful about taking positions as an org, but, personally, I also think unilateral pauses would make the world safer than no pauses by slowing worldwide development. In particular, if the US were to pause capabilities development, our competitors wouldn’t have our frontier research to follow/imitate, and it would take other countries longer to generate those insights themselves.
2. “PauseAI NOW” is not just the simplest and best message to coordinate around; it’s also an assertion that we are ALREADY in too much danger. You pause FIRST, then sort out the technical details.