Thanks for the comments Holly! Two follow-ups:
The PauseAI website says “Individual countries can and should implement this measure right now.” Doesn’t that mean they’re advocating for unilateral pausing, regardless of other actors’ choices, even if the high-level/ideal goal is a global pause?
If all of the important decision-makers (globally) agreed on the premise that powerful AI/AGI/ASI is too risky, then I think there would still be a discussion around aligning on how close we are, how close we should be when we pause, how to enforce a pause, and when it would be safe to un-pause. But to even get to that point, you need to convince those people of the premise, so it seems premature to me to focus the messaging on the pause itself when the underlying reasons for a pause aren’t agreed upon. Something more like: “We should pause global AI development at some point/soon/now, but only if everyone stops at once, because if we get AGI right now we’re probably doomed, because [insert arguments here].” But it sounds like you think that messaging is perhaps too complex, and that it’s beneficial to simplify it to just “Pause NOW”?
I agree that it seems like a valuable framing, thanks Matthew.