I’m highly skeptical about the risk of AI extinction, and highly skeptical that there will be a singularity in our near-term future.
However, I am concerned about near-term harms from AI systems such as misinformation, plagiarism, enshittification, job loss, and climate costs.
How are you planning to appeal to people like me in your movement?
Yes, very much so. PauseAI US is a coalition of people who want to pause frontier AI training, whatever their reasons may be. This is the great strength of the Pause position: it’s simply the sensible next step when you’re playing with a powerful unknown and don’t know what you’re doing, regardless of which feared outcome is most salient to you. The problem is how much could go wrong with AI, both what we can predict and what we can’t, not any one particular set of risks, and Pause is one of the only general solutions.
Our community includes x-risk motivated people, artists who care about abuse of copyright and losing their jobs, SAG-AFTRA members whose primary issue is digital identity protection and digital provenance, diplomats whose chief concern is equality between the Global North and Global South, climate activists, anti-deepfake activists, and people who don’t want an AI Singularity to take away all meaningful human agency. My primary fear is x-risk, as it is for most of the leadership across the PauseAIs, but I’m also very concerned about digital sentience and think that Pause is the only safe next step for the sake of potentially sentient digital minds themselves. Pause comfortably accommodates the gamut of AI risks.
And the Pause position holds this huge set of concerns together without conflict. The silly feud between AI ethics and AI x-risk doesn’t make sense through the lens of Pause: both causes would be helped by not building even more powerful models before we know what we’re doing, so they aren’t competing. Similarly, with Pause, there’s no need to choose between a near-term and a long-term focus.