Paul—you wrote that ‘If the world were unified around the priority of minimizing global catastrophic risk, I think that we could reduce risk significantly further by implementing a global, long-lasting, and effectively enforced pause on frontier AI development—including a moratorium on the development and production of some types of computing hardware. The world is not unified around this goal....’
I think that underestimates the current public consensus and concern about AI risk. The polls I’ve seen suggest widespread public hostility to AGI development, and skepticism about the AI industry’s capacity to manage AI development safely. Indeed, public sentiment seems much closer to that of AI Safety experts (e.g. within EA) than it does to the views of AI industry insiders (such as Yann LeCun), or to e/acc people who yearn for ‘the Singularity’.
I’m still digesting the implications of these opinion polls, but I think they should nudge EAs towards a fairly significant update to our expectations about the role that the public could play in supporting an AI Pause. It’s worth remembering that the public has seen depictions of dangerous AI in novels, movies, and TV series ever since the 1927 movie ‘Metropolis’ (or, arguably, even since the 1818 novel ‘Frankenstein’). Ordinary folks are primed to understand that AI is very risky. They might not understand the details of technical AI alignment, or RSPs, or LLMs, or deep learning. But the political will seems to be there to support an AI Pause.
My worry is that we EAs have spent so many years assuming the public can’t understand AI risks that we’re still pushing ahead on technical and policy solutions, because that’s what we’re used to doing. And we assume the political will isn’t there to do anything more significant and binding to reduce X-risk. But perhaps the public will really is there.