I think the factor missing here is when pushing for a pause is appropriate.
Imagine a (imo likely) scenario: a massive campaign gets off the ground, with a lot of publicity behind it, to try to prevent GPT-5 from being released on existential-risk grounds. It fails, GPT-5 is released anyway, and nothing majorly bad happens. Then the same thing plays out for GPT-6 and GPT-7.
In this scenario, the idea of pausing AI could easily become a laughing stock. Then, when an actually dangerous AI comes along, pausing is still discredited, and you're missing a tool precisely when you need it.
Even if I believed the overall risk of doom was 5% (way too high imo), I wouldn't support the pause movement now; I'd hold off on advocating a pause until there was a significant chance of imminent danger.
Yeah, I agree. I wrote about timing considerations here; this is an important part of the discussion.