Great post, Jeffrey! I had been having thoughts along these lines, so I am glad there is now a post I can point to!
In my mind, a long pause should also be conditional on safety levels, i.e. the pause is not just for the sake of pausing. However, I would say such safety levels should be quite high, because non-AI risks are manageable without AI, and are often exaggerated (although I also believe risk from AI is exaggerated).