This paragraph seems too weak for how important it is in the argument. Notably, I doubt the difference between GPT-4 and superhuman AI will turn out to be small, and I doubt GPT-5 will be extremely good at interpretability.
The important question for the argument is whether GPT-6 will pose an unacceptable risk.
The main message of this post is that the current PauseAI protests' primary purpose is to build momentum for a later point.
This post is just my view. As with Effective Altruism, PauseAI does not have a homogeneous point of view or a specific set of beliefs required to participate. I expect that the main organizers of PauseAI agree that GPT-5 is very unlikely to end the world. Whether they think it poses an acceptable risk, I'm not sure.
> The important question for the argument is whether GPT-6 will pose an unacceptable risk.

Is it? Are PauseAI clear that they think GPT-5 will almost certainly be fine?
> The main message of this post is that the current PauseAI protests' primary purpose is to build momentum for a later point. [...] I expect that the main organizers of PauseAI agree that GPT-5 is very unlikely to end the world. Whether they think it poses an acceptable risk, I'm not sure.

Idk, I'd put GPT-5 at ~1% x-risk, or risk of crossing a point of no return, which is unacceptably high.