> Maybe we will suddenly discover that the difference between GPT-4 and superhuman level is actually quite small. Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self-improve by rewriting its own weights.
This paragraph seems too weak for how important it is in the argument. Notably, I doubt we'll discover the difference between GPT-4 and superhuman to be small, and I doubt GPT-5 will be extremely good at interpretability. That we have several years before these become real possibilities is, to me, an important part of why I don't advocate for pausing now. I respect the unpopularity of protesting, though.
> I doubt we'll discover the difference between GPT-4 and superhuman to be small, and I doubt GPT-5 will be extremely good at interpretability.

I also doubt it, but I am not 1 in 10,000 confident.
The important question for the argument is whether GPT-6 will pose an unacceptable risk.
The main message of this post is that the current PauseAI protest's primary purpose is to build momentum for a later point.
> The important question for the argument is whether GPT-6 will pose an unacceptable risk.
Is it? Are PauseAI clear that they think GPT-5 will almost certainly be fine?
This post is just my view. As with Effective Altruism, PauseAI does not have a homogeneous point of view or a specific required set of beliefs to participate. I expect that the main organizers of PauseAI agree that GPT-5 is very unlikely to end the world. Whether they think it poses an acceptable risk, I'm not sure.
Idk, I’d put GPT-5 at a ~1% x-risk, or crossing-the-point-of-no-return risk (unacceptably high).