GPT-4 is being used to speed up development of GPT-5 already. If GPT-5 can make GPT-6 on its own, it could then spiral into an unstoppable superintelligence, one with arbitrary goals that are incompatible with carbon-based life. How confident are we that this can’t happen? You’re right that they could do more to explain this in the letter. But I think broad appeal is what they were targeting (hence the mention of other, lesser concerns like job automation etc.).
To quote the linked text: “We’ve also been using GPT-4 internally, with great impact on functions like support, sales, content moderation, and programming.”
I don’t think “we used GPT to write a sales pitch” is evidence of an impending intelligence explosion. And having used GPT for programming myself, it’s mostly a speedup mechanism that still makes plenty of errors. It substitutes for the tedious part of coding that is currently done by googling Stack Exchange, not for the high-level design work.
The chance of “GPT-5 making GPT-6 on its own” is approximately 0%. GPT is trained to predict text, not to build chatbots.
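To make “trained to predict text” concrete, here’s a toy sketch of the objective (my illustration, not anyone’s actual training code; a real GPT swaps the single linear layer for a deep transformer stack, but the loss is the same):

```python
# Toy sketch of the next-token-prediction objective. The "model" here is a
# deliberately trivial stand-in for a transformer; only the loss matters.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # token ids -> vectors
    nn.Linear(embed_dim, vocab_size),     # scores for every candidate next token
)

tokens = torch.randint(0, vocab_size, (1, 16))  # a toy "document"
logits = model(tokens[:, :-1])                  # predictions at each position...
targets = tokens[:, 1:]                         # ...scored against the token that follows
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # the entire training signal: predict the next token
```

Nothing in that objective optimises for “build a better chatbot”; anything like that has to be bolted on afterwards.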
Right, I’m thinking the same. But that is still freeing up research engineer time, making the project go faster.
Mesa-optimisation and Basic AI Drives are dangers here. And GPT-4 isn’t all that far from being capable of replicating itself autonomously when instructed to do so.
It makes the project go somewhat faster, but from the software people I’ve talked to, not by that much. There are plenty of other bottlenecks in the development process. For example, the “human reinforcement” part of the process (RLHF) is necessarily done at human scale, even if AI can speed things up around the edges.
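Concretely, the bottleneck looks something like this sketch (the names `model` and `ask_human_to_rank` are made-up stand-ins, not real APIs):

```python
# Sketch of why RLHF stays human-scale: the reward model is trained on
# preference data that only people can produce, one judgment at a time.
from itertools import combinations

def collect_preference_pairs(prompts, model, ask_human_to_rank):
    pairs = []
    for prompt in prompts:
        completions = [model(prompt) for _ in range(4)]
        ranked = ask_human_to_rank(prompt, completions)  # <- the human bottleneck
        for better, worse in combinations(ranked, 2):    # ranked best-to-worst
            pairs.append((prompt, better, worse))
    return pairs
```

You can generate completions as fast as you like; the ranking step runs at the speed of the raters.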
Replicating something that already exists is easy. A printer can “replicate” GPT-4. What you were describing is a completely autonomous upgrade into something new and superior. That is what I give GPT-5 a ~0% chance of achieving.
A printer can’t run GPT-4. What about GPT-6 or GPT-7?
I don’t know whether GPT-6 or GPT-7 will be able to design the next version. I could see it being possible if “designing the next version” just meant cranking up the compute knob and automating the data extraction and training process. But I suspect this would lead to diminishing returns and disappointing results. I find it unlikely that any of the next few versions would make algorithmic breakthroughs unless their structure and training were drastically changed.
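For a rough sense of the diminishing returns I mean, here’s the compute-optimal loss fit from Hoffmann et al. 2022 (the “Chinchilla” paper) sketched in Python; the constants are their published estimates, and fitted loss is only a proxy for capability:

```python
# Diminishing returns from "cranking up the compute knob", per the
# Chinchilla fit: L(N, D) = E + A / N**alpha + B / D**beta,
# where N = parameters and D = training tokens.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def fitted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale tokens with parameters (the roughly 20:1 compute-optimal ratio):
# each 10x in size shaves a smaller slice off the loss.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> fitted loss {fitted_loss(n, 20 * n):.3f}")
```

Each extra order of magnitude buys a smaller absolute drop as the curve approaches the irreducible floor E.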
You don’t expect any qualitative leaps in intelligence from models that are orders of magnitude larger? Even GPT-3.5 → GPT-4 was a big jump (much higher grades on university-level exams). Do you think humans are close to the limit of physically possible intelligence?