It makes the project go somewhat faster, but from the software people I’ve talked to, not by that much. There are plenty of other bottlenecks in the development process. For example, the “human reinforcement” part of the process necessarily runs on a human scale, even if AI can speed things up around the edges.
Replicating something that already exists is easy. A printer can “replicate” GPT-4. What you were describing is a completely autonomous upgrade to something new and superior. That is what I ascribe a ~0% chance of GPT-5 achieving.
Right, I’m thinking the same. But that is still freeing up research engineer time, making the project go faster.
Mesa-optimisation and Basic AI Drives are dangers here. And GPT-4 isn’t all that far off being capable of replicating itself autonomously when instructed to do so.
A printer can’t run GPT-4. What about GPT-6 or GPT-7?
I don’t know whether GPT-6 or GPT-7 will be able to design the next version. I could see it being possible if “designing the next version” just meant cranking up the compute knob and automating the data extraction and training process. But I suspect this would lead to diminishing returns and disappointing results. I find it unlikely that any of the next few versions would make algorithmic breakthroughs, unless its structure and training were drastically changed.
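The “diminishing returns from cranking up the compute knob” intuition can be sketched numerically. This is a toy illustration, not a claim about any actual GPT model: it assumes a Chinchilla-style loss curve L(N, D) = E + A/N^α + B/D^β, with coefficient values taken from the published Hoffmann et al. fit (treat them as illustrative here, since real frontier models need not follow this curve).

```python
# Toy illustration: Chinchilla-style scaling-law loss curve.
# L(N, D) = E + A / N^alpha + B / D^beta, where N = parameter count,
# D = training tokens. Coefficients below are the Hoffmann et al. (2022)
# fitted values, used here purely for illustration.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale parameters and data together by 10x per step
# (roughly 100x more compute per step).
prev = None
for k in range(4):
    n, d = 1e9 * 10**k, 2e10 * 10**k
    cur = loss(n, d)
    if prev is not None:
        # Each 100x jump in compute buys roughly half the previous loss gain.
        print(f"{n:.0e} params, {d:.0e} tokens: loss {cur:.3f}, gain {prev - cur:.3f}")
    prev = cur
```

Each successive order-of-magnitude step buys roughly half the loss improvement of the previous one, which is the shape of the “disappointing results from scaling alone” worry. Whether smaller loss gains also mean smaller capability gains is a separate (and contested) question.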
You don’t expect any qualitative leaps in intelligence from orders-of-magnitude larger models? Even GPT-3.5 → GPT-4 was a big jump (much higher grades on university-level exams). Do you think humans are close to the limit of physically possible intelligence?