I don’t know whether GPT-6 or GPT-7 will be able to design the next version. I could see it being possible if “designing the next version” just meant cranking up the compute knob and automating the data extraction and training process. But I suspect this would lead to diminishing returns and disappointing results. I find it unlikely that any of the next few versions would make algorithmic breakthroughs, unless its structure and training were drastically changed.
You don’t expect any qualitative leaps in intelligence from orders of magnitude larger models? Even GPT-3.5->GPT-4 was a big jump (much higher grades on university-level exams). Do you think humans are close to the limit in terms of physically possible intelligence?