Thanks for this post Greg.
Re your point about scaling, the Michael et al. survey of NLP researchers suggests that researchers don't think scaling will take us all the way there (see Figure 4).
Based on my limited understanding, I agree with you that it seems pretty plausible that scaling takes us to human-level AI and beyond, but the experts seem to disagree, and I'm not sure why.
Interesting. I'll note that this survey was conducted pre-GPT-4 (May–June 2022, before GPT-3.5 was even in widespread use), when (I think) people were still sceptical of LLMs being able to do well on university exams, among many other things. It would be interesting to see a similar survey run post-GPT-4 (I've not been able to find one). I predict it would show a significantly higher percentage agreeing.
In general, I think any survey on AI conducted in the pre-GPT-4 era is now woefully out of date.
I would agree; relying on pre-GPT-4 estimates seems flawed.
Hmm… that isn't exactly the question I'd like the answer to, which is more about scaling + minor incremental improvements + creative prompting.