Can someone please explain why we’re still forecasting the weak AGI timeline? I thought the “sparks” of AGI that Microsoft claimed GPT-4 achieved would already put us past the level of intelligence implied by “weak”.
The answer is that this question is not actually forecasting weak AGI; it is forecasting these specific resolution criteria:
For these purposes we will thus define “AI system” as a single unified software system that can satisfy the following criteria, all easily completable by a typical college-educated human.
Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.
Be able to score at the 75th percentile (as compared to the corresponding year’s human students; this was a score of 600 in 2016) on the full mathematics section of a circa-2015–2020 standard SAT exam, using just images of the exam pages and having fewer than ten SAT exams as part of the training data. (Training on other corpora of math problems is fair game as long as they are arguably distinct from SAT exams.)
Be able to learn the classic Atari game “Montezuma’s Revenge” (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play (see the closely related question).
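For concreteness, here is a minimal sketch of what that last criterion’s budget amounts to, assuming the standard Arcade Learning Environment exposed through Gymnasium/ale-py (the question itself does not mandate any particular harness). Atari 2600 games run at 60 frames per second, so 100 hours of real-time play works out to a budget of 21,600,000 emulator frames. The random policy and the RAM-byte room counter below are illustrative placeholders, not part of the resolution criteria:

```python
# Sketch of the Montezuma's Revenge play-time budget, assuming the
# Arcade Learning Environment via Gymnasium (pip install "gymnasium[atari]").
import ale_py
import gymnasium as gym

gym.register_envs(ale_py)  # registers the ALE/* environment ids (recent Gymnasium)

# Atari 2600 games run at 60 frames per second, so "less than 100 hours
# of real-time play" is a budget of 100 * 3600 * 60 = 21,600,000 frames.
FRAME_BUDGET = 100 * 3600 * 60

# frameskip=1 makes each env.step() consume exactly one emulator frame,
# and the default observation is raw pixels, matching "just visual inputs".
env = gym.make("ALE/MontezumaRevenge-v5", frameskip=1)

obs, info = env.reset(seed=0)
rooms_seen = set()
for _ in range(FRAME_BUDGET):
    action = env.action_space.sample()  # stand-in policy: random actions
    obs, reward, terminated, truncated, info = env.step(action)
    # RAM byte 3 holds the current room id (an assumption drawn from
    # community RAM annotations such as AtariARI, not from the question).
    rooms_seen.add(int(env.unwrapped.ale.getRAM()[3]))
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print(f"Explored {len(rooms_seen)} of 24 rooms within the 100-hour budget.")
```

A random policy will not come anywhere close to visiting all 24 rooms; the point of the criterion is that a learning algorithm has to do the exploration within that fixed frame budget.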