Note that the definition of “fully general AI” in that Metaculus question is considerably weaker than what Open Phil means by “transformative AI.”
For these purposes, we will define “an artificial general intelligence” as a single unified software system that can satisfy all of the following criteria, each easily completable by a typical college-educated human:
Be able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.
Be able to score in the 75th percentile (as compared to the corresponding year’s human students; this was a score of 600 in 2016) on the full mathematics section of a circa-2015-2020 standard SAT exam, using just images of the exam pages and with fewer than ten SAT exams in the training data. (Training on other corpora of math problems is fair game as long as they are arguably distinct from SAT exams.)
Be able to learn the classic Atari game “Montezuma’s Revenge” (based on just visual inputs and standard controls) and explore all 24 rooms within the equivalent of less than 100 hours of real-time play (see the closely related question).
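To make the last criterion’s interaction budget concrete: at the Atari 2600’s standard 60 frames per second, 100 hours of real-time play works out to 100 × 3600 × 60 = 21.6 million emulator frames. The sketch below meters that budget, assuming the usual gymnasium + ale-py API; the random policy is only a placeholder for whatever learning method a candidate system would actually use, and the question itself does not prescribe any particular harness.

```python
# Minimal sketch (not the Metaculus resolution procedure) of the
# "100 hours of real-time play" interaction budget, assuming the
# standard gymnasium + ale-py API and the Atari 2600's 60 fps.
import ale_py
import gymnasium as gym

gym.register_envs(ale_py)  # make the ALE/... environment ids available

FPS = 60                    # Atari 2600 renders 60 frames per second
BUDGET = 100 * 3600 * FPS   # 100 hours of play = 21,600,000 frames

# frameskip=1 so each env.step() consumes exactly one emulator frame,
# keeping the accounting against the budget exact.
env = gym.make("ALE/MontezumaRevenge-v5", frameskip=1)
obs, info = env.reset(seed=0)

frames = 0
while frames < BUDGET:
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    frames += 1
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```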
Thanks, I didn’t read that carefully enough!
Right, to be clear, I think this is (mostly) not your fault.
Unfortunately, others have made this and similar mistakes before, both on other questions and on this specific one.
Some of the onus is obviously on user error, but the rest of us (the forecasting community and the Metaculus platform) should do better at making the intuitive interpretation of the headline question match the question specifications, and vice versa.