Executive summary: François Chollet and Dwarkesh Patel discuss key cruxes in the debate over whether scaling current AI approaches will lead to AGI, with Chollet arguing that more is needed beyond scaling and Patel pushing back on some of Chollet’s claims.
Key points:
Chollet introduces the ARC Challenge as a test of general intelligence that current large language models (LLMs) struggle with, despite the tasks being simple for humans.
Chollet distinguishes between narrow “skill” and general “intelligence”, arguing that LLMs are doing sophisticated memorization and interpolation rather than reasoning and generalization.
Patel counters that with enough scale, interpolation could lead to general intelligence, and that the missing pieces beyond scaling may be relatively easy.
Chollet thinks the hard parts of intelligence, like active inference and discrete program synthesis, are not addressed by the current scaling paradigm.
The author believes Chollet makes a compelling case, and that if he is right it should significantly update people’s views on AI risk and the value of current AI safety work.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.