I agree that it's a significant milestone, or at least it might be. I just read this comment a few hours ago (and the Twitter thread it links to) and that dampens my enthusiasm. 43 million words to solve one ARC-AGI-1 puzzle is a lot.
Also, I want to understand more about how ARC-AGI-2 is different from ARC-AGI-1. Chollet has said that about half of the tasks in ARC-AGI-1 turned out to be susceptible to "brute force"-type approaches. I don't know what that means.
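My best guess at what "brute force" means here (this is my own illustration, not Chollet's definition) is exhaustive search over compositions of simple grid operations until one reproduces the training pairs. A toy Python sketch of that idea, with a made-up task:

```python
# Toy illustration of "brute force" program search on an ARC-like task.
# My own sketch of the general idea, not ARC's or Chollet's actual code.
from itertools import product

# A tiny DSL of grid transformations (grids are tuples of tuples of ints).
def rotate90(g):  # rotate clockwise
    return tuple(zip(*g[::-1]))

def flip_h(g):    # mirror left-right
    return tuple(row[::-1] for row in g)

def flip_v(g):    # mirror top-bottom
    return g[::-1]

PRIMITIVES = [rotate90, flip_h, flip_v]

def search(train_pairs, max_depth=4):
    """Enumerate compositions of primitives until one fits all train pairs."""
    for depth in range(1, max_depth + 1):
        for ops in product(PRIMITIVES, repeat=depth):
            def program(g, ops=ops):  # default arg avoids late binding
                for op in ops:
                    g = op(g)
                return g
            if all(program(x) == y for x, y in train_pairs):
                return program
    return None

# Hypothetical task whose hidden rule is "rotate 180 degrees".
train = [(((1, 2), (3, 4)), ((4, 3), (2, 1)))]
prog = search(train)
print(prog(((5, 6), (7, 8))))  # -> ((8, 7), (6, 5))
```

Nothing in this loop "understands" the rule; it just enumerates programs until one fits. If roughly half of ARC-AGI-1 falls to searches like this (scaled up, with a richer DSL), that would explain why a new version of the benchmark was needed.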
I think it's easy to get carried away with the implications of a result like this when you're surrounded by so many voices saying that AGI is coming within 5 years or within 10 years.
My response to François Chollet's comments on o3's high score on ARC-AGI-1 was more "Oh, that's really interesting!" than some big change to my views on AGI. I have to say, I was more excited about it before I knew it took 43 million words of text and over 1,000 attempts per task.
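Some rough arithmetic on those figures (I'm assuming 1,024 samples per task, which I believe was the reported high-compute setting, and the usual ~0.75 words-per-token rule of thumb; both are my assumptions, not confirmed numbers):

```python
# Back-of-envelope math on the reported o3 high-compute ARC-AGI-1 runs.
# Assumed figures: ~43M words per task, 1,024 samples per task.
words_per_task = 43_000_000
samples_per_task = 1_024
tokens_per_word = 4 / 3          # rough rule of thumb: ~0.75 words per token

words_per_sample = words_per_task / samples_per_task
tokens_per_task = words_per_task * tokens_per_word

print(f"{words_per_sample:,.0f} words per sample")  # ~42,000
print(f"{tokens_per_task:,.0f} tokens per task")    # ~57,000,000
```

So even per attempt, that's on the order of a short novel's worth of text for a single small grid puzzle.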
I still think no one knows how to build AGI and that (not unrelatedly) we don't know when AGI will be built.
Chollet recently started a new company focused on combining deep learning and program synthesis. That's interesting. He seems to think the major AI labs like OpenAI and Google DeepMind are also working on program synthesis, but I don't know how much publicly available evidence there is for this.
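For a concrete (and entirely hypothetical) picture of what "deep learning plus program synthesis" could look like: a learned model proposes candidate programs and a symbolic verifier checks them against the training examples, rather than blindly enumerating everything as in the sketch above. The `propose_programs` function below is a stand-in for a trained model, not a description of Chollet's actual system:

```python
# Toy sketch of "deep learning + program synthesis": a learned model proposes
# candidate programs; a symbolic verifier checks them against train pairs.
import random

DSL = ["rotate90", "flip_h", "flip_v"]

def propose_programs(train_pairs, n=100):
    """Stand-in for a neural proposer: here it just samples random programs.
    A real system would condition on the examples to rank likely programs."""
    for _ in range(n):
        depth = random.randint(1, 4)
        yield [random.choice(DSL) for _ in range(depth)]

def run(program, grid):
    ops = {
        "rotate90": lambda g: tuple(zip(*g[::-1])),
        "flip_h":   lambda g: tuple(row[::-1] for row in g),
        "flip_v":   lambda g: g[::-1],
    }
    for name in program:
        grid = ops[name](grid)
    return grid

def synthesize(train_pairs):
    for prog in propose_programs(train_pairs):
        if all(run(prog, x) == y for x, y in train_pairs):
            return prog  # verified against every training pair
    return None

train = [(((1, 2), (3, 4)), ((4, 3), (2, 1)))]  # rule: rotate 180 degrees
print(synthesize(train))  # e.g. ['rotate90', 'rotate90'] (varies with sampling)
```

The hoped-for advantage is that the learned proposer makes the search tractable on tasks where blind enumeration would blow up combinatorially, while the verifier keeps the output exactly correct.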
I can add Chollet's company to the list of organizations I know of that have publicly said they're doing R&D toward AGI beyond just scaling LLMs. The others I know of:
The Alberta Machine Intelligence Institute and Keen Technologies, both organizations where Richard Sutton is a key person and which (if I understand correctly) are pursuing, at least to some extent, Sutton's "Alberta Plan for AI Research"
Numenta, a company co-founded by Jeff Hawkins, who has made aggressive statements about Numenta's ability to develop AGI in the not-too-distant future using insights from neuroscience (the main insights they think they've found are described here)
Yann LeCun's team at Meta AI, formerly FAIR; LeCun has published a roadmap to AGI, except he doesn't call it AGI
I might be forgetting one or two. I know in the past Demis Hassabis has made some general comments about DeepMind's research related to AGI, but I don't know of any specifics.
My gut sense is that all of these approaches will fail: program synthesis combined with deep learning, the Alberta Plan, Numenta's Thousand Brains Principles, and Yann LeCun's roadmap. But this is just a random gut intuition and not a serious, considered opinion.
I think the idea that we're barreling toward the imminent, inevitable invention of AGI is wrong. The idea is that AGI is so easy to invent, and progress is happening so fast and so spontaneously, that we can hardly stop ourselves from inventing it.
It would be seen as odd to take this view in any other area of technology, probably even among effective altruists. We would be lucky if we were barreling toward imminent, inevitable nuclear fusion or a universal coronavirus vaccine or a cure for cancer or any number of technologies that don't exist yet that we'd love to have.
Why does no one claim these technologies are being developed so spontaneously, so automatically, that we would have to take serious action to prevent them from being invented soon? Why is the attitude instead that progress is hard, success is uncertain, and the road is long?
Given that this is how technology usually works, and given that I don't see any reason for AGI to be easier or faster (in fact, it seems like it should be harder and take longer, since the science of intelligence and cognition is among the least understood areas of science), I'm inclined to guess that most approaches will fail.
Even if the right general approach is found, it could take a very long time to figure out how to actually make concrete progress using that approach. (By analogy, many of the general ideas behind deep learning existed for decades before deep learning started to take off around 2012.)
I'm interested in Chollet's interpretation of the o3 results on ARC-AGI-1, and if there is a genuine, fundamental advancement involved (which, after finding out those details about o3's attempts, I now believe less than I did yesterday), then that's exciting. But only moderately exciting, because the advancement is only incremental.
The story that AGI is imminent and, if we skirt disaster, we'll land in utopia is exciting and engaging. I think we live in a more boring version of reality (but still, all things considered, a pretty interesting one!) where we're still at the drawing-board stage for AGI, people are pitching different ideas (e.g., program synthesis, the Alberta Plan, the Thousand Brains Principles, energy-based self-supervised learning), the way forward is unclear, and we're mostly in the dark about the fundamental nature of intelligence and cognition. Who knows how long it will take us to figure it out.
Interesting, thanks!