In addition, o3 was trained on the public data of ARC-AGI, a dataset of abstract visual reasoning problems in the style of Raven’s Progressive Matrices [52]. Given the large amount of targeted research this benchmark has attracted in recent years, the high scores achieved by o3 should not be considered a reliable metric of general reasoning capabilities.
This take seems to contradict Francois Chollet’s own write-up of the o3 ARC results, where he describes them as:
a genuine breakthrough, marking a qualitative shift in AI capabilities compared to the prior limitations of LLMs. o3 is a system capable of adapting to tasks it has never encountered before
(taken from your reference [52], emphasis mine)
You could write this off as him wanting to talk up the significance of his own benchmark, but I’m not sure that would be right. He has been very publicly sceptical of the ability of LLMs to scale to general intelligence, so this is a kind of concession from him. And he had already laid the groundwork in his Dwarkesh Patel interview to explain away high ARC performance as cheating if a system tackled the problem in the wrong way, cracking it through memorization via an alternative route (e.g. auto-generating millions of ARC-like problems and training on those). He could easily have dismissed the o3 results on those grounds, but chose not to, which made an impression on me (a non-expert trying to decide how to weigh up the opinions of different experts). Presumably he is aware that o3 trained on the public dataset, and doesn’t view that as cheating. The public dataset is small, and the problems are explicitly designed to resist memorization, requiring general intelligence: being told the solution to earlier problems is not supposed to help you solve later ones.
What’s your take on this? Do you disagree with the write-up in [52]? Or do you think I’m mischaracterizing his position? (There are plenty of caveats outside the bit I selectively quoted, so maybe I am.)
The fact that human-level ARC performance could only be achieved at extremely high inference-time compute cost seems significant too. Why would we see inference-time scaling if chain-of-thought consisted of little more than post-hoc rationalization, rather than real reasoning?
For context, I used to be pretty sympathetic to the “LLMs do most of the impressive stuff by memorization and are pretty terrible at novel tasks” position, and I still think this is a good model of the non-reasoning LLMs, but my views have changed a lot since the arrival of the reasoning models, particularly because of the ARC results.
Hi Toby, thanks for the comment.
I have read about some of the work on tackling the ARC dataset, and I am not at all confident that the approaches that perform well have anything to do with generalisable reasoning. The problem remains that there is no validation that the benchmark measures what it claims to. I don’t know what methods o3 used to solve it, and until I do I won’t take OpenAI’s marketing claim that it must be generalisable reasoning at face value.
As to why we’d see inference-time scaling if chain-of-thought consisted of little more than post-hoc rationalization: this is still an open question, but the gains appear to be partly driven simply by spending more compute and generating more tokens per problem. I don’t have the full answer here, but the evidence we do have strongly cautions against just assuming these models are doing what we might describe as ‘genuine reasoning’.
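For what it’s worth, here is a toy sketch of what I mean (purely illustrative, and not a claim about how o3 or any real system actually works): if each independent attempt at a problem succeeds with some fixed probability, and a selection step such as best-of-n picks among the attempts, accuracy rises with inference-time compute even though no individual attempt gets any “smarter”. The per-attempt success rate and the best-of-n setup below are assumptions made up for the illustration.

```python
# Toy illustration: accuracy can scale with inference-time compute purely
# through repeated sampling plus selection, with no per-attempt improvement.
import random

random.seed(0)

P_CORRECT = 0.3   # assumed per-attempt success rate (hypothetical)
TRIALS = 10_000   # Monte Carlo trials per value of n

def best_of_n_accuracy(n: int) -> float:
    """Fraction of trials in which at least one of n independent attempts
    succeeds, assuming a verifier can recognise a correct answer."""
    hits = 0
    for _ in range(TRIALS):
        if any(random.random() < P_CORRECT for _ in range(n)):
            hits += 1
    return hits / TRIALS

for n in (1, 2, 4, 8, 16, 32):
    print(f"n={n:>2}  accuracy ~ {best_of_n_accuracy(n):.3f}")
```

The only point of the sketch is that “performance improves with more inference compute” is compatible with mechanisms as mundane as drawing more samples and selecting among them, so scaling curves alone don’t settle the question of genuine reasoning.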