Thank you for sharing your experience.
The good: it sounds like you talked to a lot of people who were eager to hear a differing opinion.
The bad: it sounds like you talked to a lot of people who had never even heard a differing opinion before and hadn't even considered that a differing opinion could exist.
I have to say, the bad part supports my observation!
When I talk about paying lip service to the idea of being open-minded vs. actually being open-minded, ultimately how you make that distinction is going to be influenced by what opinions you hold. I don't think there is a 100% impartial, objective way of making that distinction.
What I have in mind in this context when I talk about lip service vs. actual open-mindedness is stuff like how a lot of people who believe in the scaling hypothesis and short AGI timelines have ridiculed and dismissed Yann LeCun (for example here, but also so many other times before that) for saying that autoregressive LLMs will never attain AGI. If you want to listen to a well-informed, well-qualified critic, you couldn't ask for someone better than Yann LeCun, no? So, why is the response dismissal and ridicule rather than engaging with the substance of his arguments, "steelmanning", and all that?
Also, when you set the two poles of the argument as people who have 1-year AGI timelines at one pole and people who have 20-year AGI timelines at the opposite pole, you really constrain the diversity of perspectives you are hearing. If you only have vigorous debates with people who already agree with you on the broad strokes, you are hearing criticism about the details but not about the broad strokes. That's a problem with insularity.
you couldn't ask for someone better than Yann LeCun, no?
Really? I've never seen any substantive argument from LeCun. He mostly just presents very weak arguments (and ad hominem) on social media that are falsified within months (e.g. his claims about LLMs not being able to world model). Please link to the best-written one you know of.
I don't think it's a good idea to engage with criticism of an idea in the form of meme videos from Reddit designed to dunk on the critic. Is that intellectually healthy?
I don't think the person who made that video or other people who want to dunk on Yann LeCun for that quote understand what he was trying to say. (Benjamin Todd recently made the same mistake here.) I think people are interpreting this quote hyper-literally and missing the broader point LeCun was trying to make.
Even today, in April 2025, models like GPT-4o and o3-mini don't have a robust understanding of things like time, causality, and the physics of everyday objects. They will routinely tell you absurd things, such as that an event that happened in 2024 was caused by an event in 2025, even while listing the dates of both events. Why, still, in April 2025, don't LLMs consistently understand that causes precede effects and not vice versa?
If anything, this makes what LeCun said in January 2022 seem prescient. Despite a tremendous amount of scaling of training data and training compute, and, more recently, significant scaling of test-time compute, the same fundamental flaw LeCun called out over 3 years ago remains a flaw in the latest LLMs.
All that being said… I think even if LeCun had made the claim that I think people are mistakenly interpreting him as making, and he had turned out to be wrong about it, discrediting him based on that one mistake would be ridiculously uncharitable.
I have to say, the bad part supports my observation!
Steven was responding to this:
The community of people most focused on keeping up the drumbeat of near-term AGI predictions seems insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group's norms), and closed-off to even reasonable, relatively gentle criticism
None of Steven's bullet points support this. Many of them say the exact opposite of this.
Unless I misinterpreted what Steven was trying to say, this supports my observation in the OP about insularity:
There were a number of people, all quite new to the fields of AI and AI safety / alignment, for whom it seems to have never crossed their mind until they talked to me that maybe foundation models won't scale to AGI, and likewise who didn't seem to realize that the field of AI is broader than just foundation models.
How could you possibly never encounter the view that "foundation models won't scale to AGI"? How could an intellectually healthy community produce this outcome?
There's a popular mistake these days of assuming that LLMs are the entirety of AI, rather than a subfield of AI.
If you make this mistake, then you can go from there to either of two faulty conclusions:
(Faulty inference 1) Transformative AI will happen sooner or later [true IMO] THEREFORE LLMs will scale to TAI [false IMO]
(Faulty inference 2) LLMs will never scale to TAI [true IMO] THEREFORE TAI will never happen [false IMO]
I have seen an awful lot of both (1) and (2), including from, e.g., CS professors who really ought to know better (example), and I try to call out both of them when I see them.
You yourself seem mildly guilty of something-like-(2), in this very post. Otherwise you would be asking questions like "how quickly can AI paradigms go FROM obscure and unimpressive arXiv papers that nobody has heard of, TO a highly-developed technique subject to untold billions of dollars and millions of person-hours of investment?", and you'd notice that an answer like "5 years" is not out of the question. (See second half of this comment.)
I'm not sure how you define "imminent" in the OP title, but FWIW, LLM skeptic Yann LeCun says human-level AI "will take several years if not a decade…[but with] a long tail", and LLM skeptic Francois Chollet says 2038-2048.
You had never thought through "whether artificial intelligence could be increasing faster than Moore's law." Should we conclude that AI risk skeptics are "insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group's norms), and closed-off to even reasonable, relatively gentle criticism?"
That seems like a non sequitur, and it reads as a calculated insult rather than a good-faith effort to engage with the substance of my argument.