Thank you for sharing your experience.
The good: it sounds like you talked to a lot of people who were eager to hear a differing opinion.
The bad: it sounds like you talked to a lot of people who had never even heard a differing opinion before and hadn’t even considered that a differing opinion could exist.
I have to say, the bad part supports my observation!
When I talk about paying lip service to the idea of being open-minded vs. actually being open-minded, ultimately how you make that distinction is going to be influenced by what opinions you hold. I don’t think there is a 100% impartial, objective way of making that distinction.
What I have in mind in this context when I talk about lip service vs. actual open-mindedness is stuff like how a lot of people who believe in the scaling hypothesis and short AGI timelines have ridiculed and dismissed Yann LeCun (for example here, but also so many other times before that) for saying that autoregressive LLMs will never attain AGI. If you want to listen to a well-informed, well-qualified critic, you couldn’t ask for someone better than Yann LeCun, no? So, why is the response dismissal and ridicule rather than engaging with the substance of his arguments, “steelmanning”, and all that?
Also, when you set the two poles of the argument as people who have 1-year AGI timelines at one pole and people who have 20-year AGI timelines at the opposite pole, you really constrain the diversity of perspectives you are hearing. If you only have vigorous debates with people who already agree with you on the broad strokes, you are hearing criticism about the details but not about the broad strokes. That’s a problem with insularity.
you couldn’t ask for someone better than Yann LeCun, no?
Really? I’ve never seen any substantive argument from LeCun. He mostly just presents very weak arguments (and ad hominem) on social media that are falsified within months (e.g. his claims about LLMs not being able to world model). Please link to the best written one you know of.
I don’t think it’s a good idea to engage with criticism of an idea in the form of meme videos from Reddit designed to dunk on the critic. Is that intellectually healthy?
I don’t think the person who made that video or other people who want to dunk on Yann LeCun for that quote understand what he was trying to say. (Benjamin Todd recently made the same mistake here.) I think people are interpreting this quote hyper-literally and missing the broader point LeCun was trying to make.
Even today, in April 2025, models like GPT-4o and o3-mini don’t have a robust understanding of things like time, causality, and the physics of everyday objects. They will routinely tell you absurd things, such as that an event that happened in 2024 was caused by an event in 2025, even while listing the dates of both events. Why, in April 2025, do LLMs still not consistently understand that causes precede effects and not vice versa?
If anything, this makes what LeCun said in January 2022 seem prescient. Despite a tremendous amount of scaling of training data and training compute, and, more recently, significant scaling of test-time compute, the same fundamental flaw LeCun called out over three years ago remains a flaw in the latest LLMs.
All that being said… even if LeCun had made the claim that I think people are mistakenly interpreting him as making, and he had turned out to be wrong about it, discrediting him over that one thing would be ridiculously uncharitable.
I have to say, the bad part supports my observation!
Steven was responding to this:
The community of people most focused on keeping up the drumbeat of near-term AGI predictions seems insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group’s norms), and closed-off to even reasonable, relatively gentle criticism
None of Steven’s bullet points support this. Many of them say the exact opposite of this.
Unless I misinterpreted what Steven was trying to say, this supports my observation in the OP about insularity:
There were a number of people, all quite new to the fields of AI and AI safety / alignment, for whom it seems to have never crossed their mind until they talked to me that maybe foundation models won’t scale to AGI, and likewise who didn’t seem to realize that the field of AI is broader than just foundation models.
How could you possibly never encounter the view that “foundation models won’t scale to AGI”? How could an intellectually healthy community produce this outcome?
You had never thought through “whether artificial intelligence could be increasing faster than Moore’s law.” Should we conclude that AI risk skeptics are “insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group’s norms), and closed-off to even reasonable, relatively gentle criticism”?
That seems like a non sequitur, and like a calculated insult rather than a good-faith effort to engage with the substance of my argument.