- Most AI experts are skeptical that scaling up LLMs could lead to AGI.
I don’t think this is true. Do you have a source? My guess is that I wouldn’t consider many of the people “experts”.
- It seems like there are deep, fundamental scientific discoveries and breakthroughs that would need to be made for building AGI to become possible. There is no evidence we’re on the cusp of those happening and it seems like they could easily take many decades.
I think this is a pretty strange take. It seems like basically all progress on AI has involved approximately 0 “deep, fundamental scientific discoveries”, so I think you need some argument for why the trend will change. Alternatively, if you think we have made lots of discoveries and that explains AI progress so far, then you need an argument for why these discoveries will stop. Or, if you think we have made little AI progress since ~2010, then I think most readers would strongly disagree with you.
The source is a report from the Association for the Advancement of Artificial Intelligence (AAAI): https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf

Page 7 discusses who they surveyed:

…we also wanted to include the opinion of the entire AAAI community, so we launched an extensive survey on the topics of the study, which engaged 475 respondents, of which about 20% were students. Among the respondents, academia was given as the main affiliation (67%), followed by corporate research environment (19%). Geographically, the most represented areas are North America (53%), Asia (20%), and Europe (19%). While the vast majority of the respondents listed AI as one of their primary fields of study, there were also mentions of other fields, such as neuroscience, medicine, biology, sociology, philosophy, political science, and economics.
Page 63 discusses the question about scaling:
The majority of respondents (76%) assert that “scaling up current AI approaches” to yield AGI is “unlikely” or “very unlikely” to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.
I have sources for the other specific claims made in the post as well and will provide them on request, but they also should be pretty easy to look up.

I think it’s a pretty normal take. If you want to hear the version from a person who won a Turing Award for their contributions to AI, listen to Yann LeCun talk about it. Here’s a recent representative example: https://www.pymnts.com/artificial-intelligence-2/2025/meta-large-language-models-will-not-get-to-human-level-intelligence/

He’s given lots of talks and interviews where he goes into detail.
Thanks for the link, I haven’t come across that report before.
I think Yann has pretty atypical views for people working on LMs. For example, if you take the reference classes of AI-related Turing award winners or Chief scientist types at AI labs, most are far more bullish on LMs (e.g., Hinton, Bengio, Ilya, Jared Kaplan, Schulman).
The community of people most focused on keeping up the drumbeat of near-term AGI predictions seems insular, intolerant of disagreement or intellectual or social non-conformity (relative to the group’s norms), and closed-off to even reasonable, relatively gentle criticism (whether or not they pay lip service to listening to criticism or perform being open-minded). It doesn’t feel like a scientific community. It feels more like a niche subculture. It seems like a group of people just saying increasingly small numbers to each other (10 years, 5 years, 3 years, 2 years), hyping each other up (either with excitement or anxiety), and reinforcing each other’s ideas all the time. It doesn’t seem like an intellectually healthy community.
Let me repeat something I said in the OP:

My impression is that a lot of people who believe in short AGI timelines (e.g. AGI by January 1, 2030) and who believe in some strong version of the scaling hypothesis (e.g. LLMs will scale to AGI with relatively minor fundamental changes but with greatly increased training compute, inference compute, and/or training data) are in an echo chamber where they just reinforce each other’s ideas all the time.
What might look like vigorous disagreement is, in many cases, when you zoom out, people with broadly similar views arguing around the margins (e.g. AGI in 3 years vs. 7 years; minimal non-scaling innovations on LLMs vs. modest non-scaling innovations on LLMs).
If people stop to briefly consider what a well-informed critic like Yann LeCun has to say about the topic, it’s usually to make fun of him and move on.
It will seem more obvious that you’re right if the people you choose to listen to are the people who broadly agree with you and if you meet well-informed disagreement from people like Yann LeCun or François Chollet with dismissal, ridicule, or hostility. This is a recipe for overconfidence. Taken to an extreme, this approach can lead people down a path where they end up deeply misguided.