I agree there’s logical space for something less than AGI making the investments rational, but I think the gap between that and full AGI is pretty small. That’s a peculiarity of my own world model, though, so not something to bank on.
My interpretation of the survey responses is that selecting “unlikely” when “not sure” and “very unlikely” are also options suggests substantial probability (i.e. > 10%) on the part of the respondents who answered “unlikely” or “not sure.” Reasonable uncertainty is all you need to justify work on something so important if true, and the cited survey seems to provide that.
People vary a lot in how they interpret terms like “unlikely” or “very unlikely” in % terms, so I think >10% is not all that obvious. But I agree that it is evidence they don’t think the whole idea is totally stupid, and that a relatively low probability of near-term AGI is still extremely worth worrying about.
I should link the survey directly here: https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-FINAL.pdf

The relevant question is described on page 66:

“The majority of respondents (76%) assert that ‘scaling up current AI approaches’ to yield AGI is ‘unlikely’ or ‘very unlikely’ to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.”
I frequently shorthand this to a belief that LLMs won’t scale to AGI, but the question is actually broader and encompasses all current AI approaches.
Also relevant for this discussion: pages 64 and 65 of the report describe some of the fundamental research challenges that currently exist in AI capabilities. I can’t emphasize the importance of this enough. It is easy to think a problem like AGI is closer to being solved than it really is when you haven’t explored the subproblems involved or the long history of AI researchers trying and failing to solve those subproblems.
In my observation, people in EA greatly overestimate progress on AI capabilities. For example, many people seem to believe that autonomous driving is a solved problem, when this isn’t close to being true. Natural language processing has made leaps and bounds over the last seven years, but the progress in computer vision has been quite anemic by comparison. Many fundamental research problems have seen basically no progress, or very little.
I also think many people in EA overestimate the abilities of LLMs, anthropomorphizing them and interpreting their outputs as evidence of deeper cognition, while making excuses for and hand-waving away their mistakes and failures, which, where it’s possible to do so, are often fixed manually with a great deal of human labour from annotators.
I think people in EA need to update on:
Current AI capabilities being significantly less than they thought (e.g. with regard to autonomous driving and LLMs)
Progress in AI capabilities being significantly less than they thought, especially outside of natural language processing (e.g. computer vision, reinforcement learning) and especially on fundamental research problems
The number of fundamental research problems and how thorny they are, how much time, effort, and funding has already been spent on trying to solve them, and how little success has been achieved so far