Quick Polls on AI Timelines
I think it would be useful to get a feel for Forum users' AI timelines. There are three questions, two of which are designed to align with questions on a LessWrong survey (from 2023). They are, roughly: the year of artificial general intelligence, the singularity (variously defined as the point beyond which we cannot predict, super-exponential or explosive growth of the economy, etc.), and "crazy." Feel free to define "crazy" as you wish, but some possibilities could be greater than 20% unemployment in most countries, widespread political unrest, widespread loss of confidence in what is true, widespread economic growth exceeding 10% per year[1], your personal plans being disrupted by something related to AI, etc. It would be interesting to see in the comments how people define this. Please use the median year of your distribution (an even chance of happening before or after).
There are 21 locations on each poll, and they correspond to these years (if you comment, it would be helpful for you to put the year in, as the automatic description is not very helpful):
2026
2027
2028
2029
2030
2032
2035
2037
2040
2045
2050 (the middle of the poll range)
2060
2070
2080
2100
2125
2150
2200
2300
later
never
By what year do you think AI will be able to do intellectual tasks that expert humans currently do?
By what year do you think the singularity will occur?
By what year do you think the world will get crazy?
Edit: Now that the polls have closed, I thought I would offer some commentary. The median years for AGI, singularity, and crazy are 2035, 2038.5, and 2035, respectively (if options were continuous, it looks like the AGI median would have been ~2034). LessWrong's medians were 2030 for AGI (sooner) and 2040 for singularity (later). About 15% expect AGI by 2030 or sooner, and separately about 15% expect crazy by then. One thing that surprised me was that, by medians, people expected crazy ~1 year after AGI (~3 years after, when I matched individual forecasts). I, by contrast, expect crazy 6 years before AGI, partly because of Epoch modeling indicating a 10% global economic growth rate before 2029, well before full automation (and, I think, AGI).
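For anyone curious how medians like those above are derived from this kind of poll, here is a minimal sketch. The vote data is made up for illustration (the real ballots are not shown in this post), and the non-numeric options "later" and "never" would need special handling:

```python
import statistics

# The poll's numeric positions, in order (the final two options,
# "later" and "never", are omitted here for simplicity).
year_options = [2026, 2027, 2028, 2029, 2030, 2032, 2035, 2037, 2040, 2045,
                2050, 2060, 2070, 2080, 2100, 2125, 2150, 2200, 2300]

# Hypothetical 0-based positions chosen by voters for one question.
votes = [2, 4, 6, 6, 7, 9, 12]

# Map each vote to its year, then take the median of the mapped years.
median_year = statistics.median(year_options[i] for i in votes)
print(median_year)  # -> 2035 for this made-up sample
```

With an even number of voters, `statistics.median` averages the two middle years, which is how a half-year median like 2038.5 can arise from whole-year options.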
I suspect that EAs working on AI were more likely to vote, and that they would have shorter timelines, so the timelines across all EAs are probably longer than the poll suggests, while the timelines of AI safety EAs alone would be shorter.
- ^ Not recovering from a recession.
- Summary of AGI Polls and Questions by (7 Jan 2026 0:52 UTC; 13 points)
- 's comment on Even after GPT-4, AI researchers forecasted a 50% chance of AGI by 2047 or 2116, depending how you define AGI by (23 Dec 2025 21:38 UTC; 4 points)
- 's comment on How I hope the EA community will respond to the AI bubble popping by (27 Dec 2025 3:45 UTC; -2 points)
This is a cool post, though I think it's kind of annoying not to be able to see the specific year one is voting on without cross-referencing the chart.
Yeah, perhaps this is a feature for polls v3 (v2 is almost done).
To clarify, does our "crazy" vote consider all possible causes of crazy, or just crazy that is caused by / significantly associated with AI?
Good question. For personal planning purposes, I think all causes would make sense. But the title is AI, so maybe just significantly associated with AI? I think these polls are about how the future is different because of AI.
I genuinely don't know how to answer these polls. I find it much easier to think about a shorter timeframe like the next 20 years (although even just that is hard enough) rather than try to predict the future over a timespan of 275+ years.
I find it much easier to say that the creation of AGI (specifically as I define it here, since some people even call o3 "AGI") is extremely unlikely by 2035 (i.e., much less than a 0.01%, or 1-in-10,000, chance), let alone the Singularity.
("Crazy" seems hazy, to the point that it probably needs to be decomposed into multiple different questions to make it a true forecast, although I can respect just asking people for a vague vibe as a casual exercise, even though it won't resolve unambiguously in retrospect.)
My problem with trying to put a median year on AGI is that I have absolutely no idea how to do that. If science and technology continue, indefinitely, to make progress the way they have for the last 100-300 years, then it seems inevitable that humans will eventually invent AGI. Maybe there's a chance it's unattainable for reasons most academics and researchers interested in the subject don't currently anticipate.
For instance, one estimate is that to rival the computation of a single human brain, a computer would need to consume somewhere between 300 times and 300 billion times as much electricity as the entire United States currently does. If that estimate is accurate, and if building AGI requires that much computation and that much energy, then, at the very least, AGI is far less attainable than even some relatively pessimistic and cautious academics and researchers might have guessed. Imagine the amount of scientific and technological progress required to produce that much energy, or to perform that much computation far more energy-efficiently.
Let's assume, for the sake of argument, that computation and energy are not issues, and it's just about solving the research problems. I just went on random.org and randomly generated a number between 1 and 275, to represent the range of years asked about in this polling question. The result I got was 133 years. 133 years from now is 2158. So, can I do better than that? Can I guess a median year that's more likely to be accurate, or at least more likely to be close, than a random number generator? Do I have a better methodology than using random.org? Why should I think so? This is a fundamental question, and it underlies this whole polling exercise, as well as most, if not all, forecasting related to AGI. For instance, is there any scientific or historical evidence that anyone has ever been able to predict, with any accuracy at all, when scientific research problems would be solved, or when fundamentally new technologies would be developed? If so, where's the evidence? Let's cite it to motivate these exercises. If not, why should we think we're in a different situation now, where we are better able to tell how the future will unfold?
The mental picture I have of the long-term future, when I think about forecasting when the fundamental science and technology problems prerequisite to building AGI will be solved, is of a thick fog, where I can see clearly only a little distance in front of me, can see foggily a little bit further, and then everything descends into completely opaque gray-white mist. Is 2158 the median year? I tried random.org again. I got 58, which would be 2083. Which year is more likely to be the median, 2083 or 2158? Are they equally likely? I have no idea how to answer these questions. For all I know, they might be fundamentally impossible to answer. The physicist David Deutsch makes the argument (e.g., in this video at 6:25) that we can't predict the content of scientific knowledge we don't yet have, since predicting the content would be equivalent to knowing it now, and we don't yet know it. This makes sense to me.
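The random-baseline exercise described here can be written down explicitly. This is just a sketch of a uniform draw over the poll's ~275-year span, offset from 2025, mimicking the random.org procedure; it is the commenter's rhetorical baseline, not a serious forecasting method:

```python
import random

def random_baseline_year(span_years=275, base_year=2025, rng=random):
    """Uniformly pick a year in (base_year, base_year + span_years],
    like drawing a number from 1 to 275 on random.org and adding it
    to the current year."""
    return base_year + rng.randint(1, span_years)

# A draw of 133 corresponds to the year 2158; a draw of 58 to 2083.
print(random_baseline_year())
```

Any forecasting methodology worth using should at minimum beat this baseline in expectation, which is the challenge the comment poses.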
We don't yet know what the correct theory of intelligence is. We don't know the content of that theory. The theory that human-like intelligence is just current-generation deep neural networks scaled up 1,000x would imply a close median year for AGI. Other theories of intelligence would imply something else. If the specific micro-architecture of the whole human brain is what's required for human-like intelligence (or general intelligence), then that implies AGI is probably quite far away, since we don't yet know that micro-architecture and don't yet have the tools in neuroscience to find it out. Even if we did know it, reconstructing it in a simulation would pose its own set of scientific and technological challenges. Since we don't know what the correct theory of intelligence is, we don't know how hard it will be to build an intelligence like our own using computers, and therefore we can't predict when it will happen.
My personal view that AGI is extremely unlikely (much less than 0.01% likely) before the end of 2035 comes from my beliefs that 1) human-like intelligence is definitely not current-gen deep neural networks scaled up 1,000x, 2) the correct theory of intelligence is not anything nearly so simple or easy (e.g., if AGI could have been achieved with symbolic AI, it probably would have been, long ago), and 3) it's extremely unlikely that all the necessary scientific discoveries and technological breakthroughs, from fundamental theory to practical implementation, will happen within the next ~9 years. Scientists, philosophers, and AI researchers have been trying to understand the fundamental nature of intelligence for a long time. The foundational research for deep learning goes back around 40 years, and it built on research that's even older than that. Today, if you listen to ambitious AI researchers like Yann LeCun, Richard Sutton, François Chollet, and Jeff Hawkins, each is confident in a research roadmap to AGI, but they are four completely different roadmaps based on completely different ideas. So, it's not as if the science and philosophy of intelligence is converging toward any particular theory or solution.
That's a long, philosophical answer to this quick poll question, but I believe that's the crux of the whole matter.
25 years seems about right to me, but with huge uncertainty.
First, I have zero expertise here and am rubbish at prediction.
I don't think LLMs will get there, but something else probably will after that, though maybe not in the very near future. I have a strong (perhaps too strong) feeling that the complexities of the human brain in forward planning / task stacking and truly creative thought might be further away than we think.
I also think there's likely to be a warning shot and then the kind of political backlash that could even slow things down 10 years or so.
Slow things down 10 to how many years?
sorry edited
I'm using a combination of scenarios in the post: one or more of these happen significantly before AGI.
Help me make sure I'm understanding this right. You're at position #4 from left to right, so this means 2029 according to your list. So, this means you think there's a 50% chance of a combination of the "crazy" scenarios happening by 2029, right?
Unfortunately, the EA Forum polls software makes it hard to ask certain kinds of questions. Your prediction is listed as "70% 2026", but that's just an artifact of the poll software.
To make it clear to readers what people are actually predicting, and to make sure people giving predictions understand the system properly, you might want to add instructions for people to say something like "50% chance the Year of Crazy happens by 2029" at the top of their comments. That would at least save readers the trouble of cross-referencing the list for every single prediction.
I tried to do a poll on people's AI bubble predictions and I ran into a similar issue with the poll software displaying the results confusingly.
Yes, one or more of the "crazy" things happening by 2029. Good suggestion: I have edited the post and my comments to include the year.
Though I think we could get explosive economic growth with AGI or even before, I'm going to interpret this as explosive physical growth: that we could double physical resources every year or less. I think that will take years after AGI, to, e.g., crack robotics / molecular manufacturing.
Extrapolating the METR graph here <https://www.lesswrong.com/posts/6KcP7tEe5hgvHbrSF/metr-how-does-time-horizon-vary-across-domains> suggests a superhuman coder soon, but I think it's going to take years after that for the tasks that are slower on that graph, and many tasks are not even on that graph (despite the speedup from having a superhuman coder).
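The extrapolation gestured at here can be sketched as follows. All numbers are illustrative placeholders rather than values read off the METR graph, and the constant-doubling-time assumption is itself a strong one:

```python
import math

def years_until_horizon(current_hours, target_hours, doubling_months):
    """Years until an exponentially growing task time horizon reaches
    a target, assuming a constant doubling time."""
    doublings = math.log2(target_hours / current_hours)
    return doublings * doubling_months / 12

# Illustrative only: going from a 2-hour horizon to a 128-hour horizon,
# doubling every 7 months, is 6 doublings, i.e. 3.5 years.
print(years_until_horizon(current_hours=2, target_hours=128, doubling_months=7))
```

The point of the comment survives the sketch: domains with longer doubling times (the "slower" curves on the graph) reach any given horizon years later, even under the same exponential model.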
For which event? I'm not seeing you on the poll above.
Ah... now I see you above, and I realized I could mouse over: it is year of crazy. So you think the world will get crazy two years after AGI.