Yeah, I mean, I guess one way to try to quantify this is when you expect, I don’t know, we often talk about big acceleration, economic growth. One way to quantify is when do you expect, maybe US GDP growth, maybe global GDP growth to be faster than 5% per year for a couple of years in a row. Maybe that’s one way to think about it. And then you can think about what is your median timeline until that happens. I think if you think about like that, I would maybe say more than 30 years or something. Maybe a bit less than 40 years by this point. So 35. Yeah. And I’m not sure, but I think you [Matthew Barnett] might say like 15 or 20 years.
Thanks for sharing, Ben! Lots of interesting resources.
I liked Epoch After Hours’ podcast episode Is it 3 Years, or 3 Decades Away? Disagreements on AGI Timelines with Ege Erdil and Matthew Barnett (linkpost). Ege has much longer timelines than the ones you seem to endorse (see the text I bolded above), and is well informed. He is the first author of the paper on Epoch AI’s compute-centric model of AI automation, which was announced on 21 March 2025.
Relatedly, in 2023 the median expert put the median date of full automation at 2073.
I remain open to betting up to 10 k$ against short AI timelines. I understand this does not work for people who think doom or utopia is certain soon after AGI, but I would say that is a super extreme view. It also reminds me of religious unbettable or unfalsifiable views. Banks may offer loans on better terms, but, as long as my bet has positive expected value for the other side, one should take bank loans until they become marginally neutral, and then also take my bet.
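The loan-versus-bet logic can be made concrete with a toy expected-value calculation. Only the 10 k$ stake cap comes from the comment; the even odds, the counterparty's probability, and the resolution condition are hypothetical assumptions for illustration:

```python
def expected_profit(p_agi_by_resolution: float, stake: float, odds: float = 1.0) -> float:
    """Expected profit for someone betting on short AI timelines.

    Hypothetical terms: the short-timelines bettor wins `odds * stake`
    if AGI arrives by the resolution date, and loses `stake` otherwise.
    """
    return p_agi_by_resolution * odds * stake - (1 - p_agi_by_resolution) * stake


# A bettor assigning 70 % to AGI by resolution sees positive expected value,
# so financing the stake at a sufficiently low loan rate can still be worth it.
print(expected_profit(0.7, 10_000))  # positive expected profit

# A bettor assigning 40 % sees negative expected value and should decline.
print(expected_profit(0.4, 10_000))  # negative expected profit
```

On these assumed terms, the bet is attractive to the short-timelines side exactly when their probability exceeds 1 / (1 + odds), which is why declining it is some evidence about one's real credence.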
I think Ege is one of the best proponents of longer timelines, and I link to that episode in the article.
I don’t put much stock in the forecasts of the AI researchers the graph is from. I see the skill of forecasting as very different from the skill of being a published AI researcher. A lot of their forecasts also seem mutually inconsistent. A bit more discussion here: https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/
Financially, I’m already heavily exposed to short AI timelines via my investments.
Then what was the point of quoting Sam Altman, Dario Amodei, and Demis Hassabis’ timelines at the beginning of your article?
The section of the post “When do the ‘experts’ expect AGI to arrive?” suffers from a similar problem: downplaying expert opinion when it challenges the thesis and playing up expert opinion when it supports the thesis. What is the content and structure of this argument? It just feels like a restatement of your personal opinion.
I also wish people would stop citing Metaculus for anything. Metaculus is not a real prediction market. You can’t make money on Metaculus. You might as well just survey people on r/singularity.