I think Ege is one of the best proponents of longer timelines, and I link to that episode in the article.
I don’t put much stock in the forecasts from the AI researchers the graph is based on. I see the skill of forecasting as very different from the skill of being a published AI researcher. A lot of their forecasts also seem inconsistent with one another. A bit more discussion here: https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/
Financially, I’m already heavily exposed to short AI timelines via my investments.
Thank you!
This post just points out that the AI 2027 article is an attempt to flesh out a particular scenario rather than an argument for short timelines – a characterisation the authors of AI 2027 would agree with.
Yes, I explicitly wanted to point out that AI can be useful to science beyond LLMs.
I agree it’s not having flashes of insight, but I also think people underestimate how useful brute-force problem solving could be. I expect AI to become useful to science well before it has ‘novel insights’ in the way we imagine genius humans to have them.
I do say it increased the productivity of ‘top’ researchers, and this is also clarified via the link. (To my mind, that makes it more impressive, since it was adding value even to the best researchers.)
20% more prototypes and 40% more patents sounds pretty meaningful.
I was just trying to illustrate that AI is already starting to contribute to scientific productivity in the near term.
Productivity won’t keep increasing until something more like a fully automated scientist is created (which we clearly don’t have yet).
I’m not sure I follow. No one is claiming that AI can already do these things – the claim is that if progress continues, then you could reach a point where AI accelerates AI research, and from there you get to something like ASI, and from there space colonisation. To argue against that, you need to show that the rate of progress is insufficient to get there.