Those AI researcher forecasts are problematic: it just doesn't make sense to put the forecast for when AIs can do any task and the forecast for when they can do any occupation so far apart. It suggests the respondents aren't thinking carefully. That is a principled reason to pay more attention to both skeptics and boosters who actually put in the work to make their views clear, internally coherent, and convincing.
I agree that the enormous gap of 69 years between High-Level Machine Intelligence and Full Automation of Labour is weird and calls the whole thing into question. But I think all AGI forecasting should be called into question anyway. Who says human beings should be able to predict when a new technology will be invented? Who says human beings should be able to predict when the new science required to invent a new technology will be discovered? Why should we think forecasting AGI is possible at all, beyond a wild guess?
I don't see a lot of rigour, clarity, or consistency in any AGI forecasting. For example, Dario Amodei, the CEO of Anthropic, predicted in mid-March 2025 that by mid-September 2025, 90% of all code would be written by AI. When I brought this up on the EA Forum, the only response I got was a denial that he ever made this prediction. He clearly did make it, and even Amodei himself doesn't deny that, although I think he's trying to spin it in a dishonest way. If, when a prediction about progress toward AGI is falsified, the response is simply to deny the prediction was ever made, despite it being on the public record and discussed well in advance, what hope is there for AGI forecasting? Anyone can say anything they want at any time, and no scrutiny will be applied.
Another example that bothers me is the economist Tyler Cowen saying in April 2025 that he thinks o3 is AGI. Cowen isn't nearly as central to the AGI debate as Amodei, but he has been on Dwarkesh Patel's podcast to discuss AGI, and he is held in high regard by many people who think seriously about the prospect of near-term AGI. I haven't really seen anyone criticize Cowen's claim that o3 is AGI, although I may simply have missed it. If you can declare an AI system to be AGI whenever you feel like it, then you can declare your prediction correct when the time rolls around, no matter what happens.
Edit: I don't want to get lost in the sauce here, so I should add that I totally agree it's far more interesting to listen to people who go to the trouble of thinking their views through clearly and expressing them well. Just stating a number feels much less meaningful by comparison. I found this recent video by an academic AI researcher, Edan Meyer, wonderful in that respect:
The point of view Meyer presents seems very similar to what the Turing Award-winning pioneer of reinforcement learning Richard Sutton believes, but the video is by far the clearest and most succinct statement of that sort of reinforcement learning-influenced viewpoint I've seen so far. I find interviews with Sutton fascinating, but the way he talks is a bit more indirect and enigmatic.
I also find Yann LeCun (another Turing Award winner, for his pioneering contributions to deep learning) to be a compelling speaker on this topic. Many people who believe in near-term AGI from scaling LLMs seem to have turned LeCun into a sort of enemy figure in their minds, probably because his style is confrontational and he argues against their views with confidence and force. I often see people unfairly misrepresent and caricature what LeCun has to say, when they should instead listen carefully, interpret generously, and engage respectfully with the substance. Dismissing your most well-qualified critics out of hand is a great way to end up wrong and woefully overconfident.
I find Sutton's and LeCun's predictions about the timing of AGI and human-level AI somewhat interesting, but far less interesting than what they have to say about the design principles of intelligence, which is fascinating and feels incredibly important. Their timing predictions are pretty much the least interesting part.
Edit: Zvi Mowshowitz has criticized both Amodei's and Cowen's claims.