When you survey AI experts or superforecasters about AGI, you tend to get dramatically more conservative opinions. One survey of AI experts found that the median expert assigns only a 50% probability to AI automating all human jobs by 2116. A survey of superforecasters found the median superforecaster assigns a 50% probability to AGI being developed by 2081.
Dario Amodei also said on Dwarkesh Patel's podcast 1 year and 8 months ago that we would have something that sure sounded a lot like AGI in 2-3 years, and now, 1 year and 8 months later, it seems like he's pushed back that timeline to 2-3 years from now. This is suspicious.
If you look at the history of Tesla and fully autonomous driving, there is an absurd situation in which Elon Musk has said pretty much every year from 2015 to 2025 that full autonomy is 1 year away, will be solved "next year", or will arrive by the end of the current year. Based on this, I have a strong suspicion that tech CEOs update their predictions so that an AI breakthrough is always the same amount of time away from the present moment, even as time progresses.
I discuss expert views here. I don't put much weight on the superforecaster estimates you mention at this point, because they were made in 2022, before the dramatic shortening in timelines due to ChatGPT (let alone reasoning models).
They also (i) made compute forecasts that were very wrong, (ii) don't seem to know that much about AI, and (iii) were selected for expertise in forecasting near-term political events, which might not generalise very well to longer-term forecasting of a new technology.
I agree we should consider the forecast, but I think it's ultimately pretty weak evidence.
The AI experts survey also found a 25% chance of AI that "can do all tasks better than a human" by 2032. I don't know why they think it'll take so much longer to "automate all jobs". It seems likely they're just not thinking about it very carefully (especially since they estimate a ~50% chance of an intelligence explosion starting after AI can do "all tasks"); or it could be because they think there will be a bunch of jobs where people have a strong preference for a human to be in them (e.g. priest, artist), even if AI is technically better at everything.
The AI experts have also been generally too pessimistic: for example, in 2023 they predicted that AI couldn't do simple Python programming until 2025, though it could probably already do that at the time. I expect their estimates in the next survey will be shorter again. And they're also not experts in forecasting.
I don't think this is an accurate summary of Dario's stated views. Here's what he said in 2023 on the Dwarkesh podcast:
Dwarkesh Patel (00:27:49 – 00:27:56):
When you add all this together, what does your estimate of when we get something kind of human level look like?
Dario Amodei (00:27:56 – 00:29:32):
It depends on the thresholds. In terms of someone looks at the model and even if you talk to it for an hour or so, it's basically like a generally well educated human, that could be not very far away at all. I think that could happen in two or three years.
Here's what he said in a statement in February:

Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage – a "country of geniuses in a datacenter" – with the profound economic, societal, and security implications that would bring.
These are different ideas, so I think it would be reasonable to have different timelines for "if you talk to it for an hour or so, it's basically like a generally well educated human" and "a country of geniuses in a datacenter." Nevertheless, there's substantial overlap between these timelines, made 18 months apart; he also uses language that signals some uncertainty at both points. I don't think this is particularly suspicious: it seems pretty consistent to me.
I agree those two statements don't obviously seem inconsistent, though independently it seems to me Dario probably has been too optimistic historically.
To me, "a generally well-educated human" and a "highly intelligent" person sound like more or less the same thing. If, in his mind, there is some clear-cut difference between the two, I haven't seen him explain this difference anywhere (and I've listened to and read a lot of his words).
They seem quite different to me: one is about AIs being able to talk like a smart human, and the other is about their ability to actually do novel scientific research and other serious intellectual tasks.