I think there are good arguments here. I think, though, that starting with the opinions of company CEOs is a weak opening. These guys are just outrageously motivated, as company heads, to state timelines as short as possible.
Add to that the increasing race dynamics, with companies vying for investment, and from a business perspective they almost “have” to say these things for the good of their investors. It’s their job to overhype.
We don’t listen very seriously to Crypto CEOs on the trajectory of Crypto, or Tobacco CEOs on the safety of tobacco, or Elon Musk on the trajectory of Tesla and SpaceX (usually hugely overoptimistic). Why, then, do we listen to AI company CEOs about AGI timelines? Perhaps we are lulled by their intelligence and ability, but that can never outweigh the sheer weight of incentives and even the very nature of their position.
There are plenty of forecasters and AI experts outside of companies that we can look to for this; I think we should perhaps consider the statements of AI company CEOs a weak data point.
When you survey AI experts or superforecasters about AGI, you tend to get dramatically more conservative opinions. One survey of AI experts found that the median expert assigns only a 50% probability to AI automating all human jobs by 2116. A survey of superforecasters found the median superforecaster assigns a 50% probability to AGI being developed by 2081.
Dario Amodei also said on Dwarkesh Patel’s podcast, 1 year and 8 months ago, that we would have something that sure sounded a lot like AGI in 2-3 years; and now, 1 year and 8 months later, it seems like he’s pushed that timeline back to 2-3 years from now. This is suspicious.
If you look at the history of Tesla and fully autonomous driving, there is an absurd situation in which Elon Musk has said pretty much every year from 2015 to 2025 that full autonomy is 1 year away, or will be solved “next year”, or by the end of the current year. Based on this, I have a strong suspicion that tech CEOs update their predictions so that an AI breakthrough is always the same amount of time away from the present moment, even as time progresses.
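To make the arithmetic concrete, here is a minimal sketch of the pattern I mean: a claim that stays “2-3 years away” translates into a calendar target that slides forward with the date of the statement. The statement dates below are my own rough assumptions (around August 2023 for the podcast and February 2025 for the later statement), not figures from the discussion.

```python
# A rough illustration of the "always 2-3 years away" pattern described above.
# The statement dates are assumptions, not established facts.
from datetime import date

statements = {
    "Dwarkesh podcast (assumed Aug 2023)": date(2023, 8, 1),
    "Later written statement (assumed Feb 2025)": date(2025, 2, 1),
}

HORIZON_YEARS = (2, 3)  # a constant "two or three years away" claim

for label, said_on in statements.items():
    earliest = said_on.year + HORIZON_YEARS[0]
    latest = said_on.year + HORIZON_YEARS[1]
    print(f"{label}: implies roughly {earliest}-{latest}")

# Prints:
# Dwarkesh podcast (assumed Aug 2023): implies roughly 2025-2026
# Later written statement (assumed Feb 2025): implies roughly 2027-2028
```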
I discuss expert views here. I don’t put much weight on the superforecaster estimates you mention at this point, because they were made in 2022, before the dramatic shortening in timelines due to ChatGPT (let alone reasoning models).
They also (i) made compute forecasts that were very wrong, (ii) don’t seem to know that much about AI, and (iii) were selected for expertise in forecasting near-term political events, which might not generalise very well to longer-term forecasting of a new technology.
I agree we should consider the forecast, but I think it’s ultimately pretty weak evidence.
The AI expert survey also found a 25% chance of AI that “can do all tasks better than a human” by 2032. I don’t know why they think it’ll take so much longer to “automate all jobs” – it seems likely they’re just not thinking about it very carefully (especially since they estimate a ~50% chance of an intelligence explosion starting after AI can do “all tasks”); or it could be because they think there will be a bunch of jobs where people have a strong preference for a human to be in them (e.g. priest, artist), even if AI is technically better at everything.
The AI experts have also been generally too pessimistic; e.g. in 2023 they predicted that AI wouldn’t be able to do simple Python programming until 2025, though it could probably already do that at the time. I expect their answers in the next survey will be shorter again. And they’re also not experts in forecasting.
I don’t think this is an accurate summary of Dario’s stated views. Here’s what he said in 2023 on the Dwarkesh podcast:
Dwarkesh Patel (00:27:49 − 00:27:56):
When you add all this together, what does your estimate of when we get something kind of human level look like?
Dario Amodei (00:27:56 − 00:29:32):
It depends on the thresholds. In terms of someone looks at the model and even if you talk to it for an hour or so, it’s basically like a generally well educated human, that could be not very far away at all. I think that could happen in two or three years.
Here’s what he said in a statement in February:
Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.
These are different ideas, so I think it would be reasonable to have different timelines for “if you talk to it for an hour or so, it’s basically like a generally well educated human” and “a country of geniuses in a datacenter.” Nevertheless there’s substantial overlap between the two timelines, given that they were stated 18 months apart; he also uses language that signals some uncertainty at both points. I don’t think this is particularly suspicious; it seems pretty consistent to me.
I agree those two statements don’t obviously seem inconsistent, though independently it seems to me Dario probably has been too optimistic historically.
To me, “a generally well-educated human” and a “highly intelligent” person sound like more or less the same thing. If, in his mind, there is some clear-cut difference between the two, I haven’t seen him explain this difference anywhere (and I’ve listened to and read a lot of his words).
They seem quite different to me: one is about AIs being able to talk like a smart human, and the other is about their ability to actually do novel scientific research and other serious intellectual tasks.
“We don’t listen to … Tobacco CEOs on the safety of tobacco”
Tbf we might have done if Tobacco CEOs had said that tobacco products were very harmful.
That’s true. If CEOs doubled down on things that were against the interests of their company, I would listen to them intently.
Here we’re also talking about capabilities rather than harm. If you want to find out how fast cars will be in 5 years, asking the auto industry seems like a reasonable move.
Is it? Wouldn’t you expect the auto industry to have incentives to exaggerate their possible future accomplishments in developing faster cars, because that has a direct influence on how much governments will prioritise them as a means of transport, subsidise R&D, etc.?
I wouldn’t totally defer to them, but I wouldn’t totally ignore them either. (And this is mostly beside the point, since overall I’m critical of using their forecasts and my argument doesn’t rest on this.)
I wouldn’t consider car company CEOs a serious data point here, for the same reasons. I agree it seems like a reasonable move, but I don’t think it actually is.
Asking workers and technicians within companies, especially off the record, is something I would consider a useful data point, although still biased of course.
I would have thought there might even be data on the accuracy of industry head predictions, because there would be a lot of news sources to look back on which could now be checked for accuracy. Might have a look.