Metaculus Predicts Weak AGI in 2 Years and AGI in 10
Just a quick update on predicted timelines. Obviously there’s no guarantee that Metaculus is reliable, and you should look at other sources as well, but I find this concerning.
Weak AGI is now predicted in a little over two years:
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
AGI predicted in about 10: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
Also, these are the predicted dates by which these systems will be publicly known, not the dates by which they will exist. Things are getting crazy.
Even though Eliezer claims there is no fire alarm for AGI, perhaps this is the fire alarm?
Might as well make an alternate prediction here:
There will be no AGI in the next 10 years. There will be an AI bubble over the next couple of years as new applications for deep learning proliferate, creating a massive hype cycle similar to the dot-com boom.
This bubble will die down or burst when people realize the limitations of deep learning in domains that lack gargantuan datasets. It will fail to take hold in domains where errors cause serious damage (see the unexpected difficulty of self-driving cars). Like with the burst of the dot-com bubble, people will continue to use AI a lot for the applications that it is actually good at.
If AGI does occur, it will be decades away at least, and require further conceptual breakthroughs and/or several orders of magnitude higher computing power.
I think, in hindsight, the Fire Alarm first started ringing in a DeepMind building in 2017. Or perhaps an OpenAI building in 2020. It’s certainly going off all over Microsoft now. It’s also going off in many other places. To some of us it is already deafening. A huge, ominous distraction from our daily lives. I really want to do something to shut the damn thing off.
Can someone please explain why we’re still forecasting the weak AGI timeline? I thought the “sparks” of AGI that Microsoft claimed GPT-4 achieved would already exceed the level of intelligence implied by “weak”.
The answer is that this question is not actually forecasting weak AGI in the abstract; it’s forecasting these specific resolution criteria:
This isn’t personal, but I downvoted because I think Metaculus forecasts about this aren’t more reliable than chance, and people shouldn’t defer to them.
Curious what you mean by this. One version of chance is “uniform prediction of AGI over future years”, which obviously seems worse than Metaculus, but perhaps you meant a more specific baseline?
Personally, I think forecasts like these are rough averages of what informed individuals would think about these questions. Yes, you shouldn’t defer to them, but it’s also useful to recognize how that community’s predictions have changed over time.
Hi Gabriel,
I am not sure how much to trust Metaculus in general, but I do not think it is obvious that their AI predictions should be ignored. For what it’s worth, Epoch attributed a weight of 0.23 to Metaculus in the judgement-based forecasts of their AI Timelines review. Holden, Ajeya and AI Impacts got smaller weights, whereas Samotsvety got a higher one:
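To make the aggregation concrete, a judgement-based weighted forecast of this kind is just a weight-normalised mixture of the individual forecasts. Below is a minimal Python sketch: the 0.23 weight on Metaculus is the figure from Epoch’s review, but the other weights and all of the probabilities are illustrative placeholders, not Epoch’s actual inputs.

```python
# Minimal sketch of a weighted, judgement-based aggregate of AI timeline forecasts.
# Only the 0.23 weight on Metaculus comes from Epoch's review; every other weight,
# and all of the probabilities, are made-up placeholders for illustration.
weights = {
    "Metaculus": 0.23,    # from Epoch's AI Timelines review
    "Samotsvety": 0.30,   # placeholder (higher than Metaculus)
    "Holden": 0.17,       # placeholder (lower than Metaculus)
    "Ajeya": 0.17,        # placeholder (lower than Metaculus)
    "AI Impacts": 0.13,   # placeholder (lower than Metaculus)
}

# Hypothetical P(AGI by some fixed year) from each source, for illustration only.
p_agi = {
    "Metaculus": 0.65,
    "Samotsvety": 0.60,
    "Holden": 0.50,
    "Ajeya": 0.45,
    "AI Impacts": 0.40,
}

total = sum(weights.values())
aggregate = sum(weights[s] * p_agi[s] for s in weights) / total
print(f"Weighted aggregate probability: {aggregate:.2f}")
```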
However, one comment I made here may illustrate what Guy presumably is referring to:
Agree that they shouldn’t be ignored. By “you shouldn’t defer to them,” I just meant that it’s useful to also form one’s own inside view models alongside prediction markets (perhaps comparing to them afterwards).
What I mean is “these forecasts give no more information than flipping a coin to decide whether AGI would come in time period A vs. time period B”.
I have my own, rough, inside views about if and when AGI will come and what it would be able to do, and I don’t find it helpful to quantify them into a specific probability distribution. And there’s no “default distribution” here that I can think of either.
Gotcha, I think I still disagree with you for most decision-relevant time periods (e.g. I think they’re likely better than chance on estimating AGI within 10 years vs. 20 years).
Remember that AGI is a pretty vague term by itself, and some people are forecasting on the specific definitions under the Metaculus questions. This matters because those definitions don’t require anything inherently transformative, like being able to automate all labour or scientific research. Rather, they involve a bunch of technical benchmarks that aren’t that important on their own but are presumed to correlate with the transformative stuff we actually care about.
See also the recent Lex Fridman Twitter poll [H/T Max Ra]: