6-Year Decrease in the Metaculus AGI Prediction
Metaculus now predicts that the first AGI[1] will become publicly known in 2036. This is a massive update, 6 years sooner than the previous estimate. I expect this update is based on recent papers[2]. It suggests that it is important to be prepared for short timelines, for example by accelerating alignment efforts insofar as this is possible.
[1] Some people may feel that the criteria listed aren’t quite what is typically meant by AGI, and they have a point. At the same time, I expect this is the result of objective criteria being needed for these kinds of competitions. In any case, if there were an AI that achieved this bar, the implications would surely be immense.
[2]
Be careful interpreting this question. Its resolution criteria require a single unified system to solve four tasks that are already almost solved, but solving them would not be sufficient for automating all human work.
Yes, these are the criteria below. I don’t know much about AI, but this seems different from what many people would consider to be AGI.
I note that it was actually at its low point of 2032 in Aug 2020; it then went up as high as 2047 by Oct 2021. What is the cause of the volatility? Was the increase due to the absence of a GPT-4 announcement? (And was the subsequent decrease due to PaLM being seen as the equivalent of GPT-4?)
I note that in November 2020, the Metaculus community’s prediction was that AGI would arrive even sooner (2032, versus the current 2036 prediction). So if we’re taking the Metaculus prediction seriously, we also want to understand why the forecasters on Metaculus have longer timelines now than they did a year and a half ago.
I note that 60 extra forecasters joined this question over the last few days, representing about a 20% increase in its forecaster population.
This makes me hypothesize that the recent drop in the forecasted timeline is due to a flood of attention on this question, driven by hype from the papers, the associated panic on LW, and signal-boosting from SlateStarCodex. Perhaps the perspectives of those forecasters represent a thoughtful update in response to those publications. Or perhaps it represents panic and following the crowd. Since this is a long-term forecast, with no financial incentives, on a charged question with in-group signaling relevance, I frankly just don’t know what to think.
It would be interesting if it were possible to disambiguate:
1. Previous forecasters updated their forecasts toward shorter timelines
vs.
2. New forecasters, who have shorter timelines, entered forecasts on the question for the first time
Both are informative, and in a real-money prediction market both would be equally informative. But on a forecasting platform, where the community prediction aggregates everyone’s individual forecasts, the second could “just” be a composition bias? A toy decomposition is sketched below.
One crude metric: the number of forecasters has gone up about 25% in the last month, from n=284 to n=354.
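To make the composition question concrete, here is a toy sketch (in Python) of how a shift in the community average could be decomposed into the two effects above, if per-forecaster snapshots were available. As far as I know, Metaculus does not publish individual forecasts (and its community prediction is a weighted aggregate rather than a plain mean), so every per-forecaster number below is invented; only the forecaster counts (284 before, 354 after) come from this thread.

```python
# Toy decomposition of a shift in a community-average forecast into
# (a) updates by returning forecasters and (b) a composition effect
# from new entrants. All per-forecaster values are made up for
# illustration; only the counts (284 -> 354) match the thread above.

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Hypothetical snapshots: forecaster id -> predicted AGI year.
before = {f"old{i}": 2042 for i in range(284)}      # old cohort, n=284
after_old = {k: v - 4 for k, v in before.items()}   # returners update 4y sooner
after_new = {f"new{i}": 2030 for i in range(70)}    # 70 entrants, short timelines
after = {**after_old, **after_new}                  # n=354

total_shift = mean(after.values()) - mean(before.values())
# (a) within-forecaster effect: how much the original cohort moved on its own.
within = mean(after_old.values()) - mean(before.values())
# (b) composition effect: the remainder, attributable to who is in the pool.
composition = total_shift - within

print(f"total shift:       {total_shift:+.2f} years")   # -5.58
print(f"returners' update: {within:+.2f} years")        # -4.00
print(f"composition:       {composition:+.2f} years")   # -1.58
```

In this made-up scenario, most of the apparent timeline shortening comes from genuine updates by returning forecasters, with the rest from new entrants. If the real data showed the composition term dominating, that would support the “flood of attention” hypothesis over a thoughtful update.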