The XPT forecast about compute in 2030 still boggles my mind. I’m genuinely confused what happened there. Is anybody reading this familiar with the answer?
FWIW you can see more information, including some of the reasoning, on page 655 (the number printed in the PDF) / 659 (per the PDF viewer’s page counter) of the report. (H/t Isabel.) See also page 214 for the definition of the question.
Some tidbits:
Experts started out much higher than superforecasters, but updated downwards after discussion. Superforecasters updated a bit upward, but by less:
(Those are billions on the y-axis.)
This was surprising to me. I think the experts’ predictions look too low even before updating, and look much worse after updating!
Here is the part of the report that discusses “arguments given for lower forecasts”. (The footnotes contain quotes from people expressing those views.)
Arguments given for lower forecasts (2024: <$40m, 2030: <$110m, 2050: ⩽$200m)
● Training costs have been stable around $10m for the last few years.1326
● Current trend increases are not sustainable for many more years.1327 One team cited this AI Impacts blog post.
● Major companies are cutting costs.1328
● Increases in model size and complexity will be offset by a combination of falling compute costs, pre-training, and algorithmic improvements.1329
● Large language models will probably see most attention in the near future, and these are bottlenecked by availability of data, which will lead to smaller models and less compute.1330
● Not all experiments will be public, and it is possible that the most expensive experiments will not be public.1331
(This last bullet point seems irrelevant to me. The question doesn’t specify that the experiments have to be public, and “In the absence of an authoritative source, the question will be resolved by a panel of experts.”)
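For a rough sense of why the <$110m-by-2030 threshold looks low to me, here is a back-of-the-envelope compounding sketch. The ~$10m baseline comes from the first bullet above; the baseline year and the growth multipliers are assumptions of mine for illustration, not figures from the report.

```python
# Back-of-the-envelope sketch: compound growth of frontier training costs.
# Assumptions (mine, for illustration; not from the XPT report):
#   - ~$10m baseline, per the "stable around $10m" bullet above
#   - baseline year 2022 and the annual multipliers below are hypothetical
baseline_cost_musd = 10
baseline_year = 2022

for growth_per_year in (1.5, 2.0, 3.0):  # hypothetical annual multipliers
    for year in (2024, 2030):
        cost = baseline_cost_musd * growth_per_year ** (year - baseline_year)
        print(f"{growth_per_year}x/yr -> {year}: ~${cost:,.0f}m")

# Even a modest 1.5x/yr gives ~$256m by 2030, above the <$110m threshold;
# staying under it requires sustained growth below roughly 1.35x/year.
```

In other words, the lower forecasts only hold up if the historical growth trend doesn’t just slow but nearly flattens.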
I think this is evidence for a groupthink phenomenon amongst superforecasters. Interestingly, my other experiences talking with superforecasters have also made me update in this direction: they seemed much more groupthinky than I expected, as if they were deferring to each other a lot. Which, come to think of it, makes perfect sense; I imagine that if I were participating in forecasting tournaments, I’d gradually learn to reflexively defer to superforecasters too, since they genuinely would be performing well.
Reminds me of this:
A kind of conservativeness of “expert” opinion that doesn’t correctly appreciate (rapid) exponential growth.
I think it’s also just very difficult for experts to adopt a new paradigm. In transportation, the experts consistently overestimate.
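Here’s a minimal sketch of that failure mode: fit a straight line to the early history of an exponential series and extrapolate. All numbers are synthetic, purely to illustrate the shape of the error.

```python
# Minimal sketch of the conservatism failure mode: linearly extrapolating
# data that is actually growing exponentially. All numbers are synthetic.
import numpy as np

years = np.arange(2015, 2023)
true_values = 2.0 ** (years - 2015)  # doubling every year (synthetic)

# "Expert" forecast: least-squares straight line fit to the observed history.
slope, intercept = np.polyfit(years, true_values, deg=1)

for year in (2026, 2030):
    linear_forecast = slope * year + intercept
    true_value = 2.0 ** (year - 2015)
    print(f"{year}: linear ~{linear_forecast:,.0f} vs exponential ~{true_value:,.0f}")

# The linear forecast grows by a fixed increment per year, so its shortfall
# relative to the exponential compounds: modest at first, then enormous.
```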
Can you share a link to the source of this chart? The current link shows me a jpg and nothing else.
Source is here. (I’ve not read the article—I’ve seen the chart (or variations of it) a bunch of times before and just googled for the image.)
Thanks!