In the report, it says: “A natural question is whether more accurate near-term forecasters made systematically different long-term risk predictions. Figure 4.1 suggests that there is no meaningful relationship between near-term accuracy and long-term risk forecasts.”
It then says: “Overall, our findings challenge the hope that near-term accuracy can reliably identify forecasters with more credible long-term risk predictions.”
One interpretation here (the one I take this report to be offering) is that short-term prediction accuracy doesn't extrapolate to long-term prediction accuracy in general. However, another interpretation that I see as reasonable (maybe somewhat, but not substantially, less so) is merely that superforecasters aren't very good at predicting things that require a lot of technical knowledge (e.g., AI capabilities). After all, as far as I know, very little work has been done to show that superforecasters are actually as good at predictions in technical subjects (almost all of the initial work was done in economics and geopolitics), and maybe there are some object-level reasons to think that they wouldn't be(?)
I'd be interested in hearing more thoughts, or in being corrected if I'm wrong here.
Also: “This research would not have been possible without the support of the Musk Foundation, Open Philanthropy, and the Long-Term Future Fund.” Musk Foundation, huh? Interesting.