I'll be quite interested to learn more about Jeffrey's project once it's further along. I might reach out to you or Jeffrey in a few weeks about that.
Regarding the forecasts, we can have any time range and any topic. I already have a bunch of ideas, but just wanted to see if anything bubbled to mind for you independently which I could add to my list. (It's ok if not!)
I recall when I read Superforecasting a few years ago that forecasts aren't particularly reliable beyond a few years even for Superforecasters (though correct me if I'm wrong/maybe views on that are different now than they were then?).
I guess this might depend on what you mean by "particularly reliable".
My understanding is that there's basically just no good evidence either way regarding how accurate and calibrated forecasts are over long time-scales (at least if we restrict ourselves to relevant kinds of forecasts, e.g. ones made by people who seem to have been genuinely trying rather than just making claims for rhetorical/political effect). But there's a little evidence (from Tetlock) to suggest that accuracy may decline relatively slowly after the first year or so. See in particular the great post How Feasible Is Long-range Forecasting?, footnote 17 there, and posts tagged long-range forecasting. Here's the summary of that post:
How accurate do long-range (≥10yr) forecasts tend to be, and how much should we rely on them?
As an initial exploration of this question, I sought to study the track record of long-range forecasting exercises from the past. Unfortunately, my key finding so far is that it is difficult to learn much of value from those exercises, for the following reasons:
Long-range forecasts are often stated too imprecisely to be judged for accuracy. [More]
Even if a forecast is stated precisely, it might be difficult to find the information needed to check the forecast for accuracy. [More]
Degrees of confidence for long-range forecasts are rarely quantified. [More]
In most cases, no comparison to a "baseline method" or "null model" is possible, which makes it difficult to assess how easy or difficult the original forecasts were. [More]
Incentives for forecaster accuracy are usually unclear or weak. [More]
Very few studies have been designed so as to allow confident inference about which factors contributed to forecasting accuracy. [More]
It's difficult to know how comparable past forecasting exercises are to the forecasting we [at Open Phil] do for grantmaking purposes, e.g. because the forecasts we make are of a different type, and because the forecasting training and methods we use are different. [More]
We plan to continue to make long-range quantified forecasts about our work so that, in the long run, we might learn something about the feasibility of long-range forecasting, at least for our own case. [More]
Thanks for this response!