I vaguely recall hearing something like ‘the skill of developing the right questions to pose in forecasting tournaments is more important than the skill of making accurate forecasts on those questions.’ What are your thoughts on this and the value of developing questions to pose to forecasters?
Yeah, I think Tara Kirk Sell mentioned this on the 80k podcast. I mostly agree, with the minor technical caveat that, if you're asking people to forecast numerical questions, getting the ranges exactly right matters more when you have buckets (as in the JHU Disease Prediction Project that Tara ran, and Good Judgement 2.0), whereas asking people to forecast a full distribution (as on Metaculus) allows the question asker to be more agnostic about ranges. The specific claim I would agree with is something like:
at current margins, getting useful forecasts out is more bottlenecked by skill in question operationalization than by judgemental forecasting skill.
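The bucket-vs-distribution caveat can be made concrete with a toy sketch. Assumptions here are mine, not from the discussion above: a lognormal stand-in for a forecast over case counts, and hypothetical parameters and bucket edges. The point is that once a forecaster submits a full distribution, the asker can recover the probability of any bucket afterwards, so the ranges don't have to be chosen perfectly in advance:

```python
import math

def lognormal_cdf(x, mu, sigma):
    """P(X <= x) for a lognormal forecast with log-scale parameters mu, sigma."""
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def bucket_prob(lo, hi, mu, sigma):
    """Probability the outcome falls in (lo, hi], derived from the distribution."""
    return lognormal_cdf(hi, mu, sigma) - lognormal_cdf(lo, mu, sigma)

# One submitted distribution (hypothetical: median exp(10) ~ 22,000 cases)...
mu, sigma = 10.0, 1.0

# ...can be re-binned into whatever buckets turn out to matter, after the fact:
for lo, hi in [(1e3, 1e4), (1e4, 1e5), (1e5, 1e6)]:
    print(f"P({lo:,.0f} < cases <= {hi:,.0f}) = {bucket_prob(lo, hi, mu, sigma):.3f}")
```

With pre-set buckets, by contrast, a question whose ranges straddle the true outcome badly (e.g. all the probability mass landing in one catch-all bucket) yields much less information, and there is no way to recover finer resolution later.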
I think other elements of the forecasting pipeline plausibly matter even more, which I talked about in my answer to JP’s question.
“The right question” has two components: first, that the thing you’re asking about is related to what you actually want to know; and second, that it’s a clear and unambiguously resolvable target. These two are often in tension with each other.
One clear example is COVID-19 cases—you probably care about total cases much more than confirmed cases, but confirmed cases are much easier to use as a resolution criterion. You can write more complex questions to try to deal with this, but that makes them harder to forecast. Forecasting excess deaths, for example, gets into whether people are more or less likely to die in a car accident during COVID-19, and whether COVID reduction measures also blunt the spread of influenza. And forecasting the retrospective percentage of the population that is antibody-positive runs into issues with sampling, test accuracy, and the timeline for when such estimates are made—not to mention relying on data that might not yet be gathered when you want to resolve the question.