Yeah, I do like your four examples of “just the numbers” forecasts that are valuable: weather, elections, what people believe, and “where is there lots of disagreement?” I’m more skeptical that these are useful rather than merely curiosity-satisfying.
Election forecasts are a case in point. People will usually prepare for all outcomes regardless of the odds. And if you work in politics, deciding who to choose for VP or where to spend your marginal ad dollar, you need models of voter behavior.
The best case for just-the-numbers is probably your point (b), shift-detection. I echo your point that many people seem struck by the shift in AGI risk on the Metaculus question.
I’m worried that in the context of getting high-stakes decision makers to use forecasts, some of the demand for rationales is due to lack of trust in the forecasts.
Undoubtedly some of it is. Anecdotally, though, high-level folks frequently take one (or zero) glances at the calibration chart, nod, and then say “but how am I supposed to use this?”, even on questions I pick to be highly relevant to them, much like the paper I cited, which found that “decision-makers lack interest in probability estimates.”
Even if you’re (rightly) skeptical about AI-generated rationales, I think the point holds for human rationales. One example: Why did DeepMind hire Swift Centre forecasters when they already had Metaculus forecasts on the same topics, as well as access to a large internal prediction market?