It would be interesting to know whether the forecasters with outlier numbers stand by those forecasts on reflection, and, if so, to hear their reasoning. In cases where outlier forecasts reflect genuine insight, how do we capture that insight rather than brushing it aside with the noise? Checking in with those forecasters after their forecasts have been flagged as suspicious by others is a start.
The p(month|year) number is especially relevant, since it is not just an input into the bottom-line estimate but also has direct implications for individual planning. The plan 'if Russia uses a nuclear weapon in Ukraine then I will leave my home to go someplace safer' looks pretty different depending on whether the period of heightened risk, when you will be away from home, is more like 2 weeks or 6 months.