When I did some research on the use of forecasting to support government policymaking, one of the issues I quickly encountered was that, for some questions, a forecast that is accurate at the moment it is made and influential on decision makers can lead to policies which prevent the event from occurring, thus rendering the forecast inaccurate in hindsight. Of course, some decisions are not about preventing an event but rather about responding to it (e.g., preparedness for a hurricane), in which case there's not much of an issue.
I could only skim and keyword search the post and failed to see an emphasis on that, but apologies if I just missed it. Do you think this is less of an issue in EA-relevant forecasting than, e.g., international security policymaking? My extremely underdeveloped intuition has been “probably yes,” but what are your thoughts?
My thoughts are that this problem is, well, not exactly solved, but perhaps solved in practice if you have competent and aligned forecasters, because then you can ask conditional questions which don’t resolve.
Given such-and-such measures, what will the spread of COVID be?
Given the lack of such-and-such measures, what will the spread of COVID be?
Then you can still get forecasts for both, even if you only expect the first to go through.
This does require forecasters to give probabilities even when the question they are forecasting on will never resolve.
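The scoring asymmetry this creates can be sketched in a few lines. This is a minimal, hypothetical illustration (the question names, probabilities, and use of the Brier score are my assumptions, not from the discussion above): two conditional questions are asked, only the branch matching the policy actually taken resolves, and only that branch can be scored.

```python
# Sketch: scoring conditional forecasts where only one branch resolves.
# All names and numbers here are illustrative assumptions.

def brier_score(p: float, outcome: bool) -> float:
    """Squared error between a forecast probability and the 0/1 outcome."""
    return (p - (1.0 if outcome else 0.0)) ** 2

# Two conditional questions about the same event:
forecasts = {
    "spread_given_measures": 0.2,     # P(large spread | measures taken)
    "spread_given_no_measures": 0.8,  # P(large spread | no measures)
}

# Only the branch matching the decision actually taken resolves.
# Here, measures were taken and the spread stayed small.
resolved = {"spread_given_measures": False}

# Score only the resolved question; the counterfactual branch goes unscored.
scores = {q: brier_score(forecasts[q], outcome) for q, outcome in resolved.items()}
print(scores)  # {'spread_given_measures': 0.04}
```

The unscored counterfactual branch is exactly why trust in the forecasters matters: no scoring rule ever checks it.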
This is easier to do with EAs, because you can separate the training step from the deployment step for forecasters. That is, once you have an EA who is a trustworthy forecaster, you could in principle query them without paying that much attention to scoring rules.