The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting.
This seems to be true and also to be an emerging consensus (at least here on the forum).
I’ve only been forecasting for a few months, but it’s starting to seem to me like forecasting does have quite a lot of value—as valuable training in reasoning, and as a way of enforcing a common language around discussion of possible futures. The accuracy of the predictions themselves seems secondary to the way that forecasting serves as a calibration exercise. I’d really like to see empirical work on this, but anecdotally it does feel like it has improved my own reasoning somewhat. Curious to hear your thoughts.
Thanks for the comment!

This seems to be true and also to be an emerging consensus (at least here on the forum).

Can you point to some examples?

I’ve only been forecasting for a few months, but it’s starting to seem to me like forecasting does have quite a lot of value...

This seems right to me. I think society as a whole underprices forecasting, and EA underprices a bunch of subniches within forecasting (even if it overrates predictive validity specifically).

I think this is right. To some degree, the value of forecasting is similar to what Parfit ascribes to thought experiments:

Most of these cases are [...] purely imaginary [...]. We can use them to discover, not what the truth is, but what we believe

Similarly, I think a lot of the value of inputting probabilities and distributions is as a way to have internal coherence/validity, to help represent and bring to the forefront what I believe.

...and as a way of enforcing a common language around discussion of possible futures

This sounds right to me. Stefan Schubert has a fun comparison of forecasting and analytic philosophy.