My sense is that EA overrates forecasting a bit and that the world underrates it a lot.
Some views I suggest are underrated:
As Michael Story points out (emphasis his): “Most of the useful information you produce [in forecasting] is about the people, not the world outside. Forecasting tournaments and markets are very good at telling you all kinds of things about your participants: are they well calibrated, do they understand the world, do they understand things better working alone or in a team, do they update their beliefs in a sensible measured way or swing about all over the place? If you want to get a rough epistemic ranking out of a group of people then a forecasting tournament or market is ideal. A project like GJP (which I am very proud of) was, contrary to what people often say, not an exercise primarily focused on producing data about the future. It was a study of people! The key discovery of the project wasn’t some vision of the future that nobody else saw, it was discovering the existence of consistently accurate forecasters (“superforecasters”) and techniques to help improve accuracy. The book Superforecasting was all about the forecasters themselves, not the future we spent years of effort predicting as part of the study, which I haven’t heard anyone reference other than as anecdotes about the forecasters.”
I don’t really feel I can add to this quote. Forecasting is useful for filtering people but less useful for finding truth, which makes it easy to overrate.
It is difficult to forecast things policymakers actually care about. Forecasting sites ask questions like “will Putin leave power?” rather than “if Putin leaves power between July 18th and the end of August, how will that affect the likelihood of a rogue nuclear warhead?”. And I’m confident that even that question isn’t specific enough, in some way I don’t yet understand. Even if it were, decision makers would have to trust the results, which they currently largely don’t.
Forecasting beyond ~3 years is not good. A Brier score above .25 is worse than random, since always guessing 50% on binary questions scores exactly .25, and long-range forecasts tend toward that level. Many questions are too specific and too far away for forecasting to be useful for them.
https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons
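For concreteness, the .25 figure refers to the Brier score: the mean squared error between your stated probabilities and the 0/1 outcomes. Here is a minimal sketch, using made-up outcomes and forecasts, of why .25 is the chance baseline:

```python
# Brier score = mean squared error between forecast probabilities
# and binary outcomes (1 = resolved yes, 0 = resolved no).

def brier_score(forecasts, outcomes):
    """Lower is better; 0 is perfect, .25 is the always-guess-50% baseline."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical question resolutions

# "Random" baseline: always guess 50%. Scores exactly .25 regardless of outcomes.
print(brier_score([0.5] * len(outcomes), outcomes))  # 0.25

# A skilled forecaster leans toward the right answers and scores below .25.
print(brier_score([0.8, 0.3, 0.2, 0.7, 0.4, 0.9, 0.6, 0.1], outcomes))  # 0.075

# Confidence in the wrong direction scores well above .25, i.e. worse than chance.
print(brier_score([0.2, 0.8, 0.7, 0.3, 0.9, 0.1, 0.4, 0.9], outcomes))  # ~0.63
```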
Forecasting is more useful as a filtering tool and a personal-improvement tool than as part of better decision making. I suggest individuals would gain from playing the Estimation Game each month and taking the hits that reality deals, but there are many things we could do to improve ourselves, and if this doesn’t fit for you, fair enough.
But the idea that every org should be forecasting, or that every process should involve forecasting, doesn’t seem to fit reality. I still look for ways it can (maybe there is a silver bullet!), but I don’t think you, the median EA, should. If you want to test your knowledge of a topic, forecast on it. If you see someone has a good track record, consider taking their thoughts more seriously. Beyond that, probably don’t engage with it much more than your interest or sense of its importance already pushes you to.
This strongly resonated with me, especially after taking part in the XPT. I set my expectations really high and got frustrated with the process; I now take a relaxed approach, treating forecasting as a fun thing I do on the side rather than something I actively want to take part in as a community.