I think the biggest is that EAs (definitely including myself before I started forecasting!) often underestimate the degree to which judgmental forecasting is very much a nascent, pre-paradigm field. This has a lot of knock-on effects, including but not limited to:
Thinking that the final word on forecasting is the judgmental forecasting literature
For example, the forecasting research/literature is focused entirely on accuracy, which has its pitfalls.
There are many fields of human study that do things like forecasting, even if it’s not always called that, including but not limited to:
Weather forecasting (where the Brier score came from! see the sketch after this list)
Intelligence analysis
Data science
Statistics
Finance
Some types of consulting
Insurance/reinsurance
Epidemiology
…
More broadly, any quantified science needs to make testable predictions.
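Since the Brier score comes up above: it’s just the mean squared error between probabilistic forecasts and binary outcomes. A minimal sketch (the function name and example numbers are mine, for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (in [0, 1])
    and binary outcomes (0 or 1). Lower is better; a constant 50%
    forecast scores exactly 0.25 on any set of binary questions."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. a somewhat underconfident forecaster on three questions that all resolved yes:
print(brier_score([0.9, 0.8, 0.3], [1, 1, 1]))  # ~0.18
```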
Overestimating how much superforecasters “have it figured out”
E.g. here on calibration precision.
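To make “calibration precision” concrete: calibration is usually checked by binning forecasts and comparing each bin’s average forecast to the observed frequency of the event. A rough sketch of that check (the bin count and names are my arbitrary choices, not a standard implementation):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=5):
    """Group forecasts into probability bins and compare each bin's
    mean forecast to the observed frequency of the event resolving yes.
    A well-calibrated forecaster's ~70% bin resolves ~70% of the time,
    but with few forecasts per bin the observed frequency is noisy,
    which is why fine-grained calibration claims are hard to verify."""
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[min(int(f * n_bins), n_bins - 1)].append((f, o))
    for b in sorted(bins):
        pairs = bins[b]
        mean_f = sum(f for f, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"bin {b}: n={len(pairs)}, mean forecast={mean_f:.2f}, observed={freq:.2f}")
```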
Relatedly, overestimating how much other good forecasters/aggregation platforms have things figured out.
For example, I think some people overestimate the added accuracy of prediction markets like PredictIt, or of aggregation engines like Metaculus/GJO, or that of the top forecasters there.
PredictIt in particular seems basically safe to ignore compared to expert models like 538.
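For a sense of what such aggregation engines are doing, one common baseline from the forecasting literature is averaging forecasts in log-odds space, optionally extremizing the result (a trick reported to improve accuracy in the Good Judgment Project). This is a toy sketch, not the actual Metaculus/GJO algorithm:

```python
import math

def logodds_pool(probs, weight=1.0):
    """Average forecasts in log-odds space, then map back to a probability.
    weight > 1 extremizes the pooled forecast (pushes it away from 0.5).
    Toy baseline only; real platforms use more elaborate weighting."""
    logits = [math.log(p / (1 - p)) for p in probs]
    pooled = weight * sum(logits) / len(logits)
    return 1 / (1 + math.exp(-pooled))

print(logodds_pool([0.6, 0.7, 0.8]))              # ~0.71
print(logodds_pool([0.6, 0.7, 0.8], weight=2.0))  # ~0.85, extremized
```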
Thinking that there’s “one right way” to do forecasting
If there is, I sure haven’t found it!
I think there’s a lot of prescientific experimentation going on while people are still trying to figure out which experiments to run, which questions to ask, etc., when it comes to advancing the science of forecasting.
Thinking that superforecasting/associated techniques are used a lot in government and business
They’re not.