If you look at your forecasting mistakes, do they have a common thread?
A botched Tolstoy quote comes to mind:
"Good forecasts are all alike; every mistaken forecast is wrong in its own way."
Of course that’s not literally true. But when I reflect on my various mistakes, it’s hard to find a true pattern. To the extent there is one, I’m guessing that the highest-order bit is that many of my mistakes are emotional rather than technical. For example,
doubling down on something in the face of contrary evidence,
or at least not updating enough because I was arrogant,
getting burned that way and then updating too much from minor factors,
"updating" after a conversation because it felt socially polite not to ignore people, rather than because their points were actually persuasive, etc.
If the emotion hypothesis is true, then to get better at forecasting, the most important thing might well be looking inwards, rather than, say, a) learning more statistics or b) acquiring more facts about the "real world."
I think that as you forecast across different domains, common themes start to emerge. And I certainly find that my calibration is off when I feel personally invested in the answer.
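One concrete way to see where your calibration is off is to bucket your past forecasts by stated probability and compare each bucket to the observed frequency. Here is a minimal sketch of that check; the function name and the track-record data are made up for illustration:

```python
# Sketch: group (predicted probability, outcome) pairs into probability
# buckets and report how often events in each bucket actually happened.
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: list of (predicted_probability, outcome) pairs,
    where outcome is 1 if the event happened, else 0."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        # Round to the nearest 10%, so 0.72 and 0.68 land in the 0.7 bucket.
        buckets[round(p, 1)].append(outcome)
    # For each bucket: (number of forecasts, observed frequency).
    return {
        b: (len(outcomes), sum(outcomes) / len(outcomes))
        for b, outcomes in sorted(buckets.items())
    }

# Hypothetical track record, overconfident at the high end.
history = [(0.9, 1), (0.9, 0), (0.9, 0), (0.6, 1), (0.6, 0), (0.3, 0)]
for bucket, (n, freq) in calibration_table(history).items():
    print(f"said ~{bucket:.0%}: happened {freq:.0%} of the time (n={n})")
```

If the "said 90%" bucket comes in well under 90%, that is the overconfidence signature described above; the emotional question is then *which* forecasts land in the miscalibrated buckets.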
And re:
How does the distribution of skill / hours of effort look for forecasting for you?
I would say there's a sharp cutoff in terms of needing a minimal level of understanding (which seems to be fairly high, but certainly isn't above, say, the 10th percentile). After that, it's mostly effort, plus skill that is gained via feedback.