If you look at your forecasting mistakes, do they have a common thread?
How is your experience acquiring expertise at forecasting similar/different to acquiring expertise in other domains, e.g. obscure board-games? How so?
Any forecasting resources you recommend?
Who do you look up to?
How does the distribution skill / hours of effort look for forecasting for you?
Do you want to wax poetic or ramble disorganizedly about any aspects of forecasting?
Any secrets of reality you’ve discovered & which you’d like to share?
If you look at your forecasting mistakes, do they have a common thread?
A botched Tolstoy quote comes to mind:
Good forecasts are all alike; every mistaken forecast is wrong in its own way
Of course that’s not literally true. But when I reflect on my various mistakes, it’s hard to find a true pattern. To the extent there is one, I’m guessing that the highest-order bit is that many of my mistakes are emotional rather than technical. For example,
doubling down on something in the face of contrary evidence,
or at least not updating enough because I was arrogant,
getting burned that way and then updating too much from minor factors,
“updating” after a conversation because it felt socially impolite to ignore people, rather than because their points were actually persuasive, etc.
If the emotion hypothesis is true, the most important thing for getting better at forecasting might well be to look inwards, rather than, say, a) learning more statistics or b) acquiring more facts about the “real world.”
I think that as you forecast different domains, more common themes can start to emerge. And I certainly find that my calibration is off when I feel personally invested in the answer.
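To make the calibration point concrete, here's a minimal sketch of the kind of check involved. The forecasts below are made-up numbers for illustration, not data from this post: group your predictions by stated confidence and compare average confidence against the actual hit rate in each bucket.

```python
# Illustrative only: made-up forecasts, not actual data from this post.
# Each forecast is (stated probability, whether the event happened).
forecasts = [
    (0.9, True), (0.8, True), (0.85, False), (0.9, True),
    (0.6, True), (0.55, False), (0.6, False), (0.65, True),
    (0.2, False), (0.3, False), (0.25, True), (0.2, False),
]

def calibration_table(forecasts, bucket_width=0.25):
    """Group forecasts into confidence buckets and compare the
    average stated probability with the actual hit rate."""
    buckets = {}
    for p, outcome in forecasts:
        key = int(p / bucket_width)
        buckets.setdefault(key, []).append((p, outcome))
    table = {}
    for key, items in sorted(buckets.items()):
        avg_conf = sum(p for p, _ in items) / len(items)
        hit_rate = sum(1 for _, o in items if o) / len(items)
        table[key] = (avg_conf, hit_rate)
    return table

for key, (conf, hits) in calibration_table(forecasts).items():
    print(f"bucket {key}: stated {conf:.0%}, actual {hits:.0%}")
```

In this toy data the high-confidence bucket (stated ~86%) only resolved true 75% of the time: the overconfidence signature that personal investment in an answer tends to produce.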
And re:
How does the distribution skill / hours of effort look for forecasting for you?
I would say there’s a sharp cutoff in terms of needing a minimal level of understanding (which seems to be fairly high, but certainly isn’t above, say, the 10th percentile). After that, it’s mostly effort, plus skill that is gained via feedback.
How is your experience acquiring expertise at forecasting similar/different to acquiring expertise in other domains, e.g. obscure board-games? How so?
Just FYI, I do not consider myself an “expert” on forecasting. I haven’t put my 10,000 hours in, and my inside view is that there’s so much ambiguity and confusion about so many different parameters. I also basically think judgmental amateur forecasting is a nascent field and there are very few experts[1], with the possible exception of the older superforecasters. Nor do I actually think I’m an expert in those games, for similar reasons. I basically think “amateur, but first (or 10th, or 100th, as the case might be) among equals” is a healthier and more honest presentation.
That said, I think the main commonalities for acquiring skill in forecasting and obscure games include:
Focus on generalist optimization for a well-specified score in a constrained system
I think it’s pretty natural for both humans and AI to do better in more limited scenarios.
However, I think in practice, I am much more drawn to those types of problems than my peers (e.g., I have a lower novelty instinct and I enjoy optimization more).
Deliberate practice through fast feedback loops
Games often have feedback loops on the order of tens of seconds/minutes (Dominion) or hundreds of milliseconds/seconds (Beat Saber)
Forecasting has slower feedback loops, but often you can form an opinion in <30 minutes (sometimes <3 if it’s a domain you’re familiar with), and have it checked in a few days.
In contrast, the feedback loops for other things EAs are interested in are often much slower. For example, research might involve initial projects on the span of months that are checked on the span of years; architecture in software engineering might take days to do and weeks to check (and sometimes the check never comes).
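To make the "have it checked" step above concrete, here's a minimal sketch (with hypothetical numbers, not data from this post) of scoring a batch of resolved forecasts with the Brier score, a standard proper scoring rule for this kind of feedback:

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Hypothetical resolved questions: (stated probability, outcome as 0/1).
resolved = [(0.7, 1), (0.2, 0), (0.9, 1), (0.6, 0)]
print(round(brier_score(resolved), 3))  # 0.125
```

The point is the turnaround time: once a question resolves, the score is immediate, which is what makes the feedback loop here so much tighter than in research or software architecture.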
Focus on easy problems
For me personally, it’s often easier for me to get “really good” on less-contested domains than kinda good on very contested domains
For example, I got quite good at Dominion but I bounced pretty quickly off Magic, and I bounced (after a bunch of frustration) off chess.
Another example: in Beat Saber rather than trying hard to beat the harder songs, I spent most of my improving time on getting very high scores for the easier songs
In forecasting, this meant that making covid-19 forecasts 2-8 weeks out was more appealing than making geopolitical forecasts on the timescale of years, or technological forecasts on the timescale of decades
This allowed me to slowly and comfortably move into harder questions
For example now I have more confidence and internal models on predicting covid-19 questions multiple months out.
If I were to get back into Beat Saber, I’d be a lot less scared of the harder songs than I used to be (after some time ramping back up).
I do think not being willing to jump into harder problems directly is something of a character flaw. I’d be interested in hearing other people’s thoughts on how they do this.
The main difference, to me, is that:
Forecasting relies on knowledge of the real world
As opposed to games (and, for that matter, programming challenges), the “system” you’re forecasting is usually much more unbounded.
So knowledge acquisition and value-of-information matter much more per question.
This is in contrast to games, where knowledge acquisition is important on the “meta-level,” but for any specific game, balancing how much knowledge you need to acquire is pretty natural/intuitive, and you probably don’t need much new knowledge anyway.
[1] For reasons I might go into later in a different answer