What should a typical EA who is informed on the standard forecasting advice do if they actually want to become good at forecasting? What did you do to hone your skill?
My guess is to just forecast a lot! The most important part is probably just practicing a lot and evaluating how well you did.
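To make "evaluating how well you did" a bit more concrete, here is a minimal sketch of scoring resolved binary forecasts with the Brier score; the forecasts and outcomes in it are made-up numbers, purely for illustration.

```python
# Minimal sketch: score resolved binary forecasts with the Brier score
# (mean squared error between the probability you gave and the 0/1 outcome).
# The numbers below are made-up illustrations, not anyone's real track record.

def brier_score(forecasts):
    """forecasts: list of (probability_given, outcome) pairs, where outcome is 1 or 0."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track record: (probability assigned, what actually happened).
my_forecasts = [(0.9, 1), (0.7, 1), (0.3, 0), (0.6, 0), (0.95, 1)]

print(f"Brier score: {brier_score(my_forecasts):.3f}")  # 0.0 is perfect; always saying 50% scores 0.25
```

Lower is better, and tracking a score like this over many resolved questions is one simple way to see whether your practice is paying off.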
Beyond that, my instinct is that the closer you can get to deliberate practice, the more you can improve. My guess is that there are multiple desiderata that are hard to satisfy all at once, so you do have to make some tradeoffs between them.
- As close to the target domain of what you actually care about as possible. For example, if you care about having accurate forecasts on which psychological results are true, covid-19 tournaments or geopolitical forecasting are less helpful than replication markets.
- Can answer lots of questions and have fast feedback loops. For example, if the question you really care about is “will humans be extinct by 3000 AD?” you probably want to answer a bunch of other short-term questions first to build up your forecasting muscles and actually have a better sense of these harder questions.
- Can initially be easy to evaluate well. For example, if you want to answer “will AI turn out well?” it might be helpful to answer a bunch of easy-to-evaluate questions first and grade them.
In case you’re not aware of this, I think there’s also some evidence that calibration games, like Open Phil’s app, are pretty helpful.
Being metacognitive and reflecting on your mistakes likely helps too.
In particular, beyond just calibration, you want to have a strong internal sense of when and how much your forecasts should update based on new information. If you update too much, then this is probably evidence that your beliefs should be closer to the naive prior (if you went from 20% to 80% to 20% to 75% to 40% in one day, you probably didn’t really believe it was 20% to start with). If you update too little, then maybe your bar of evidence for changing your mind is too high.
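As a rough illustration of the “how much should I update?” point, here is a small sketch that writes Bayes’ rule in odds form and computes the likelihood ratios implied by a day of swings like the 20% → 80% → 20% → 75% → 40% example; the numbers are just those from that example, not real forecasts.

```python
# Rough sketch of "how much should I update?" using Bayes' rule in odds form.
# The probabilities are taken from the 20% -> 80% -> 20% -> 75% -> 40% example above.

def update(prob, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio; return the posterior probability."""
    prior_odds = prob / (1 - prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Moving from 20% to 80% requires evidence with a likelihood ratio of about 16:
print(update(0.20, 16.0))  # ~0.8

# Likelihood ratios implied by a single day of swings:
path = [0.20, 0.80, 0.20, 0.75, 0.40]
implied_lrs = [
    (b / (1 - b)) / (a / (1 - a))  # ratio of posterior odds to prior odds for each move a -> b
    for a, b in zip(path, path[1:])
]
print([round(lr, 2) for lr in implied_lrs])  # ~[16.0, 0.06, 12.0, 0.22]
```

Swings that large imply very strong evidence in alternating directions; if you didn’t actually see evidence that strong, the original 20% probably wasn’t a real belief.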
What did you do to hone your skill?
Before I started forecasting seriously, I attended several forecasting meetups that my co-organizer of South Bay Effective Altruism ran. Maybe going through the worksheets will be helpful here?
One thing I did that was pretty extreme was that I very closely followed a lot of forecasting-relevant details of covid-19. I didn’t learn a lot of theoretical epidemiology, but when I was most “on top of things” (I think around late April to early May), I was basically closely following the disease trajectory, policies, and data ambiguities of ~20 different countries. I also read pretty much every halfway decent paper on covid-19 fatality rates that I could find, and skimmed the rest.
I think this is really extreme and I suspect very few forecasters do it to that level. Even I stopped trying to keep up because it was getting to be too much (and I started forecasting narrower questions professionally, plus had more of a social life). However, I think it is generally the case that forecasters know quite a lot of specific details about the thing they’re forecasting: nowhere near as much as subject matter experts, but with a lot more focus on the forecasting-relevant details, as opposed to grand theories or interesting frontiers of research.
That being said, I think it’s plausible a lot of this knowledge is a spandrel and not actually that helpful for making forecasts. This answer is already too long, but I might go into more detail about why I believe factual knowledge is a little overrated in other answers.
I also think that by the time I started forecasting seriously, I probably started with a large leg up because (as many of you know) I spend a lot of my time arguing online. I highly doubt it’s the most effective way to train forecasting skills (see the first bullet point), and I’m dubious it’s a good use of time in general. However, if we ignore efficiency, I definitely think the way I argued/communicated was a decent way to train having above-average general epistemology and understanding of the world.
Other forecasters often have backgrounds (whether serious hobbies or professional expertise) in things that require or strongly benefit from having a strong intuitive understanding of probability. Examples include semi-professional poker, working in finance, data science, some academic subfields (e.g. AI, psychology), and sometimes domain expertise (e.g. epidemiology).
It is unclear to me how much of this is selection effects vs. training, but I suspect that at this stage, a lot of the difference in forecasting success (>60%?) is explainable by practice and training, or just literally forecasting a lot.
What sort of training material did you use to predict and get feedback on? (#deliberate practice)
I mostly just forecasted the covid-19 questions on Metaculus directly. I do think predicting covid early on (before May?) was a near-ideal epistemic environment for this, because of various factors:
a) it was important,
b) it was in a weird social epistemic state where lots of disparate, individually easy-to-understand, true information was out there,
c) lots of false information was also out there,
d) it had very fast feedback loops, and
e) predicting things/truth-seeking was shockingly uncompetitive.
The feedback cycles (maybe several times a week for some individual questions) are still slower than what the deliberate practice research focused on (specific techniques in arts and sports with sub-minute feedback). But they’re much, much better than for other plausibly important things.
I probably also benefited from practice through the South Bay EA meetups[1] and the Open Phil calibration game[2].
[1] If going through all the worksheets is intimidating, I recommend just trying this one (start with “Intro to forecasting” and then do the “Intro to forecasting worksheet”). EDIT 2020/07/04: Fixed worksheet.
[2] https://www.openphilanthropy.org/blog/new-web-app-calibration-training