Some learnings I had from forecasting in 2020
crossposted from my own short-form
Here are some things I’ve learned from spending a decent fraction of the last 6 months either forecasting or thinking about forecasting, with an eye towards beliefs that I expect to be fairly generalizable to other endeavors.
Before reading this post, I recommend brushing up on Tetlock’s work on (super)forecasting, particularly Tetlock’s 10 commandments for aspiring superforecasters.
1. Forming (good) outside views is often hard but not impossible.
I think there is a common belief/framing in EA and rationalist circles that coming up with outside views is easy, and that the real difficulties are a) originality in inside views and b) deciding how much to trust outside views vs. inside views.
I think this is directionally true (original thought is harder than synthesizing existing views), but it hides a lot of the details. It’s often quite difficult to come up with and balance good outside views that are applicable to a situation. See Manheim and Muehlhauser for some discussions of this.
2. For novel out-of-distribution situations, “normal” people often trust centralized data/ontologies more than is warranted.
See here for a discussion. I believe something similar is true for trust of domain experts, though this is more debatable.
3. The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting.
(Note that I think this is an improvement over the status quo in broader society, where by default approximately nobody trusts generalist forecasters at all.)
I’ve had several conversations where EAs will ask me to make a prediction, I’ll think about it a bit and say something like “I dunno, 10%?”, and people will treat it as a fully informed prediction to base decisions on, rather than as just another source of information among many.
I think this is clearly wrong. In almost any situation where you are a reasonable person and have spent 10x (sometimes 100x or more!) as much time thinking about a question as I have, you should trust your own judgment on that question much more than mine.
To a first approximation, good forecasters have three things: 1) They’re fairly smart. 2) They’re willing to actually do the homework. 3) They have an intuitive sense of probability.
This is not nothing, but it’s also pretty far from everything you want in an epistemic source.
4. The EA community overrates Superforecasters and Superforecasting techniques.
I think the types of questions and responses Good Judgment.* is interested in reflect one particular way of looking at the world. I don’t think it is always applicable (an easy EA-relevant example: your Brier score is basically the same whether you give 0% for 1% probabilities or vice versa), and it’s bad epistemics to collapse all of “figuring out the future in a quantifiable manner” into a single paradigm.
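To make that Brier-score point concrete, here is a minimal sketch (my own illustration, not drawn from Good Judgment’s materials) comparing the expected Brier score of forecasting 0% versus 1% on a binary event whose true probability is 1%:

```python
# Expected Brier (squared-error) score of a probabilistic forecast of a binary event,
# given the event's true probability. Illustrative only.
def expected_brier(forecast: float, true_prob: float) -> float:
    return true_prob * (forecast - 1.0) ** 2 + (1.0 - true_prob) * forecast ** 2

print(expected_brier(0.00, 0.01))  # 0.01 (forecast 0% on a 1%-probability event)
print(expected_brier(0.01, 0.01))  # ~0.0099 (forecast the "true" 1%)
```

The gap is on the order of 0.0001, which is lost in the noise of a typical Brier score, even though for low-probability, high-stakes questions the difference between 0% and 1% can matter a great deal.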
Likewise, I don’t think there’s a clear dividing line between good forecasters and GJP-certified Superforecasters, so many of the issues I mentioned in #3 are just as applicable here.
I’m not sure how to compress everything I’ve learned on this topic into a few short paragraphs, but the tl;dr is: before I started forecasting, I trusted superforecasters much more than I trusted other EAs, whereas now I consider their opinions and forecasts “just” one important component of my overall thinking, rather than clear epistemic superiors to defer to.
5. Good intuitions are really important.
I think there’s a Straw Vulcan approach to rationality where people think “good” rationality is about suppressing your System 1 in favor of clear thinking and logical propositions from your System 2. I think there’s plenty of evidence that this is wrong*. For example, the cognitive reflection test was originally supposed to measure how well people suppress their “intuitive” answers and instead think through the question to arrive at the right “unintuitive” answers. However, we’ve since learned that more “cognitively reflective” people also give more accurate initial answers when they don’t have time to think the question through (this comes from one fairly good psych study, which may not replicate, but it accords with my intuitions and recent experiences).
On a more practical level, I think a fair amount of good thinking is using your System 2 to train your intuitions, so you have better and better first impressions and taste for how to improve your understanding of the world in the future.
*I think my claim so far is fairly uncontroversial; for example, I expect CFAR to agree with a lot of what I say.
6. Relatedly, most of my forecasting mistakes are due to emotional rather than technical reasons.
Here’s a Twitter thread from May exploring why; I think I still mostly stand by it.