What do you think helps make you a better forecaster than the other 989+ people?
I’ll instead answer this as:
What helps you have a higher rating than most of the people below you on the leaderboard?
I probably answered more questions than most of them.
I updated my forecasts more quickly than most of them, particularly in March and April.
Activity has consistently been shown to be one of the strongest (often the single strongest) predictors of overall accuracy in the academic literature.
I suspect I have a much stronger intuitive sense of probability/calibration.
For example, 17% (roughly 1:5 odds) intuitively feels very different to me than 20% (1:4 odds), and my sense is that this isn’t too common.
This could just be arrogance, however; there isn’t enough data for me to actually check this against real predictions (as opposed to just calibration games).
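The odds intuition above is just arithmetic; a minimal sketch of the probability-to-odds conversion it relies on (function names are my own, for illustration):

```python
# Probability <-> odds-against conversion, showing why 17% (~1:5) and
# 20% (1:4) are distinct odds even though they look close as percentages.

def prob_to_odds(p: float) -> float:
    """Odds against an event: (1 - p) / p."""
    return (1 - p) / p

def odds_to_prob(odds_against: float) -> float:
    """Probability implied by odds of 1:odds_against."""
    return 1 / (1 + odds_against)

print(prob_to_odds(0.20))        # 4.0 -> 1:4 odds is exactly 20%
print(round(prob_to_odds(1/6)))  # 5   -> 1:5 odds is about 16.7%
```

So moving from 20% to 17% means the event goes from "fails 4 times for every success" to roughly "fails 5 times for every success," which is a larger shift than the three percentage points suggest.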
I feel like I actually have lower epistemic humility than most forecasters in the top 100 or so on Metaculus, where “epistemic humility” is defined narrowly as “willingness to make updates based on arguments I don’t find internally plausible, just because others believed them.”
A caveat: I’m making this comparison solely to top-X% (in either activity or accuracy) forecasters.
I suspect a fair number of other forecasters are just wildly overconfident (in both senses of the term).
Certainly, non-forecasters (TV pundits, say, or just people I see on the internet) frequently seem very overconfident for what seems to me like bad reasons.
A certain epistemic attitude that I associate with both Silicon Valley and Less Wrong/rationalist culture is “strong opinions, held lightly.”
This is where you believe concrete, explicit and overly specific models of the world strongly, but you quickly update whenever someone points out a hole in your reasoning.
I suspect this attitude is good for things like software design and maybe novel research, but is bad for having good explicit probabilities for Metaculus-style questions.
I’m a pretty competitive person, and I care about scoring well.
This might be surprising, but I think a lot of forecasters don’t.
Some forecasters just want to record their predictions publicly and be held accountable to them, or want to cultivate more epistemic humility by seeing themselves be wrong.
I think these are perfectly legitimate uses of forecasting, and I actively encourage my friends to use Metaculus and other prediction platforms to do this.
However, it should not be surprising that people who want to score well end up on average scoring better.
So I do a bunch of things like meditate on my mistakes and try really hard to do better. I think most forecasters, including good ones, do this much less than I do.
I know more facts about covid-19.
I think the value of this is actually exaggerated, but it probably helps a little.
_____
What do you think other forecasters do to make them have a higher rating than you? [Paraphrased]
Okay, a major caveat here is that there is plenty of heterogeneity among forecasters. Another is that I obviously don’t have clear insight into why other forecasters are better than me (otherwise I’d have done better!). However, in general I’m guessing they:
They have more experience with forecasting.
I started in early March and I think many of them have already been forecasting for a year or more (some 5+ years!).
I think experience probably helps a lot in building intuition and avoiding a lot of subtle (and not-so-subtle!) mistakes.
They usually forecast more questions.
It takes me some effort to forecast on new questions, particularly when the template differs from questions I’ve forecasted on before and the topic isn’t something I’ve thought about in a non-forecasting context.
I know some people in the Top 10 literally forecast all questions on Metaculus, which seems like a large time commitment to me.
They update forecasts more quickly than me, particularly in May and June.
Back in March and April, I was *super* “on top of my game.” But right now I have a backlog of old predictions, of which I’m >30 days behind on the earliest one (as in, the last time I updated that prediction was 30+ days ago).
This is partially due to doing more covid forecasting at my day job, partially due to having some other hobbies, and partially due to general fatigue/loss of interest (akin to the lockdown fatigue others report).
On average, they’re more inclined to do simple mathematical modeling (Guesstimate, Excel, Google Sheets, Foretold, etc.), whereas I’m often (not always) satisfied with a few jotted notes in a Google Doc plus a simple arithmetic calculator.
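For readers unfamiliar with what “simple mathematical modeling” means here: a minimal Guesstimate-style sketch, written as Python rather than a spreadsheet, with every input made up purely for illustration:

```python
import random

# Toy Monte Carlo projection in the spirit of Guesstimate: propagate an
# uncertain daily growth rate through to a 30-day case count, then report
# the median simulated outcome. All numbers here are hypothetical.
def project_cases(current_cases: float, days: int = 30,
                  n_samples: int = 10_000) -> float:
    samples = []
    for _ in range(n_samples):
        daily_growth = random.uniform(0.00, 0.04)  # uncertain input
        samples.append(current_cases * (1 + daily_growth) ** days)
    samples.sort()
    return samples[len(samples) // 2]  # median of simulated outcomes

median_projection = project_cases(100_000)
```

The point is less the specific numbers than the habit: making each uncertain input explicit and letting the model combine them, instead of holding the whole estimate in your head.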
There are also more specific reasons some other forecasters are better than me, though I don’t think these apply to all or even most of the forecasters above me:
JGalt seems to read the news both more and more efficiently than I do, and probably knows much more factual information than me.
In particular, I recall many times where I see interesting news on Twitter or other places, want to bring it to Metaculus, and bam, JGalt has already linked it ahead of me.
It is practically a running meme among Metaculus users that JGalt has read all the news.
Lukas Gloor and Pablo Stafforini plausibly have stronger internal causal models of various covid-19-related issues.
datscilly often decomposes questions more cleanly than I do, and (unlike me and several other forecasters) appears to aggressively prioritize not updating on irrelevant information.
He also cares about scores more than I do.
I think Pablo, datscilly and some others started predicting on covid-19 questions almost as soon as the pandemic started, so they built up more experience than me not only on general forecasting, but also on forecasting covid-19 related questions specifically.
At least, this is what I can gather from their public comments and (in some cases) private conversations. It’s much harder for me to tell how forecasters who are higher than me on the leaderboard but otherwise mostly silent think.
_____
1.) This is amazing, thank you. Strongly upvoted—I learned a lot.
2.) Can we have an AMA with JGalt where he teaches us how to read all the news?