I'm really excited about more thinking and grant-making going into forecasting!
Regarding the comments critical of forecasting as a good investment of resources from a world-improving perspective, here are some of my quick thoughts:
Systematic meritocratic forecasting has a track record of outperforming domain experts on important questions:
- Examples: Geopolitics (see Superforecasting), public health (see COVID), IIRC also outcomes of research studies
In all important domains where humans try to affect things, they are implicitly forecasting all the time and acting on those forecasts. Random examples:
- “If lab-grown meat becomes cheaper than normal meat, XY% of consumers will switch”
- “A marginal supply of 10,000 bednets will decrease malaria infections by XY%”
- Models of climate change projections conditional on emissions
In many domains humans are already explicitly forecasting and acting on those forecasts:
- Insurance (e.g. actuarial forecasts of claim risks)
- Finance (e.g. on loan repayments and interest rate changes)
- Recidivism
- Weather
- Climate
Increased use of forecasting has the potential to increase societal sanity:
- Make people more able to appreciate and process uncertainty in important domains
- Clearer communication (e.g. less talking past one another by anchoring discussion on real world outcomes)
- Establish feedback loops with resolvable forecasts ➔ stronger incentives for being correct & the ability to select people who have better world models (see the sketch below)
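To make that feedback-loop point concrete, here is a minimal sketch (in Python, with made-up forecasters and data) of how resolvable forecasts plus a proper scoring rule like the Brier score turn into a track record you can select on:

```python
# Minimal sketch: scoring resolvable forecasts with the Brier score,
# then ranking forecasters. All names and numbers here are hypothetical.

def brier_score(probability: float, outcome: bool) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.
    Lower is better; always guessing 50% earns 0.25."""
    return (probability - float(outcome)) ** 2

# Hypothetical resolved questions: each forecaster's probability and the outcome.
forecasts = {
    "alice": [(0.9, True), (0.2, False), (0.7, True)],
    "bob":   [(0.6, True), (0.5, False), (0.5, True)],
}

# Average Brier score per forecaster -> a concrete, resolvable feedback signal.
for name, records in forecasts.items():
    avg = sum(brier_score(p, o) for p, o in records) / len(records)
    print(f"{name}: mean Brier score = {avg:.3f}")
```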
That said, I also think that it’s often surprisingly difficult to ask actionable questions when forecasting, and often it might be more important to just have a small team of empowered people with expert knowledge combined with closely coupled OODA loops instead. I remember finding this comment from Jan Kulveit pretty informative:
In practice I’m a bit skeptical that a forecasting mindset is that good for generating ideas about “what actions to take”. “Successful planning and strategy” is often something like “making a chain of low-probability events happen”, which seems distinct, or even at tension with typical forecasting reasoning. Also, empirically, my impression is that forecasting skills can be broadly decomposed into two parts—building good models / aggregates of other peoples models, and converting those models into numbers. For most people, the “improving at converting non-numerical information into numbers” part has initially much better marginal returns (e.g. just do calibration trainings...), but I suspect doesn’t do that much for the “model feedback”.
Source: https://ea.greaterwrong.com/posts/by8u954PjM2ctcve7/experimental-longtermism-theory-needs-data#comment-HgbppQzz3G3hLdhBu
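As an illustration of the “converting models into numbers” part: calibration training essentially amounts to checking your stated probabilities against realized frequencies. A minimal sketch, with hypothetical data:

```python
# Minimal sketch of a calibration check: bucket past forecasts by stated
# probability and compare against the realized frequency. Data is made up.
from collections import defaultdict

# (stated probability, did the event happen?) -- a hypothetical track record
history = [(0.9, True), (0.9, True), (0.9, False),
           (0.6, True), (0.6, False), (0.6, True),
           (0.3, False), (0.3, False), (0.3, True)]

buckets = defaultdict(list)
for p, outcome in history:
    buckets[p].append(outcome)

# A well-calibrated forecaster's realized frequencies track the stated ones.
for p in sorted(buckets):
    outcomes = buckets[p]
    realized = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> realized {realized:.0%} over {len(outcomes)} forecasts")
```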
Why do you think there is currently little/no market for systematic meritocratic forecasting services (SMFS)? Even under a lower standard of usefulness—that blending SMFS in with domain-expert forecasts would improve the utility of forecasts over using only domain-expert input—that should be worth billions of dollars in the financial services industry alone, and billions elsewhere (e.g., the insurance market).
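To pin down what “blending” could mean operationally: one simple version is a weighted average of the two forecasts in log-odds space. A minimal sketch with hypothetical numbers and weights, not a claim about how any firm actually does this:

```python
# Minimal sketch: blending a domain-expert forecast with an SMFS-style
# forecast by weighted averaging in log-odds space. All inputs hypothetical.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def blend(p_expert: float, p_smfs: float, w_expert: float = 0.5) -> float:
    """Weighted average of two probability forecasts in log-odds space."""
    return sigmoid(w_expert * logit(p_expert) + (1 - w_expert) * logit(p_smfs))

print(blend(0.70, 0.55))  # ~0.63: between the two inputs
```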
I don’t think the drivers of low “societal sanity” are fundamentally about current ability to estimate probabilities. To use a current example, the reason 18% of Americans believe Taylor Swift’s love life is part of a conspiracy to re-elect Biden isn’t that our society lacks resources to better calibrate the probability that this is true. The desire to believe things that favor your “team” runs deep in human psychology. The incentives to propagate such nonsense are, sadly, often considerable. The technological structures that make disseminating nonsense easier are not going away.
Thanks, I think that’s a good question. Some (overlapping) reasons that come to mind that I give some credence to:
a) relevant markets are simply making an error in neglecting quantified forecasts
e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic's potential
I personally find this plausible when looking e.g. at the quality of think tank reports, which seems significantly curtailed by the number of vague propositions that would be much more useful if they were more concrete and quantified
b) relevant players already train the relevant skills sufficiently well in their own employees (e.g. that's my fairly uninformed impression of what Jane Street is doing, and maybe also Bridgewater?)
c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be nailed down to a specific number if you are not practiced in it
d) forecasting is a nerdy practice, and those practices need bigger wins to be adopted (e.g. maybe similar to learning programming/math/statistics, working with the internet, etc.)
e) maybe, more systematically: it's often not in the interest of entrenched powers to have forecasters call BS on whatever they're doing
in corporate hierarchies people in power prefer the existing credentialism, and oppose new dimensions of competition
in other arenas there seems to be a constant risk of forecasters raining on your parade
f) maybe previous forecast-like practices ("futures studies", "scenario planning") didn't yield many benefits and made companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I've seen associated with these words)
I agree that things like confirmation bias and myside bias are huge drivers impeding “societal sanity”. And I also agree that it won’t help a lot here to develop tools to refine probabilities slightly more.
That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. Like other commenters say, it's currently mostly attracting a niche of people who strive for higher epistemic ideals, who try to contribute to better forecasts on important topics, etc. I currently feel like it's not intractable for quantitative forecasts to become more common in epistemic spaces filled with reasonable enough people (e.g. journalism, politics, academia). Kinda similar to how tracking KPIs was probably once a niche new practice and is now standard practice.