Why do you think there is currently little/no market for systematic meritocratic forecasting services (SMFS)? Even under a lower standard of usefulness—that blending SMFS in with domain-expert forecasts would improve the utility of forecasts over using only domain-expert input—that should be worth billions of dollars in the financial services industry alone, and billions elsewhere (e.g., the insurance market).
I don’t think the drivers of low “societal sanity” are fundamentally about current ability to estimate probabilities. To use a current example, the reason 18% of Americans believe Taylor Swift’s love life is part of a conspiracy to re-elect Biden isn’t that our society lacks resources to better calibrate the probability that this is true. The desire to believe things that favor your “team” runs deep in human psychology. The incentives to propagate such nonsense are, sadly, often considerable. The technological structures that make disseminating nonsense easier are not going away.
Thanks, I think that’s a good question. Some (overlapping) reasons that come to mind, each of which I give some credence to:
a) relevant markets are simply making an error in neglecting quantified forecasts
e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic potential
I personally find this plausible when looking, e.g., at the quality of think tank reports, which seems significantly curtailed by the number of vague propositions that would be much more useful if they were more concrete and quantified
b) relevant players train the relevant skills into their employees sufficiently well themselves (e.g. that’s my fairly uninformed impression of what Jane Street is doing, and maybe also Bridgewater?)
c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be pinned down to a number if you are not practiced in it
d) forecasting is a nerdy practice, and such practices need bigger wins to be adopted (e.g. maybe similar to learning programming/math/statistics, working with the internet, etc.)
e) maybe more systematically: it’s often not in the interest of entrenched powers to have forecasters call BS on whatever they’re doing.
in corporate hierarchies, people in power prefer the existing credentialism and oppose new dimensions of competition
in other arenas there seems to be a constant risk of forecasters raining on your parade
f) maybe previous forecast-like practices (“futures studies”, “scenario planning”) didn’t yield many benefits and made companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I’ve seen associated with these terms)
I agree that things like confirmation bias and myside bias are huge drivers impeding “societal sanity”. And I also agree that it won’t help a lot here to develop tools to refine probabilities slightly more.
That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. As other commenters say, it’s currently mostly attracting a niche of people who strive for higher epistemic ideals, who try to contribute to better forecasts on important topics, etc. I currently feel like it’s not intractable for quantitative forecasts to become more common in epistemic spaces filled with reasonable enough people (e.g. journalism, politics, academia). Kinda similar to how tracking KPIs was probably once a niche new practice and is now standard practice.