Thanks, I think that’s a good question. Some (overlapping) reasons that come to mind that I give some credence to:
a) relevant markets are simply making an error in neglecting quantified forecasts
e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic potential
I personally find this plausible when looking, e.g., at the quality of think tank reports, which seems significantly curtailed by the number of vague propositions that would be much more useful if made concrete and quantified
b) relevant players already train the relevant skills into their employees well enough themselves (e.g. that’s my fairly uninformed impression of what Jane Street is doing, and maybe also Bridgewater?)
c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be pinned down to a number if you’re not practiced in it
d) forecasting is a nerdy practice, and nerdy practices need bigger, more visible wins to be adopted (maybe similar to learning programming/math/statistics, working with the internet, etc.)
e) more systematically, I’m thinking that it’s often not in the interest of entrenched powers to have forecasters call BS on whatever they’re doing:
in corporate hierarchies people in power prefer the existing credentialism, and oppose new dimensions of competition
in other arenas there seems to be a constant risk of forecasters raining on your parade
f) maybe previous forecast-like practices (“futures studies”, “scenario planning”) didn’t yield many benefits and made companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I’ve seen associated with these terms)