Clay Graubard pointed out that Tetlock initially answered skeptics’ suspicions by noting that there was a “goldilocks zone” of forecasts less than a few years out for which we have good past data and good information, and that forecasting was meaningfully better within that goldilocks zone. But existential risk seems like a pretty different beast, and pretty far from that goldilocks zone. [Emphasis added]
You’ve won, Nuño. I made an account. My comment on the Google Doc was (slightly edited):
So, this is part of the issue.
I think it’s better to say we have good data and good explanatory theory (or ontological priors) within the Goldilocks Zone, while also having a manageable degree of system effects. This could be described as being in quadrant 1 (ideally), 2, or 3 in Lustick and Tetlock’s Simulation Manifesto.
The issue with x-risk forecasting is not just a lack of theory or data, but compounding and worsening (ultimately debilitating) system effects.
These are not only very difficult to untangle, but even if we could untangle them, their complexity and interconnectedness make small errors fatal. Remember how the basic fact that state polling errors could be dependent rather than independent made the difference between 538’s election-night forecast of 71.4% and HuffPo’s 98% (a Brier improvement of ~50%)? Now imagine that effect compounded a few times over. The difference between an 85% forecast and a 10% forecast becomes effectively zero.
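A quick check of that “~50%” figure, assuming it refers to the standard binary Brier score applied to the two probabilities cited (71.4% and 98% for a Clinton win, which did not happen); a minimal sketch:

```python
def brier(forecast: float, outcome: int) -> float:
    """Binary Brier score: (forecast - outcome)^2; 0 is perfect, 1 is maximally wrong."""
    return (forecast - outcome) ** 2

# Probabilities cited above, scored on "Clinton wins" (outcome = 0 in 2016).
fivethirtyeight = brier(0.714, 0)  # ~0.51
huffpo = brier(0.98, 0)            # ~0.96

improvement = 1 - fivethirtyeight / huffpo
print(f"538: {fivethirtyeight:.2f}, HuffPo: {huffpo:.2f}, relative improvement: {improvement:.0%}")
# -> 538: 0.51, HuffPo: 0.96, relative improvement: 47%
```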
The question I asked Tetlock was broadly:
What do you think Robert Jervis’ response would be to your efforts forecasting long-term existential risk, and how would you respond to the argument that you have (at best prematurely) left the Goldilocks Zone (which, per your slides, you admit) and entered a world of debilitating (rather than manageable) system effects, perhaps reneging on the implicit bargain struck in your 2012 article responding to Jervis’ System Effects (+ 2012 follow-up)?
Will add other thoughts and comments later.
Muahahahaha
To answer this:

“Now imagine that effect compounded a few times over. The difference between an 85% forecast and a 10% forecast becomes effectively zero.”
I think that my answer is something like: Yeah, the difference between a 10% and an 85% becomes zero, but if this is e.g., existential risk per century, the difference doesn’t really matter at the current margin, and both probabilities indicate that we should take many of the same actions.
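One way to see the “same actions at the current margin” point: under either probability, a cheap intervention that removes even a small slice of the risk looks extremely cost-effective, so the funding decision doesn’t change. A minimal sketch with entirely hypothetical numbers (the population figure, program cost, risk reduction, and benchmark are illustrative assumptions, not from the thread):

```python
# Illustrative only: every number below is a hypothetical assumption,
# chosen to show the decision is insensitive to whether the risk is 10% or 85%.

POPULATION = 8e9           # lives at stake this century (ignoring future generations)
INTERVENTION_COST = 1e9    # hypothetical cost of a risk-reduction program, in dollars
RELATIVE_RISK_CUT = 0.01   # assume the program removes 1% of whatever the risk is
BENCHMARK = 5_000          # a rough "dollars per life saved" bar for comparison

for p_xrisk in (0.10, 0.85):
    expected_lives_saved = p_xrisk * RELATIVE_RISK_CUT * POPULATION
    cost_per_life = INTERVENTION_COST / expected_lives_saved
    verdict = "worth funding" if cost_per_life < BENCHMARK else "not worth funding"
    print(f"P(x-risk) = {p_xrisk:.0%}: ${cost_per_life:,.0f} per expected life saved -> {verdict}")

# Both cases come out far under the benchmark, so the action at the margin is the same.
```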