I think the COVID case usefully illustrates a broader issue with how “EA/rationalist prediction success” narratives are often deployed.
That said, this is exactly why I’d like to see similar audits applied to other domains where prediction success is often asserted but rarely examined with much nuance. In particular: crypto, prediction markets, LVT, and more recently GPT-3 / scaling-based AI progress. I wasn’t closely following these discussions at the time, so I’m genuinely uncertain about (i) what was actually claimed ex ante, (ii) how specific those claims were, and (iii) how distinctive they were relative to non-EA communities.
This matters to me for two reasons.
First, many of these claims are invoked rhetorically rather than analytically. “EAs predicted X” is often treated as a unitary credential, when in reality predictive success varies a lot by domain, level of abstraction, and comparison class. Without disaggregation, it’s hard to tell whether we’re looking at genuine epistemic advantage, selective memory, or post-hoc narrative construction.
Second, these track-record arguments are sometimes used—explicitly or implicitly—to bolster the case for concern about AI risks. If the evidential support here rests on past forecasting success, then the strength of that support depends on how well those earlier cases actually hold up under scrutiny. If the success was mostly at the level of identifying broad structural risks (e.g. incentives, tail risks, coordination failures), that’s a very different kind of evidence than being right about timelines, concrete outcomes, or specific mechanisms.