I have not read much of Tetlock’s research, so I could be mistaken, but isn’t the evidence for Tetlock-style forecasting only for (at best) short- to medium-term forecasts? Over this timescale, I would’ve expected forecasting to be very useful for non-EA actors, so the central puzzle remains. Indeed, if there is no evidence for long-term forecasting, then wouldn’t one expect non-EA actors (who place less importance on the long term) to be at least as likely as EAs to use this style of forecasting?
Of course, it would be hard to gather evidence for forecasting working well over longer (say, 10+ year) horizons, so perhaps I’m expecting too much evidence. But it’s not clear to me that we have strong theoretical reasons to think this style of forecasting would work particularly well over such horizons, given how “cloud-like” predicting events over long time horizons is, and given that further extrapolation leaves more room for bias.
It seems that your original comment no longer holds under this version of “1% better”, no? In what way does being 1% better at each of these skills translate to being 30x better over a year? And how do we even aggregate these 1% improvements under the new definition?
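For concreteness, here is a sketch of where a “30x over a year” figure presumably comes from: it assumes daily *multiplicative* compounding of a single quantity. Under an additive reading (or when the improvements apply to many separate skills rather than one compounding ability), the arithmetic gives a much smaller number. The specific numbers below are illustrative assumptions, not anything from the original claim:

```python
# Multiplicative reading: each day you become 1% better than you were
# the previous day, so improvements compound.
multiplicative = 1.01 ** 365   # roughly 37.8x after a year

# Additive reading: each day adds a flat 1% of your *starting* ability,
# so improvements merely accumulate.
additive = 1 + 0.01 * 365      # 4.65x after a year

print(multiplicative, additive)
```

The gap between the two readings is exactly why it matters how the 1% improvements are supposed to aggregate.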
Anyway, even under this definition it seems hard to keep finding skills that one can easily get 1% better at within a single day. At some point you would probably run into diminishing returns across skills: the “low-hanging fruit” of skills you can improve at easily will have been picked.