Hey Sam, thanks for this. I always appreciate the critical, reflective perspective you bring to these discussions. It’s really valuable. I think you’re right that we should consider the failure modes to which we’re vulnerable and consider adopting useful tools from other communities.
I think perhaps it’s a bit premature to dismiss the value of probabilistic predictions and forecasting. One thing missing from this post is discussion of Tetlock’s Expert Political Judgment work. Through the ’90s and ’00s, Tetlockian forecasters went head-to-head against analysts from the US military and intelligence communities and kicked their butts. I think you’re right that, despite this, forecasting hasn’t taken over strategic decision-making in these communities. But as far as I know Tetlock has continued to work with and receive funding from intelligence projects, so it seems the intelligence community sees some value in these methods.
I’d also agree with other commenters that I’m not sure longtermist grants rely that heavily on expected value calculations. The links you provide, e.g. to David Roodman’s recent paper for Open Phil, don’t seem to support this. Roodman’s paper, for example, reads more as a test of whether explosive economic acceleration this century is plausible from a historical perspective than as an attempt to estimate the value of the future. In fact, since Roodman’s model finds that growth tends to infinity by 2047, it isn’t actually helpful for estimating the value of the future.
Instead, it seems to me that most longtermist grantmaking these days relies more on crucial considerations-type analysis that weighs the strength of a project’s causal connection to the long-term future (e.g. reducing existential risk).
P.S. If you ever feel that you’re struggling to get your point across I’d be happy to provide light edits on these posts before they go public—just message me here or email me at work (stephen [at] founderspledge [dot] com)