Because, to a first approximation, most of the leading EA grantmakers have the same views as Beckstead on this (indeed, Beckstead was in charge of longtermist grantmaking at Open Phil before the Future Fund).
Could you quantify “to a first approximation”? My sense is that whether this claim is true turns crucially on how much you’re approximating.
Maybe the majority of the top 5 grantmakers by the size of the pot they control? The mainstream view amongst the largest grantmakers seems to be that doom won’t happen by default (following e.g. Carlsmith’s report), whereas I have the opposite intuition (as do you, I think).
I don’t think EA longtermist grantmakers tend to have p(doom) as low as the numbers in Joe Carlsmith’s report. The thing I wanted you to quantify was “which of Nick Beckstead’s views are we talking about, and what range of probabilities do you think longtermist EA grantmakers tend to have?”
E.g., my guess would have been that Future Fund staff were a lot more AI-risk-skeptical than longtermist Open Phil staff on average. But if you meant to be making a very weak claim, like “most leading GCR EA grantmakers think p(doom) from AI is below 90%”, then I would agree with you.
Interesting. My prior was that Open Phil and the Future Fund had a similar level of AI-risk-skepticism. But I guess Holden and Ajeya at least seem to have updated toward more urgency recently.