It seemed to me like the way the prize was presented and constructed was aimed specifically at changing Nick Beckstead’s views, without much consideration given to being universally convincing. Given that he’s stepped down from the Future Fund, why do you think the prize is critical?
Because, to a first approximation, most of the leading EA grantmakers have the same views as Beckstead on this (indeed, Beckstead was in charge of longtermist grantmaking at OpenPhil before the Future Fund).
Could you quantify “to a first approximation”? My sense is that this claim’s truth crucially turns on how much you’re approximating.
Maybe the majority of the top 5 grantmakers by size of pot they control? The mainstream view amongst the largest grantmakers seems to be that doom won’t happen by default (following e.g. Carlsmith’s report), whereas I share the opposite intuition (as do you I think).
I don’t think EA longtermist grantmakers tend to have p(doom) as low as the numbers in Joe Carlsmith’s report. The thing I wanted you to quantify was “which of Nick Beckstead’s views are we talking about, and what range of probabilities do you think longtermist EA grantmakers tend to have?”.
E.g., my guess would have been that Future Fund staff were a lot more AI-risk-skeptical than longtermist Open Phil staff on average. But if you meant to be making a very weak claim, like “most leading GCR EA grantmakers think p(doom) from AI is below 90%”, then I would agree with you.
Interesting. My prior was that OpenPhil and FF had a similar level of AI-risk-skepticism. But I guess Holden and Ajeya at least seem to have updated toward more urgency recently.
I think this subthread about one person’s beliefs is not that important and may be a distraction.
While spending that amount of money now would probably be bad, this prize is clearly about informing everyone in EA, and it would have had a lot of value for cause prioritization and truth-seeking. That would be valuable to the movement.