That’s reasonable. I guess from my perspective, I think the top EA grantmakers need persuading that p(doom|AGI) is significantly greater than 35%. If Open Phil already thinks this, then that’s great, but if they don’t (and their probabilities are similar to the Future Fund’s), then the Worldview Prize is very important. Even if your probabilities are the same, or much lower, it still has very high value of information imo.
In the survey I did last year, four Open Phil staff gave probabilities of 0.5, 0.5, 0.35, and 0.06 respectively to “the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended”.
That’s just four people, and isn’t necessarily representative of the rest of longtermist Open Phil, but it at least shows that “higher than 35%” isn’t an unrepresented view there.
Interesting, thanks. What about short timelines? (p(AGI by 2043) in Future Fund Worldview Prize terms)
Ajeya Cotra’s median guess is that AGI is 18 years away; the last time I talked to a MIRI person, their median guess was 14 years. So the Cotra and MIRI camps seem super close to me in timelines (though you can find plenty of individuals whose median year is not in the 2036-2040 range).
If you look at (e.g.) animal welfare EAs vs. AI risk EAs, I expect a much larger gap in timeline beliefs.