I'd be curious to know if there are others who have worked as hard on estimating any of these probabilities and how close their estimates are to his.
I definitely share this curiosity. In a footnote, I link to this 2008 "informal survey", which is the closest thing I'm aware of (in the sense of being somewhat comprehensive). It's a little hard to compare the estimates, as that survey was about extinction (or sub-extinction events) rather than existential catastrophe more generally, and covered the period before 2100 rather than before 2120. But it seems to be overall somewhat more pessimistic than Ord, though in roughly the same ballpark for "overall/total risk", AI, and engineered pandemics at least.
Off the top of my head, I don't know of anything comparable in terms of the amount of effort involved, except individual AI researchers estimating the risks from AI, or from specific types of AI catastrophe; nothing broader. Or maybe a couple of 80k problem profiles. And I haven't seen these collected anywhere; I think it could be cool if someone did that (and made sure the collection prominently warned against anchoring etc.).
A related and interesting question would be "If we do find past or future estimates based on as much hard work, and find that they're similar to Ord's, what do we make of this observation?" It could be taken as strengthening the case for those estimates being "about right". But it could also be evidence of anchoring or information cascades. We'd want to know how independent the estimates were. (It's worth noting that the 2008 survey was from FHI, where Ord works.)
Update: I'm now creating this sort of collection of estimates, partly inspired by this comment thread (so thanks, MichaelStJules!). I'm not yet sure if I'll publish it; I think collecting a diversity of views together will reduce rather than exacerbate information cascades and the like, but I'm not certain. I'm also not sure when I'd publish, if I do publish.
But I think the answers are "probably" and "within a few weeks".
If anyone happens to know of something like this that already exists, and/or has thoughts on whether publishing something like this would be valuable or detrimental, please let me know :)
Update #2: This turned into a database of existential risk estimates, and a post with some broader discussion of the idea of making, using, and collecting such estimates. And it's now posted.
So thanks for (probably accidentally) prompting this!