I’d be curious to know if there are others who have worked as hard on estimating any of these probabilities and how close their estimates are to his.
I definitely share this curiosity. In a footnote, I link to this 2008 “informal survey” that’s the closest thing I’m aware of (in the sense of being somewhat comprehensive). It’s a little hard to compare the estimate, as that was for extinction (or sub-extinction events) rather than existential catastrophe more generally, and was for before 2100 rather than before 2120. But it seems to be overall somewhat more pessimistic than Ord, though in roughly the same ballpark for “overall/total risk”, AI, and engineered pandemics at least.
Off the top of my head, I don’t know of anything comparable in terms of effort, except individual AI researchers estimating the risks from AI, or specific types of AI catastrophe—nothing broader. Or maybe a couple of 80k problem profiles. And I haven’t seen these collected anywhere—I think it could be cool if someone did that (and made sure the collection prominently warned against anchoring etc.).
A related and interesting question would be “If we do find past or future estimates based on as much hard work, and find that they’re similar to Ord’s, what do we make of this observation?” It could be taken as strengthening the case for those estimates being “about right”. But it could also be evidence of anchoring or information cascades. We’d want to know how independent the estimates were. (It’s worth noting that the 2008 survey was from FHI, where Ord works.)
Update: I’m now creating this sort of collection of estimates, partly inspired by this comment thread (so thanks, MichaelStJules!). I’m not yet sure whether I’ll publish it; my guess is that collecting a diversity of views together would reduce rather than exacerbate information cascades and the like, but I’m not certain. I’m also not sure when I’d publish, if I do.
But I think the answers are “probably” and “within a few weeks”.
If anyone happens to know of something like this that already exists, and/or has thoughts on whether publishing something like this would be valuable or detrimental, please let me know :)
Update #2: This turned into a database of existential risk estimates, and a post with some broader discussion of the idea of making, using, and collecting such estimates. And it’s now posted.
So thanks for (probably accidentally) prompting this!