My impression is that this is on the low end, relative to estimates that other people in the long-termist AI safety/governance community would give, but that it’s not uniquely low.
Your estimate is the second lowest one I’ve come across; the only lower one came from someone (James Fodor) who I don’t think is in the longtermist AI safety/governance community (though they’re an EA and engage with longtermist thinking). But I’m only counting the relatively small number of explicit, public estimates people have given, not all the estimates that relevant people would give, so I’d guess your statement is accurate.
(Also, to be clear, I don’t mean to imply we should be more skeptical of estimates that “stand out from the pack” than of those that are closer to other estimates.)
I’m curious whether most of that 0.1–1% probability mass is on existential catastrophe via something like the classic Bostrom/Yudkowsky-type scenario, vs something like what Christiano describes in “What failure looks like”, vs deliberate misuse of AI, vs something else. E.g., do you still see the classic scenarios as the biggest cause for concern here? Or do you now see those scenarios as extremely unlikely, yet have a residual sense that something as transformative as AGI could somehow cause massive bad consequences?
Thanks for sharing your probability estimate; I’ve now added it to my database of existential risk estimates.