Toby Ord’s existential risk estimates in The Precipice were for risk this century (by 2100) IIRC. That book was very influential in x-risk circles around the time it came out, so I have a vague sense that people were accepting his framing and giving their own numbers, though I’m not sure quite how common that was. But these days most people talking about p(doom) probably haven’t read The Precipice, given how mainstream that phrase has become.
Also, in some classic hard-takeoff + decisive-strategic-advantage scenarios, p(doom) in the few years after AGI would be close to p(doom) in general, so these distinctions don’t matter that much. But nowadays I think people are worried about a much greater diversity of threat models.
Yeah, most of the p(doom) discussions I see seem to focus on the nearer term of 10 years or less. I believe there are quite a few people (e.g. Gary Marcus, maybe?) who operate under a framework like “current LLMs will not get to AGI, but actual AGI will probably be hard to align,” so they may give a high p(doom before 2100) and a low p(doom before 2030).