My personal take is that the numbers used are too low, and this matches my sense of the median AI safety researcher's opinion. My own rough guess would be 25% x-risk conditional on building AGI, with median AGI by 2040, which pushes the probability of dying from AI well above the probability of dying from natural causes.
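For concreteness, here is a minimal back-of-envelope sketch of how those guesses could translate into a personal risk figure. Every number in it is an illustrative assumption of mine (the ~30-year AGI probability implied by a 2040 median, treating an extinction-level catastrophe as killing everyone, and the placeholder 30-year natural-mortality figure), not something taken from the post or the report:

```python
# Back-of-envelope comparison: P(die from AI in ~30 years) vs. natural causes.
# All inputs are illustrative assumptions, not figures from the post or report.

p_agi_by_2055 = 0.6              # assumed ~30-year probability, given "median AGI by 2040"
p_xrisk_given_agi = 0.25         # the 25% x-risk conditional on AGI guessed above
p_die_given_catastrophe = 1.0    # assume an extinction-level catastrophe kills everyone

p_death_from_ai = p_agi_by_2055 * p_xrisk_given_agi * p_die_given_catastrophe

# Placeholder 30-year all-cause mortality for a younger adult; substitute a
# real life-table figure for your own age for a serious comparison.
p_death_natural_30yr = 0.10

print(f"P(die from AI in ~30 years)        ≈ {p_death_from_ai:.0%}")
print(f"P(die of natural causes, 30 years) ≈ {p_death_natural_30yr:.0%}")
```

Under these assumptions the AI figure comes out around 15%, i.e. above the placeholder natural-causes figure; the point is only that the comparison is sensitive to the AGI-timeline and conditional-risk inputs, not that these particular numbers are right.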
Good post!
I agree that your risk of dying from misaligned AI in an extinction event in the next 30 years is much more than 3.7%. Actually, Carlsmith would too: he more than doubled his credence in AI existential catastrophe by 2070 since sharing the report (see, e.g., the "author's note" at the end of the arXiv abstract).
(Edit: modulo mic’s observation, but still.)
Current AI projects will easily accelerate present trends toward a far higher likelihood of existential catastrophe. The risk is multiplied by the many uncoordinated AI projects around the world and their experimental applications: genetic engineering in less fastidiously scientific jurisdictions, but also the many social, political, and military uses of misaligned AI. AI safety work may be well-intentioned, but it will be irrelevant, since these genies won't be put back into their safety bottles. Optimistically, since we have survived our existential risks for quite some time, we may yet find a way to survive the Great Filter challenge posed by Fermi's Paradox.