There are many object-level lines of evidence to discuss, but this is not the place for great detail (I recommend Nick Bostrom’s forthcoming book). One of the most information-dense is that surveys sent to the top 100 most-cited individuals in AI (identified using Microsoft’s academic search tool) yielded a median timeline estimate comfortably within this century, with substantial probability assigned to the next few decades. The results were presented at the Philosophy and Theory of AI conference earlier this year and are on their way to publication.
Expert opinion is not terribly reliable on such questions, and we should probably widen our confidence intervals (extensive research shows that naive individuals give overly narrow intervals), assigning more weight than the experts do to AI arriving surprisingly soon or surprisingly late. We might also try to correct for a possible optimistic bias, which would push towards shorter timelines and lower risk estimates.
The surveyed experts also assigned credences to very bad or existentially catastrophic outcomes that, taken literally, would suggest that AI poses the largest existential risk (although some respondents may have interpreted the question to include comparatively lesser harms).
Extinction-level asteroid impacts, volcanic eruptions, and other natural catastrophes are relatively well characterized and, based on the empirical record of past events, pose extremely low annual risk. GiveWell’s shallow analysis pages discuss several of these, and the edited volume “Global Catastrophic Risks” has more on these and others.
Climate scientists and the IPCC have characterized conditions threatening human extinction as very unlikely even conditional on nuclear winter or severe continued carbon emissions; these scenarios are far more likely to cause large economic losses and deaths than to permanently disrupt human civilization.
Advancing biotechnology may eventually allow large and well-resourced biowarfare programs to engineer diseases intended to cause human extinction, making this an existential threat, although there is a very large gap between the difficulty of creating a catastrophic pathogen and that of creating a civilization-ending one.
An FHI survey of experts at an Oxford Global Catastrophic Risks conference asked participants to assign credences to various levels of harm from different sources over the 21st century, including more than 1 billion deaths and human extinction. Median estimates assigned greater credence to human extinction from AI than from conventional threats such as nuclear war or engineered pandemics, but greater credence to casualties of at least 1 billion from those conventional threats.
So AI’s relative importance is greater in terms of existential risk than in terms of global catastrophic risk, but it seems at least comparable to the other threats in the latter area as well.