I agree with the broad argument here, but it seems substantially understated to me, for a few reasons:
A priori, I expect a Pareto distribution in the magnitude of catastrophes, such that the vast majority of the expected loss of life lies in non-existential catastrophes. AI could distort this to some degree, but I would need to be far more convinced by the inside-view arguments to believe it's going to dominate the outside-view distribution.
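To make the Pareto point concrete, here's a minimal sketch (the minimum size and tail exponent are illustrative assumptions I've made up, not estimates; note the conclusion also assumes a tail exponent above 1, since below 1 the mean is dominated by the very largest events):

```python
# Sketch: share of expected deaths coming from extinction-scale events,
# assuming deaths per catastrophe follow a Pareto(x_m, alpha) distribution.
# x_m and alpha are illustrative assumptions, not estimates.
x_m, alpha = 1e4, 1.5   # minimum catastrophe size (deaths), tail exponent > 1
T = 8e9                 # rough 'everyone dies' threshold

# For a Pareto with alpha > 1, the fraction of E[deaths] contributed by
# events larger than T works out to (T / x_m) ** (1 - alpha).
tail_share = (T / x_m) ** (1 - alpha)
print(f"share of expected deaths from extinction-scale events: {tail_share:.2%}")
# ~0.11% here, i.e. the expectation is dominated by sub-extinction catastrophes.
```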
AI development is a one-off risk: at some point it will either kill us, fix everything, or settle into a new equilibrium. Nukes and other advanced weaponry will remain a threat for the whole future of technological civilisation, not just the next century, so any risk assessment that only looks that far ahead underweights them.
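As a toy illustration of why the time horizon matters (the 0.1%/yr figure below is a made-up placeholder, not anyone's estimate):

```python
# Sketch: a persistent annual risk compounds over the whole future of
# technological civilisation; a one-off transition risk does not.
p_annual = 0.001  # hypothetical constant 0.1%/yr risk of nuclear war

for years in (100, 1_000, 10_000):
    cumulative = 1 - (1 - p_annual) ** years
    print(f"{years:>6} years: cumulative risk {cumulative:.1%}")
# 100 years ~9.5%, 1,000 years ~63.2%, 10,000 years ~100%:
# a one-century window captures only a small slice of the total risk.
```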
The source for your nuclear-war probability estimate is very ambiguous and seems to seriously understate the expected number of lives lost. The 80k article links to Luisa's post, which cites two very inconsistent estimates to which 80k's number might refer: '1) a nuclear attack by a state actor and 2) a nuclear attack by a state actor in Russia, which is 0.03%, or 0.01% per year (unpublished GJI data from Open Philanthropy Project; Apps, 2015)' (which covers a very specific subset of nuclear-war scenarios); and '[the respondents to the 2008 Global Catastrophic Risk expert survey] see the risk of extinction caused by nuclear war as … about 0.011% per year.' In any case, the overall annual risk of <a nuclear war> should clearly be much higher than the annual risk of either <a state launches a nuke and it kills at least 1 person in Russia> or <extinction by nuclear war>.
Looking at the survey in question, it looks as though the plurality of expected deaths comes from 'non-extinction from all wars', with the next-biggest expectations (depending on what 'at least 1 billion' pans out to) shared approximately equally between 'non-extinction from nanotech', 'extinction from nanotech', 'extinction from AI', 'non-extinction from biopandemic', and 'non-extinction from nuclear wars'.
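(For clarity, the comparison I'm making here is just probability × death toll per survey category. A sketch of that arithmetic with placeholder numbers, since I don't want to transcribe the survey's actual figures:)

```python
# Sketch of the expected-deaths comparison only; the probabilities and
# tolls below are PLACEHOLDERS, not the 2008 GCR survey's numbers.
categories = {
    "non-extinction from all wars":     (0.90, 1e8),   # (P by 2100, deaths)
    "non-extinction from nuclear wars": (0.10, 1e9),
    "extinction from AI":               (0.05, 8e9),
}
for name, (p, deaths) in categories.items():
    print(f"{name:34s} expected deaths ~ {p * deaths:.1e}")
# High-probability 'smaller' categories can dominate low-probability
# extinction ones once you take expectations.
```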