With nukes, I do share the view that they could plausibly kill everyone. If there's a nuclear war, followed by nuclear winter, and everyone dies during that winter, rather than most people dying and then the rest succumbing 10 years later from something else or never recovering, I'd count that as nuclear war causing 100% of deaths.
My point was instead that that really couldn't have happened in 1945. There was one nuke, then a couple of explosions, then gradually more nukes and test explosions, etc., before there was a present risk of 100% of people dying from this source. So we did see something like "mini-versions" (Hiroshima and Nagasaki, test explosions, the Cuban Missile Crisis) before we saw 100% (which indeed, we still haven't and hopefully won't).
With climate change, we're already seeing mini-versions. I do think it's plausible that there could be a relatively sudden jump due to amplifying feedback loops. But "relatively sudden" might mean over months or years or something like that. And it wouldn't be a total bolt from the blue in any case: the damage is already accruing and increasing, and likely would keep doing so in the lead-up to such tail risks.
AI, physics risks, and nanotech are all plausible cases where there'd be a sudden jump. And I'm very concerned about AI and somewhat about nanotech. But note that we don't actually have clear evidence that those things could cause such sudden jumps. I obviously don't think we should wait for such evidence, because if it came we'd be dead. But it just seems worth remembering that before using "Hypothesis X predicts no sudden jump in destruction from Y" as an argument against hypothesis X.
Also, as I mentioned in my other comment, I'm now thinking maybe the best way to look at this is that specific arguments in the cases of AI, physics risks, and nanotech update us away from the generally useful prior that we'll see small versions of things before we see extreme versions of the same things.