Thanks for this thoughtful post! I think I stand by my 1 in 10,000 estimate despite this.
A few short reasons:
Broad things: First, these scenarios and scenarios like them are highly conjunctive (many rare things need to happen), which makes any one scenario unlikely (although of course there may be many such scenarios). Second, I think these and similar scenarios are reason to think there may be a large catastrophe, but large and existential are a long way apart. (I discuss this a bit here but don’t come to a strong overall conclusion. More work on this would be great.)
On inducing nuclear war: My estimate of the direct risk of nuclear war is 1 in 10,000, and the indirect risk is 1 in 1,000. The chance that climate change causes a nuclear war, weighted by the extent to which the war was made more likely by climate change rather than by, e.g., geopolitical tensions unrelated to climate change, is subjective and difficult to judge, but probably much less than 10%. If it's, say, 1%, this gives less than 1 in 100,000 of indirect x-risk from climate change. This seems a bit small, but it is consistent with my 1 in 10,000 estimate. Note this includes inducing nuclear war in ways other than via crop failure.
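To make the back-of-the-envelope arithmetic explicit (a minimal sketch using only the illustrative figures above, not a model):

```python
# Indirect existential risk from nuclear war (my subjective estimate above).
indirect_nuclear_xrisk = 1 / 1_000

# Illustrative 1% weighting for how much of that war risk is
# attributable to climate change rather than unrelated tensions.
share_attributable_to_climate = 0.01

# Resulting indirect x-risk from climate change via nuclear war.
climate_via_nuclear = indirect_nuclear_xrisk * share_attributable_to_climate
print(climate_via_nuclear)  # 1e-05, i.e. 1 in 100,000
```

Even if the attributable share were the full 10% upper bound, the result would be 1 in 10,000, still within the headline estimate.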
On runaway warming: My understanding is that the main limit here is how many fossil fuels it's possible to recover from the ground—see more here. Even taking into account uncertainty and huge model error, it seems highly unlikely that we'll end up with runaway warming that itself leads to extinction. I'd also add that much of the reduction in risk comes from climate change being a gradual catastrophe (unlike a pandemic or nuclear war): for example, we may find other emissions-free technologies (e.g. nuclear fusion) or get over our fear of nuclear fission, reducing the risk of resource depletion. Relatedly, unless there is extremely fast runaway warming over only a few years, the gradual nature of climate change increases the chances of successful adaptation to a warmer environment. (Again, I mean adaptation sufficient to prevent an existential catastrophe—a large catastrophe that isn't quite existential seems far, far more likely.)
On coastal cities: I'd guess the existential risk from war breaking out between great powers is also around 1 in 10,000 (within an order of magnitude or so), although I've thought about this less. So again, while cyanobacteria blooms sound like a not-impossible way in which climate change could lead to war (personally I'd be more worried about flooding and migration crises in South Asia), I think this is all consistent with my 1 in 10,000 estimate.
If it helps at all, my subjective estimate of the risk from AI is probably around 1%, and approximately none of that comes from worrying about killer nanobots. I wrote about what an AI-caused existential catastrophe might actually look like here.