Thanks for the post, this is definitely a valuable framing.
But I’m a bit concerned that the post creates a misleading impression that the whole catastrophic/speculative risk field is completely overwhelmed by AI x-risk.
Assuming you don’t believe that other catastrophic risks are completely negligible compared to AI x-risk, I’d recommend adding a caveat that this is only comparing AI x-risk and existing/non-speculative risks. If you do think AI x-risk overwhelms other catastrophic risks, you should probably mention that too.
Wow, lots of disagreement points; I’m curious what people disagree with.
I didn’t vote on your comment on either scale, but FWIW my guess is that the disagreement is due to quite a few people having the view that AI x-risk does swamp everything else.
I suspected that, but it didn’t seem very logical to me. AI may swamp total x-risk, but it seems unlikely to swamp our overall chances of dying young, especially if we use the model in the piece.
Although he says that he’s more pessimistic on AI than his model suggests, the estimates he actually uses in the model are well within the range where adding other catastrophic risks would seriously change the results.
I did a rough estimate of nuclear war vs. natural risk (using his very useful spreadsheet, and loosely based on Rodriguez’s estimates): a 0.39% annual chance of a US-Russia nuclear exchange, and a 50% chance of a Brit dying in one. I know some EAs have made much lower estimates, but this seems in line with the general consensus. In this model, nuclear risk comes out a bit higher than ‘natural’ risk over 30 years.
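For anyone who wants to sanity-check that comparison, here’s a minimal Python sketch of the arithmetic, not the spreadsheet itself. It assumes the per-year risk is constant and independent across years (a simplification the spreadsheet may not make), and uses the rough 0.39% and 50% figures above:

```python
# Back-of-the-envelope check of the nuclear numbers above.
# Assumption: constant, independent annual risk (a simplification).
p_exchange_per_year = 0.0039    # ~0.39% annual chance of a US-Russia nuclear exchange
p_death_given_exchange = 0.5    # assumed 50% chance a Brit dies given an exchange
years = 30

p_death_per_year = p_exchange_per_year * p_death_given_exchange
p_death_30y = 1 - (1 - p_death_per_year) ** years
print(f"Cumulative {years}-year risk of dying in a nuclear exchange: {p_death_30y:.1%}")
# ~5.7% under these inputs; compare against the model's 'natural' risk of
# dying young over the same period.
```

The exact comparison obviously depends on the ‘natural’ baseline the spreadsheet uses, but the point is that the nuclear figure is at least of the same order of magnitude.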
Even if you’re particularly optimistic about other GCRs, if you add all the other potential catastrophic/speculative risks together (pandemics, non-existential AI risk, nuclear, nano, other), I can’t imagine them not shifting the model.
Huh, I appreciate you actually putting numbers on this! I was surprised at the nuclear risk numbers being remotely competitive with natural causes (let alone significantly dominating over the next 20 years), and I take this as at least a mild downward update on AI dominating all other risks (on a purely personal level). Probably I had incorrect cached thoughts from people exclusively discussing extinction risk rather than catastrophic risks in general, but from a purely personal perspective this distinction matters much less.
EDIT: Added a caveat to the post accordingly
I thought it might be that people simply didn’t find the chart misleading: they thought it was clear enough and didn’t need any more caveats.