Again, there doesn’t seem to be a strong reason to think there’s an upper bound on the number of people that could be killed in a war featuring widespread deployment of AI commanders or lethal autonomous weapons systems.[17]
So on technological grounds, at least, there seem to be no strong reasons to think that the distribution of war outcomes stops short of the level of human extinction.
Sounds right!
This made me realise that my post is confusing/misleading in a particular way—because of the context of the 80,000 Hours problem profiles page, I was thinking of the question as “what’s the leftover x-risk from conflict once you aren’t considering AI, bio, or nukes (since those have their own problem profiles)?” But that context is much stronger in my head than in readers’, and should be made explicit.
I guess AI-as-a-weapon should perhaps also fall into the great power conflict bucket, since it’s not discussed much in the AI profile.
Thanks Arden, that makes sense. I think it will be hard to separate “x-risk from conventional war” from “x-risk from war fought with WMDs and autonomous weapons” because pacifying interventions like improving US-China relations would seem to reduce both those risks simultaneously.