Matthijs Maas
Senior Research Fellow, Institute for Law & AI
Research Affiliate, Leverhulme Centre for the Future of Intelligence, University of Cambridge.
https://www.matthijsmaas.com/ | https://linktr.ee/matthijsmaas
Some of these (hazard, vulnerability, exposure) are discussed in the context of x-risks in this typology: https://www.sciencedirect.com/science/article/abs/pii/S0016328717301623 [open-access at https://www.researchgate.net/publication/324688255_Governing_Boring_Apocalypses_A_New_Typology_of_Existential_Vulnerabilities_and_Exposures_for_Existential_Risk_Research ]
NC3 early warning systems are susceptible to error signals, and the chain of command hasn’t always been very secure (and may not be today), so it wouldn’t necessarily be that hard for a relatively unsophisticated AGI to spoof signals and trigger a nuclear war:* certainly easier than many other avenues, which would involve cracking hard scientific problems.
(*This is distinct from hacking to the level of “controlling” the arsenal and being able to retarget it at will, which would probably require a more advanced capability; at that level, the risk from the nuclear avenue might well be redundant compared to the risks from other, more direct avenues.)
Incidentally, at CSER I’ve been working with co-authors on a draft chapter that explores “military AI as cause or compounder of global catastrophic risk”, and one of the avenues we discuss involves what we call “weapons/arsenal overhang”, so this is an interesting topic that I’d love to discuss further.