Examining pathways through which narrow AI systems might increase the likelihood of nuclear war

This is a linkpost for rethinkpriorities.org/longtermism-research-notes/examining-pathways-through-which-narrow-ai-systems-might-increase-the-likelihood-of-nuclear-war

The goal of this post is to sketch out potential research projects on the possible pathways through which narrow AI systems might increase the likelihood of nuclear war.[1]

The post is mostly based on reading that I did in February-March 2022. I wrote up the basic ideas in August 2022 and made quick edits in May 2023, largely in response to comments I received on an unpublished draft. It is now very unlikely that I will do more research based on the ideas here, so I am sharing the sketch in the hope that it might be helpful to other people doing research to reduce risks from nuclear weapons and/or AI.[2]

Since my research here is mostly a year old, I unfortunately do not engage with the newest work on the topic.[3] Indeed, I expect that some or many of the points I make here have already been made in recently published work by others. I'm sharing my notes anyway, since doing so is quick and they may still add value to the existing literature — e.g. by highlighting potential future research projects.

Please note that this is a blog post, not a research report, meaning that it was produced quickly and does not meet Rethink Priorities' typical standards of substantiveness, careful checking for accuracy, and reasoning transparency.

Please also note that many topics in nuclear security are sensitive, so people working in this area should be cautious about what they publish. Before publishing, I ran this post by a couple of people whom I expect to be knowledgeable and careful about infohazards related to nuclear security. If you do research based on this post and are not sure whether your work would be fine to share, please feel free to reach out to me and I'd be happy to give some thoughts.

Summary

The existing literature discusses four main possible pathways by which narrow AI systems might increase nuclear risk. I consider each in turn:

  1. How much additional nuclear risk is there from reduced second-strike capability? This refers to narrow AI systems reducing a country's ability to retaliate with a "second" nuclear strike after someone else launches a "first" nuclear strike against it. This might increase the likelihood of nuclear war by weakening nuclear deterrence.

  2. How much risk from delegating nuclear launch authority? In this pathway, decisions about launching nuclear weapons are (partly) delegated to AI. Intrinsic weaknesses of current AI approaches — in particular brittleness — mean that the AI may launch when we would not want it to.[4]

  3. How much risk from a degraded information environment? In this pathway, narrow AI technologies reduce leaders' ability to make informed decisions about nuclear launches. This may increase the likelihood that leaders launch nuclear weapons when they would not do so if they had a full picture of what's going on.

  4. How much risk from lethal autonomous weapons (LAWs)? In this pathway, an incident involving conventionally armed lethal autonomous weapons escalates to nuclear war. Alternatively, LAWs change the balance of conventional military power, making countries that are left much weaker more tempted to resort to nuclear weapons.

For each pathway, I list some questions that would need to be answered to understand how plausible and how large the effect is. I sometimes also give brief arguments for why I think the expected effect is overrated (at least by the people who propose it).

Click here for the full version of this post on the Rethink Priorities website.

Acknowledgments

This blog post is a project of Rethink Priorities. It was written by Oliver Guest.

Thank you to Matthijs Maas for originally pointing me towards a lot of helpful existing work, and to Patrick Levermore for feedback on an earlier draft of this post. They do not necessarily endorse the points that I make here.

If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.

  1. ^

    The project could be expanded to include ways in which narrow AI systems might decrease the likelihood of nuclear war. See, for example, this journal article.

  2. ^

    Because I am sharing quickly, this post unfortunately has more jargon and unreferenced empirical claims than I would like.

  3. ^

    Recent research that I am aware of, but have unfortunately not had much opportunity to engage with, includes:

    Maas, M. M., Matteucci, K., & Cooke, D. (2022). Military Artificial Intelligence as Contributor to Global Catastrophic Risk. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4115010 (Forum post here)

    Nadibaidze, A., & Miotto, N. (2023). The Impact of AI on Strategic Stability is What States Make of It: Comparing US and Russian Discourses. Journal for Peace and Nuclear Disarmament, 1–21. https://doi.org/10.1080/25751654.2023.2205552

    Rautenbach, P. (2022, November 18). Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration. EA Forum. https://forum.effectivealtruism.org/posts/BGFk3fZF36i7kpwWM/artificial-intelligence-and-nuclear-command-control-and-1

    Ruhl, C. (2022). Autonomous weapons systems & military AI. Founders Pledge. https://founderspledge.com/stories/autonomous-weapon-systems-and-military-artificial-intelligence-ai (Forum post here)

  4. ^

    I use “brittleness” to mean an AI system’s inability to adapt to novel situations or handle edge cases it has not encountered during training.