Great Power Conflict

No longer endorsed.

Imagine it’s 2030 or 2040 and there’s a catastrophic great power conflict. What caused it? Probably AI and emerging technology, directly or indirectly. But how?

I’ve found almost nothing written on this. In particular, the 80K and EA Forum pages on this topic don’t seem to link to relevant work. If you know of work on how AI might cause great power conflict, please let me know. For now, I’ll start brainstorming. Specifically:

  1. How could great power conflict affect the long-term future? (I am very uncertain.)

  2. What could cause great power conflict? (I list some possible scenarios.[1])

  3. What factors increase the risk of those scenarios? (I list some plausible factors.)

Epistemic status: brainstorm; not sure about framing or details.

I. Effects

Alternative formulations are encouraged; thinking about a risk from multiple perspectives can highlight different aspects of it. But here’s how I think of this risk:

Emerging technology enables one or more powerful actors (presumably states) to produce civilization-devastating harms, and they do so (either because they are incentivized to or because their decisionmaking processes fail to respond to their incentives).[2]

Significant (in expectation) effects of great power conflict on the long-term future include:

  • Risk of human extinction

  • Risk of civilizational collapse

  • Effects on states’ relative power

  • Other effects on the time until superintelligence and the environment in which we achieve superintelligence

Human extinction would be bad. Civilizational collapse would be prima facie bad, but its long-term consequences are very unclear. Effects on relative power are difficult to evaluate in advance. Overall, the long-term consequences of great power conflict are difficult to evaluate because it is unclear what technological progress and AI safety look like in a post-collapse world or in a post-conflict, no-collapse world.

Current military capabilities don’t seem to pose a direct existential risk. More concerning for the long-term future are future military technologies and the side effects of conflict, such as its effects on AI development.

II. Causes

How could AI and the technology it enables lead to great power conflict? Here are the scenarios that I imagine, for great powers called “Albania” and “Botswana”:

  • Intentional conflict due to bilateral tension. In each of these scenarios, international hostility and fear are greater than in 2021, and domestic politics and international relations are more confusing and chaotic.

    • Preventive attack. Albania thinks that Botswana will soon become much more powerful and that this would be very bad. Calculating that it can win (or accepting a large chance of devastation rather than simply letting Botswana get ahead), Albania attacks preventively.

    • Seizing opportunity. An arms race is in progress. Albania thinks it has an opportunity to get ahead. Albania attempts to strike or sabotage Botswana’s AI program or its military. Albania does not disable Botswana’s military (either because its attempt fails or because it assumed Botswana would not launch a major counterattack anyway). Botswana retaliates.

    • Diplomatic breakdown. Albania makes a demand or draws a line in the sand (legitimately, from its perspective). Botswana ignores it (legitimately, from its perspective). Albania attacks. Possible demands include, among others: stop building huge AI systems (and submit to external verification), or stop developing technology that threatens a safe first strike (and submit to external verification).

  • Intentional conflict due to a single state’s domestic political forces. These scenarios are currently difficult to imagine among great powers. But some researchers are worried about polarization and epistemic decline in the near future, which could increase this risk.

    • Ambition. Albania hopes to dominate other states. Albania attacks.

    • Hatred. A substantial faction of Albanians despises Botswana, and the Albanian government’s decisionmaking process empowers that faction. Albania attacks.

    • Blame. Albania suffers an attack, leak, security breach, or embarrassment from one or more malcontents/spies/saboteurs/assassins/terrorists. Albania incorrectly blames Botswana (for rational reasons, for political convenience, or just due to bad epistemics). Albania attacks.

  • Intentional conflict due to multi-agent forces. This scenario is currently difficult to imagine. But perhaps crazy stuff happens when power increases, relative power is unstable, technology confuses states, and memetic chaos reigns. Roughly, I imagine a multi-agent failure scenario like this:

    • Offense outpaces defense. New technologies are leaked, are developed independently by many states, or cannot be kept secret. The capability to devastate civilization, which in 2021 was restricted to the major nuclear states, is held by many states. Even if none are malevolent, all are afraid, and domestic political forces (which are more chaotic than they were in 2021) make one or two states do crazy stuff.

  • An accident. “If the Earth is destroyed, it will probably be by mistake.”[3]

    • Automatic counterattacks. AI, AI-enabled military technology, and the prospect of future advances foster chaos and uncertainty. International tension increases in general, and tension between Albania and Botswana increases in particular. Offensive capabilities increase and are on hair trigger.[4] Eventually there’s an accident, miscommunication, glitch, or some anomaly resulting from multiple complex systems interacting faster than humans can understand. Albania automatically launches a “counterattack.”

III. Risk factors

Great power conflict is generally bad, and we can list high-level scenarios to avoid, such as those in the previous section. But what can we do more specifically to prevent great power conflict?

Off the top of my head, risk factors for the above scenarios include:

  • International cooperation/trust/unity/comity decreases (in general or between particular great powers)[5]

  • Fear about other states’ capabilities and goals increases (in general or between particular great powers)

  • Chaos increases

  • States’ relative power is in flux and uncertain

  • There is conflict (that could escalate), especially international violence or conquest, especially involving a great power (e.g., a great power annexes territory, or there is a proxy war)

  • More states acquire devastating offensive capabilities that outstrip any defensive capabilities (this needs nuance but is prima facie a risk factor)[6]

It also matters what and how regular people and political elites think about AI and emerging technology. Spreading better memes may be generally more tractable than reducing the risk factors above, because it’s pulling the rope sideways, although the benefits of better memes are limited.

Finally, the same forces from emerging technology, international relations, and beliefs and modes of thinking about AI that affect great power conflict will also affect:

  • How quickly superintelligence is developed

  • The extent to which there is an international arms race

  • Regulations and limits on AI, locally and globally

  • Hardware accessibility

Interventions affecting the probability and nature of great power conflict will also have implications for these variables.

Please comment on what should be added or changed, and please alert me to any relevant sources you’ve found useful. Thanks!


  1. ↩︎

    My analysis is abstract. Consideration of more specific factors, such as what conflict might look like between specific states or involving specific technologies, is also valuable but is not my goal here.

  2. ↩︎

    Adapted from Nick Bostrom’s Vulnerable World Hypothesis, section “Type-2a.” My definition includes scenarios in which a single actor chooses to devastate civilization; while this may not technically be great power conflict, I believe it is sufficiently similar that its inclusion is analytically prudent.

  3. ↩︎
  4. ↩︎

    Future weapons will likely be on hair trigger for the same reasons that nukes have been: a swifter second strike can help states counterattack and thus defend themselves in some circumstances; a hair-trigger posture deters attack, since the posture is somewhat transparent to adversaries; and there is emotional/psychological/political pressure to take them down with us.

  5. ↩︎

    Currently the world doesn’t include large, powerful groups, coordinated at the state level, that totally despise and want to destroy each other. If it ever does, devastation occurs by default.

  6. ↩︎

    Another potential desideratum is differential technological progress. Avoiding military development unilaterally is infeasible, but perhaps we can avoid some particularly dangerous capabilities or pursue multilateral arms control. Unfortunately, this is unlikely: forgoing particular technologies is costly because you don’t know in advance what a line of research will yield, and effective multilateral arms control is really hard.