oeg—this is an important and apparently neglected topic.
Too many people seem to think that extinction risk only arises with AGI, and they neglect the many ways that near-term narrow AI can amplify other X-risks. I think you’re correct that narrow AI can amplify nuclear war risks.
I would suggest a couple of other ways this could happen:
Small, autonomous drones could greatly increase the risk of assassinations of major nation-state leaders, especially if the drones have AI face-recognition abilities, long-term loitering abilities (e.g., lying in wait for ideal opportunities), and tactical planning abilities (to evade security forces' countermeasures); see the book 'Swarm Troopers' by David Hambling. This isn't a matter of large LAWS swarms escalating conventional conflict; rather, it offers terrorists, dissidents, separatists, partisans, rebels, etc. the opportunity to eliminate heads of state they consider bad, but from a great distance in time and space, with a high degree of plausible deniability. And it opens the possibility of highly manipulative false-flag operations.
AI-enabled deepfake videos could falsely portray heads of state or military leaders announcing conventional or nuclear strikes, or escalations thereof, which could lead their enemies to counter-escalate; the information environment could become so corrupted and unreliable that the likelihood of making strategic errors under the 'fog of war' might be greatly increased.