On flash-war risks, I think the key variable is what actually forces the pace of decisions, and the key outcome you care about is decision quality.
We should expect flash-war dynamics in fights where escalation is constrained more by decision-making speed than by weapon speed. These could include short-range conflicts, cyber war, directed-energy weapons, influence operations/propaganda battles, etc.
For nuclear conflict, unless some country gets extremely good at stealth, strategic deception, and synchronized mass incapacitation/counterforce, there will still be warning and a delay before impact. The only reasons to respond faster than the speed of the adversary’s weapons and the delays in your own capacity to act would dictate are if doing so could further reduce attrition or enable better retaliation, and I don’t see much offensive prospect for either. If the other side is conducting a limited strike, you want to delay escalation and increase survivability. If the other side is attempting an incapacitating strike, their commitment will be absolutely massive and their pre-mobilization high, so retaliation would be your main remaining option at that point anyway. Either way, you might get bombers off the ground and cue up missile defenses, but for a second strike I don’t see much advantage to acting faster than the pace imposed by the attacker, especially given the risk of acting on a false alarm. This logic seems clearly present in all the near-miss cases: there is an incentive to wait for more information from more sensors.
Improved automation in sensing quality, information fusion, and attention rationing would all seem useful for identifying false alarms faster. In general, it would be interesting to see more attention put into AI-enabled de-escalation, signaling, and false-alarm reduction.
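To make the fusion point concrete, here is a toy sketch (in Python, with made-up numbers and idealized, independent sensors; nothing here reflects any real early-warning system) of how corroboration across channels changes the probability that an apparent launch is real:

```python
# Toy Bayesian corroboration sketch. All numbers are illustrative assumptions,
# and real sensors are never perfectly independent.

def posterior_attack(prior, n_reports, p_detect=0.9, p_false_alarm=0.05):
    """Posterior P(attack) after n_reports independent sensors each report a launch."""
    like_attack = p_detect ** n_reports          # P(all report | real attack)
    like_no_attack = p_false_alarm ** n_reports  # P(all report | no attack)
    return (like_attack * prior) / (like_attack * prior + like_no_attack * (1 - prior))

prior = 1e-4  # illustrative background probability that an attack is underway
for n in (1, 2, 3):
    print(n, f"{posterior_attack(prior, n):.4f}")
```

With these made-up numbers, a single positive report leaves the posterior under 1%, while three independent corroborating reports push it toward 40%: the independence of the sensing channels is what makes an uncorroborated alert cheap to discount.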
I think most of the examples of nuclear risk near misses favor the addition of certain types of autonomy, namely those that increase sensing redundancy and thus contribute to improving decision quality and lengthening the response window. To be concrete:
For the Stanislav Petrov example: if the lights never start flashing in the first place because there is no corroborating radar return (e.g. if the Soviets had had more space-based sensors), then there’d be no time window for Petrov to make a disastrous mistake. The more diverse and higher-quality sensors you have, and the better your feature detection, the more accurate your picture will be and the harder it will be for the other side to trick you.
If, during the Cuban missile crisis, the submarine Arkhipov was on had known that the U.S. was merely dropping signaling charges (not attacking), there would have been no debate about a nuclear attack: the Soviets would have just known they’d been found.
In the training-tape false alarm scenario: U.S. ICBMs can wait to respond because weapon arrival is not instant, and the satellite sensors all refute the false alarm, so catastrophe is averted. If you have genuinely redundant sensor systems that can autonomously refute false alarms, you never get such a threatening alert in the first place, just a warning that something is broken in your overall warning infrastructure, which is exactly what you want.
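A minimal sketch of that routing idea, with hypothetical sensor names and an arbitrary corroboration threshold (purely illustrative, not a description of any real warning architecture): an uncorroborated report is surfaced as a suspected fault in the warning infrastructure rather than as an attack alert.

```python
# Hypothetical routing logic, purely for illustration: corroborated detections
# become an attack warning; a lone, uncorroborated detection is flagged as a
# suspected fault in the warning infrastructure instead.

from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str        # e.g. "ground radar", "IR satellite A" (hypothetical labels)
    sees_launch: bool

def classify(reports, corroboration_threshold=2):
    positives = [r for r in reports if r.sees_launch]
    if not positives:
        return "no alert"
    if len(positives) >= corroboration_threshold:
        sources = ", ".join(r.source for r in positives)
        return f"attack warning (corroborated by {sources})"
    return f"suspected sensor fault: only {positives[0].source} reports a launch"

print(classify([SensorReport("ground radar", True),
                SensorReport("IR satellite A", False),
                SensorReport("IR satellite B", False)]))
# -> suspected sensor fault: only ground radar reports a launch
```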
Full automation of NC3 is basically a decision to attack, and something you’d only want to activate at the end of a decision window where you are confident that you are being attacked.
Thanks for engaging so closely with the report! I really appreciate this comment.
Agreed on the weapon speed vs. decision speed distinction — the physical limits to the speed of war are real. I do think, however, that flash wars can make non-flash wars more likely (e.g. a cyber flash war unintentionally intrudes on NC3 system components, which gets misinterpreted as preparation for a first strike, etc.). I probably should have spelled that out more clearly in the report.
I think we actually agree on the broader point — it is possible to leverage autonomous systems and AI to make the world safer, to lengthen decision-making windows, to make early warning and decision-support systems more reliable.
But I don’t think that’s a given. It depends on good choices. The key questions for us are therefore: How do we shape the future adoption of these systems to make sure that’s the world we’re in? How can we trust that our adversaries are doing the same thing? How can we make sure that our confidence in some of these systems is well-calibrated to their capabilities? That’s partly why a ban probably isn’t the right framing.
I also think this exchange illustrates why we need more research on the strategic stability questions.
Thanks again for the comment!