Thanks for engaging so closely with the report! I really appreciate this comment.
Agreed on the weapon speed vs. decision speed distinction — the physical limits to the speed of war are real. I do think, however, that flash wars can make non-flash wars more likely (e.g., a cyber flash war unintentionally intrudes on NC3 system components, which gets misinterpreted as preparation for a first strike). I probably should have spelled that out more clearly in the report.
I think we actually agree on the broader point — it is possible to leverage autonomous systems and AI to make the world safer: to lengthen decision-making windows and to make early warning and decision-support systems more reliable.
But I don’t think that’s a given. It depends on good choices. The key questions for us are therefore: How do we shape the future adoption of these systems to make sure that’s the world we’re in? How can we trust that our adversaries are doing the same thing? How can we make sure that our confidence in some of these systems is well-calibrated to their capabilities? That’s partly why a ban probably isn’t the right framing.
I also think this exchange illustrates why we need more research on the strategic stability questions.
Thanks again for the comment!