First, I am not an academic in this area, so my observations will not be strictly bound to the models in question; I think there are other risks at work that have not been explicitly examined. (Also, the current situation in Ukraine appears to carry a substantial risk of an unintended wider war, which could ignite through the fog of war or a misreading of an adversary’s actions, and then there is the problem of echo chambers reinforcing poor data or poor interpretations of good data.)
“No plan of operations extends with certainty beyond the first encounter with the enemy’s main strength.” - Moltke the Elder
The chaotic nature of war is a major problem, and when nuclear delivery times range from under five minutes for closely positioned SLBMs, to roughly thirty minutes for ICBMs, to hours for bombers, the window for making the right call under extreme stress is perilously narrow.
We also need to look at close calls and the role of luck. The Cuban Missile Crisis did not go nuclear because one thread held: Vasili Arkhipov’s vote not to launch a nuclear torpedo. Able Archer 83 was a case where, at a time of heightened tensions, a NATO military exercise was interpreted in the Soviet Union as a possible ruse of war. (We cannot control or anticipate the mindset of an opponent who may be predisposed to assume the worst about an adversary’s intentions.) And there is the possibility of technical failures in detection systems producing false positives, as on September 26, 1983, when sunlight reflecting off clouds caused a malfunctioning Soviet launch-detection system to report ICBM launches from the United States. Again, one thread held: Stanislav Petrov arguably acted above his pay grade by not simply passing the indications further up the chain of command, because the pattern he saw was inconsistent with what he expected a first strike to look like.
So history has already revealed many weaknesses that could have tipped a tense situation into nuclear war despite the desire of every sane actor to avoid one. I don’t think the model takes this into account, and it may be very difficult to do so. Moreover, the number of data points we have since the beginning of the nuclear era may be insufficient to formulate a suitable model. There is also the problem of proliferation to a wider set of actors, which would increase the probability of a nuclear exchange, and of changing mindsets around the use of tactical nuclear warheads. (Russia, for example, has a doctrine that permits first tactical use, and Putin has threatened nuclear escalation in the Ukraine conflict.)
Again, I am not an academic, and someone with greater knowledge and expertise can probably poke holes in what is simply a non-expert’s reading of patterns: the behaviour of people, as individuals and in groups, at times of extreme stress, and the problems of technical malfunction. (Lastly, the aggravating effects of climate change will probably also change the calculus of catastrophic war over time; that does not appear to have been factored in either. And what of Graham Allison’s “Thucydides Trap”?)
I will be most interested to follow this discussion further as it is of much more than academic interest for obvious reasons.
I think the Thucydides Trap is a significant omission that might increase the probabilities listed here. If we forget the historical data for a minute and just intuitively look at how this could most foreseeably happen, the world’s two greatest powers are at a moment of peak tensions and saber-rattling, with a very plausible conflict spark over Taiwan. Being “tough on China” is also one of the only issues that both Republicans and Democrats can agree on in an increasingly polarized society. All of which fits Allison’s hypothesis that incumbent hegemons rarely let others catch up to them without a fight. So the risk of a great power war—and of escalation—intuitively seems higher today than it was in 2000, for instance, though neither the constant risk nor durable peace hypotheses seem to reflect that.
Maybe the next step is a “fluctuating risk” hypothesis that takes certain (admittedly hard-to-measure) current global conditions into account, rather than just the historical frequency of such wars. This would probably be less useful the longer the time horizon we’re trying to model (who knows whether US/China will still be the riskiest conflict 50 years from now?), so I don’t want to overstate our confidence, and its impact on the overall risk estimate would need to stay suitably marginal. But I also don’t think prior major wars were completely unforeseeable: in 1930, if you had to predict which countries were likeliest to fight the next great power war, you would not have been blindsided to learn Germany would be involved. So in theory, a cooling of great power tensions could decrease our risk estimate, while further decoupling of great power economies could increase it, and so on; a minimal sketch of how such an adjustment might work is below.
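To make the idea concrete, here is a minimal sketch of one way a “fluctuating risk” model could work, assuming (purely for illustration) a historical base rate of great power war onset and a hand-picked, time-varying tension multiplier; none of these numbers come from the report or from Metaculus.

```python
# Hypothetical sketch of a "fluctuating risk" model: a historical base
# rate of great power war onset, scaled year by year by a multiplier
# meant to capture current conditions (tensions, decoupling, etc.).
# All numbers are illustrative, not estimates from the report.

def cumulative_war_probability(base_rate: float, multipliers: list[float]) -> float:
    """Probability of at least one war over the horizon, treating each
    year as an independent draw with an adjusted annual onset rate."""
    p_no_war = 1.0
    for m in multipliers:
        p_no_war *= 1.0 - min(base_rate * m, 1.0)
    return 1.0 - p_no_war

# Example: ~1% annual base rate, doubled during a decade of elevated
# tensions, then reverting to the base rate for the remaining 40 years.
multipliers = [2.0] * 10 + [1.0] * 40
print(round(cumulative_war_probability(0.01, multipliers), 2))  # ~0.45
```

The point is just that the multiplier, however it is estimated, lets current conditions move the forecast without discarding the historical base rate.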
I agree with this. I think there are multiple ways to generate predictions, and I couldn’t cover everything in one post. So while here I used broad historical trends, I think considerations specific to US-China, US-Russia, and China-India relations should also influence our predictions. I discuss a few of those considerations on pp. 59-62 of my full report for Founders Pledge and hope to at least get a post on US-China relations out within the next 2-3 months.
One quick hot take: I think Allison greatly overestimates the proportion of power transitions that end in conflict. It’s not actually true that “incumbent hegemons *rarely* let others catch up to them without a fight” (emphasis mine). So, while I haven’t run the numbers yet, I’ll be somewhat surprised if my forecast of a US-China war ends up higher than ~1 in 3 this century, and very surprised if it’s over 50%. (Metaculus has it at 15% by 2035.)
Makes sense, and I’m not surprised to hear Allison may overestimate the risk. By coincidence, I just finished a rough cost-benefit analysis of U.S. counterterrorism efforts in Afghanistan for my studies, and his book on Nuclear Terrorism also seemed to exaggerate that risk. (I do give him credit for making an explicit prediction, though, a few years before most of us were into that sort of thing.)
In any case, I look forward to a more detailed read of your Founders Pledge report once my exams end next week. The Evaluating Interventions section seems like precisely what I’ve been looking for in trying to plan my own foreign policy career.