(Below written by Peter in collaboration with Josh.)
It sounds like I have a somewhat different view of Knightian uncertainty, which is fine—I’m not sure that it substantially affects what we’re trying to accomplish. I’ll simply say that, to the extent that Knight saw uncertainty as signifying the absence of “statistics of past experience,” nuclear war strikes me as pretty close to a definitional example. I think we make the forecasting challenge easier by breaking the problem into pieces, moving us closer to risk. That’s one reason I wanted to add conventional conflict between NATO and Russia as an explicit condition: NATO has a long history of confronting Russia and has, by and large, managed to avoid direct combat.
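To make the decomposition idea concrete, here is a minimal sketch in Python. The step names and every number below are illustrative placeholders, not estimates from this exchange; the point is only that each conditional step is closer to “risk” than the headline question, because at least some of them have usable base-rate data.

```python
# A sketch of decomposing a rare-event forecast into conditional steps.
# All probabilities are placeholders for exposition, not real estimates.

steps = {
    "NATO-Russia conventional conflict": 0.05,      # P(A)
    "any nuclear use given conflict": 0.10,         # P(B | A)
    "large-scale war given any use": 0.20,          # P(C | A, B)
}

p = 1.0
for step, conditional in steps.items():
    p *= conditional
    print(f"{step}: x{conditional:.2f} -> cumulative {p:.4f}")

# The headline probability is the product of the conditionals; each
# factor can be forecast and scrutinized separately.
```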
By contrast, the extremely limited history of nuclear war does not enable us to validate any particular model of the risk. I fear that the assumptions behind the models you cite may not work out well in practice, and I would like to see how they perform in a variety of as-similar-as-possible real-world forecasts. That said, I am open to these being useful ways to model the risk. Are you aware of attempts to validate these types of methods as applied to forecasting rare events?
On the ignorance prior:
I agree that not all complex, debatable issues imply probabilities close to 50-50. However, your forecast will be sensitive to how you define the universe of “possible outcomes” that you see as roughly equally likely from an ignorance prior. Why not define the possible outcomes as: one-off accident, containment on one battlefield in Ukraine, containment in one region in Ukraine, containment in Ukraine, containment in Ukraine and immediately surrounding countries, etc.? Defining the ignorance prior universe in this way could stack the deck in favor of containment and lead to a very low probability of large-scale nuclear war. How can we adjudicate what a naive, unbiased description of the universe of outcomes would be?
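One way to see the sensitivity is to compute the uniform ignorance prior under two different partitions of the outcome space. Both partitions below are hypothetical framings chosen for illustration; neither is offered as the “naive, unbiased” description at issue.

```python
# How the choice of outcome partition drives a uniform ignorance prior.
# Both partitions are hypothetical framings, not canonical ones.

coarse = ["no nuclear use", "limited use", "large-scale nuclear war"]

fine = [
    "one-off accident",
    "containment on one battlefield in Ukraine",
    "containment in one region in Ukraine",
    "containment in Ukraine",
    "containment in Ukraine and surrounding countries",
    "large-scale nuclear war",
]

for name, outcomes in [("coarse", coarse), ("fine", fine)]:
    p = 1 / len(outcomes)
    print(f"{name} partition: P(large-scale nuclear war) = {p:.3f}")

# coarse: 0.333 vs. fine: 0.167 -- same ignorance, different answer,
# which is exactly the adjudication problem raised above.
```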
As I noted, my view of the landscape is different: it seems to me that there is a strong chance of uncontrollable escalation if there is direct nuclear war between Russia and NATO. I agree that neither side wants to fight a nuclear war—if they did, we’d have had one already!—but neither side wants its weapons destroyed on the ground either. That creates a strong incentive to launch first, especially if one believes the other side is preparing to attack.

In fact, even absent that condition, launching first is rational if you believe it is possible to “win” a nuclear war, in which case you want to pursue a damage-limitation strategy. If you believe there is a meaningful difference between 50 million dead and 100 million dead, then it makes sense to reduce casualties by (a) taking out as many of the enemy’s weapons as possible; (b) employing missile defenses to reduce the impact of whatever retaliatory strike the enemy manages; and (c) building up civil defenses (fallout shelters, etc.) so that more people survive whatever warheads get past (a) and (b).

In a sense, “the logic of nuclear war” is oxymoronic: a prisoner’s-dilemma-type dynamic governs the situation such that, even though cooperation (no war) is the best outcome, both sides are driven to defect (war). By taking actions that seem to be in our self-interest, we ensure what we might euphemistically call a suboptimal outcome.

When I talk about “strategic stability,” I am referring to a dynamic in which the incentives to launch first or to launch on warning have been reduced, such that choosing cooperation makes more sense. New START (and START before it) attempts to boost strategic stability by establishing nuclear parity (at least with respect to strategic weapons). But its influence has been undercut by other developments that are destabilizing.
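For readers who want the prisoner’s-dilemma claim spelled out, here is a toy best-response check. The payoffs are ordinal placeholders (higher is better), not an attempt to quantify real outcomes; “defect” stands in for launching first and “cooperate” for holding fire.

```python
# A toy payoff matrix for the first-strike dynamic described above.
# Payoffs are (row player, column player); values are ordinal only.

payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # no war: best joint outcome
    ("cooperate", "defect"):    (0, 4),  # absorb a first strike
    ("defect",    "cooperate"): (4, 0),  # damage-limiting first strike
    ("defect",    "defect"):    (1, 1),  # mutual launch
}

for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, their_move)][0])
    print(f"If the other side will {their_move}, best response: {best}")

# Defect is the best response either way (a dominant strategy), even
# though mutual cooperation beats mutual defection for both sides.
```

Arms-control measures like New START can be read as attempts to change these payoffs: reducing the advantage of striking first so that cooperation becomes the stable choice.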
Thank you again for the thoughtful comments, and I’m happy to engage further if that would be clarifying or helpful to future forecasting efforts.
Thanks for the detailed answers!