Interesting talk. I agree with the core model of great power conflict being a significant catastrophic risk, including via leading to nuclear war. I also agree that emerging tech is a risk factor, and emerging tech governance a potential cause area, albeit one with uncertain tractability.
I would have guessed AI and bioweapons were far more dangerous than space mining and gene editing in particular; I’d have guessed those two were many decades off from having a significant effect, and preventing China from pursuing gene editing seems low-tractability. Geoengineering seems low-scale, but might be particularly tractable since we already have significant knowledge about how it could take place and what the results might be. Nanotech seems like another low-probability, high-impact, uncertain-tractability emerging tech, though my sense is it doesn’t have as obvious a path to large-scale application as AI or biotech.
--
The Taleb paper mentioned is here: http://www.fooledbyrandomness.com/longpeace.pdf
I don’t understand all the statistical analysis, but the table on page 7 is pretty useful for summarizing the historical mean and spread of time-gaps between conflicts of a given casualty scale. As a rule of thumb, average waiting time for a conflict with >= X million casualties is about 14 years * sqrt(X), and the mean absolute deviation is about equal to the average. (This is using the ‘rescaled’ data, which buckets historical events based on casualties as a proportion of population; this feels to me like a better way to generalize than by considering raw casualty numbers.)
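To make that rule of thumb concrete, here’s a minimal sketch (my own illustration, not code from the paper; the 14-year constant and the square-root scaling are just the approximation above):

```python
import math

def approx_mean_waiting_time_years(casualties_millions: float) -> float:
    """Rule-of-thumb mean waiting time (in years) between conflicts with
    at least `casualties_millions` million casualties (rescaled data)."""
    return 14.0 * math.sqrt(casualties_millions)

# The mean absolute deviation is roughly equal to the mean itself,
# so these point estimates come with a very wide spread.
for x in (1, 10, 100):
    mean_wait = approx_mean_waiting_time_years(x)
    print(f">= {x}M casualties: mean gap ~{mean_wait:.0f} years, MAD ~{mean_wait:.0f} years")
```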
They later mention that the data are consistent with a homogeneous Poisson process, especially for larger casualty scales. That is, the waiting time between conflicts of a given scale can be modeled as exponentially distributed, with a mean waiting time that doesn’t change over time. So looking at that table should, in theory, give you a sense of the likelihood of future conflicts.
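For example (again my own sketch, plugging in the rule-of-thumb mean rather than the paper’s exact estimates): under a homogeneous Poisson model, the probability of at least one conflict of a given scale in the next N years is 1 - exp(-N / mean waiting time).

```python
import math

def prob_at_least_one_conflict(horizon_years: float, mean_wait_years: float) -> float:
    """P(at least one conflict of the given scale within `horizon_years`),
    assuming conflict arrivals follow a homogeneous Poisson process."""
    rate = 1.0 / mean_wait_years  # expected conflicts per year
    return 1.0 - math.exp(-rate * horizon_years)

# With the ~44-year rule-of-thumb mean for >= 10 million-casualty conflicts:
print(prob_at_least_one_conflict(horizon_years=50, mean_wait_years=44))  # ~0.68
```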
But I think it’s very likely that, as Brian notes in the talk, nuclear weapons have qualitatively changed the distribution of war casualties, reducing the likelihood of, say, 1–20 million-casualty wars but increasing the proportion of wars with casualties in the hundreds of millions or billions. I suspect that when forecasting future conflicts it’s more useful to consider specific scenarios and perhaps especially relevant historical analogues, though the Taleb analysis is useful for forming a very broad outside-view prior.