Christian Ruhl, Founders Pledge
I am a Senior Researcher at Founders Pledge, where I work on global catastrophic risks. Previously, I was the program manager for Perry World House’s research program on The Future of the Global Order: Power, Technology, and Governance. I’m interested in biosecurity, nuclear weapons, the international security implications of AI, probabilistic forecasting and its applications, history and philosophy of science, and global governance. Please feel free to reach out to me with questions or just to connect!
Hi Haydn,
This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic being a possibly big problem.[1] Some people think it helps explain intelligence analysts’ underperformance in the forecasting tournaments, and I think there might be something to that explanation.
We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, and your points on competition with China are well taken.
What I felt was missing from the post was the counterfactual: what if the atomic scientists’ and defense intellectuals’ worst fears about their adversaries had been correct? It’s not hard to imagine. The USSR did seem poised to dominate in rocket capabilities at the time of Sputnik.
I think there’s some hindsight bias going on here. In the face of high uncertainty about an adversary’s intentions and capabilities, it’s not obvious to me that skepticism is the right response. Rather, we should weigh the possible outcomes. In the Manhattan Project case, one of those possible outcomes was that a murderous totalitarian regime would be the first to develop nuclear weapons and become a permanent regional hegemon or, worse, a global superpower. Given their uncertainties at the time, I think the atomic scientists’ and U.S. leadership’s decision was the right one.
I think it would be especially interesting to see whether misperception has actually been more common historically than accurate assessment. There are examples of “racing” where assessments were accurate or even under-confident (as you mention, thermonuclear weapons).
Thanks again for writing this! I think you raise a really important question — when is AI competition “suboptimal”?[2]
[1] https://www.jstor.org/stable/43785861
[2] In Charles Glaser’s sense (https://www.belfercenter.org/sites/default/files/files/publication/glaser.pdf)