Hi Haydn,
This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic possibly being a big problem.[1] Some people think it helps explain intelligence analysts’ underperformance in the forecasting tournaments, and I think there might be something to that explanation.
We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, and your points on competition with China are well taken.
What I felt was missing from the post was the counterfactual: what if the atomic scientists’ and defense intellectuals’ worst fears about their adversaries had been correct? It’s not hard to imagine. The USSR did seem poised to dominate in rocket capabilities at the time of Sputnik.
I think there’s some hindsight bias going on here. In the face of high uncertainty about an adversary’s intentions and capabilities, it’s not obvious to me that skepticism is the right response. Rather, we should weigh possible outcomes. In the Manhattan Project case, one of those possible outcomes was that a murderous totalitarian regime would be the first to develop nuclear weapons and become a permanent regional hegemon or, worse, a global superpower. I think the atomic scientists’ and U.S. leadership’s decision then was the right one, given their uncertainties at the time.
I think it would be especially interesting to see whether misperception of adversary capabilities really is the more common pattern historically. There are examples of “racing” where assessments were accurate or even under-confident (as you mention, thermonuclear weapons).
Thanks again for writing this! I think you raise a really important question — when is AI competition “suboptimal”?[2]
[1] https://www.jstor.org/stable/43785861
[2] In Charles Glaser’s sense (https://www.belfercenter.org/sites/default/files/files/publication/glaser.pdf)
Thanks for the kind words, Christian—I’m looking forward to reading that report; it sounds fascinating.
I agree with your first point—as I say, “They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons.” Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights that 1) accurate information in 1942-3 (and 1957) would have been useful, and 2) when they did find out the accurate information (in 1944 and 1961), it’s very interesting that it didn’t stop the arms buildup.
The question of whether over-, under-, or calibrated confidence is more common is an interesting one that I’d like someone to research. It could perhaps be usefully narrowed to WWII and postwar USA. I offered some short examples, but this could easily be a paper. There are some theoretical reasons to expect overconfidence, I’d think, such as paranoia and risk-aversion, or political-economy incentives for the military-industrial complex to overemphasise risk (to get funding). But yes, it’s an interesting open empirical question.
Thank you for the reply! I definitely didn’t mean to mischaracterize your opinions on that case :)
Agreed, a project like that would be great. Another point in favor of your argument that this is a dynamic to watch out for in AI competition: verifying claims of superiority may be harder for software (along the lines of Missy Cummings’s “The AI That Wasn’t There”, https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/#essay2). That seems especially vulnerable to misperception.
Given this, is it accurate to call Einstein’s letter a ‘tragedy’? The tragic part was continuing the nuclear program after the German program was shut down.