Notes on nukes, IR, and AI from “Arsenals of Folly” (and other books)
Richard Rhodes’s The Making of the Atomic Bomb has gotten lots of attention in AI circles lately, and it is a great read. I get why the people developing AI find it especially interesting, since a lot of it is about doing science and engineering while thinking through the consequences. But from my perspective as someone working on AI governance, the most powerful stuff was in the final few chapters, as the scientists and policymakers began to grapple with the wild implications of this new weapon for global politics.
Rhodes’s (chronologically) first follow-up, Dark Sun: The Making of the Hydrogen Bomb, is even more densely useful for thinking about emerging-technology governance, and I probably recommend it even more strongly than TMOTAB for governance-focused readers.
However, I didn’t really start taking notes during my audiobook listening until I started my third Rhodes tome, Arsenals of Folly: The Making of the Nuclear Arms Race. AOF is probably less applicable to AI governance than its predecessors, since it mostly covers a period when nuclear weapons had been around for several decades rather than when they were a new and transformative technology. But it still had a bunch of interesting details, and I figured I’d spare some of you the trouble of finding them by posting my notes to the forum. (Unfortunately, since I listened to the audiobook, I don’t have page numbers.)
I’ve also included, in an appendix, a list of other cool finds from my nuclear/Cold War reading over the last few months.
Gell-Mann caveat
Before I get to the rest of these notes, I should flag that I had some “Gell-Mann Skepticism”[1] about parts of Rhodes’s analysis. Mostly, in his increasingly strong rhetoric about the titular folly of nuclear weapons, he makes pretty questionable counterarguments to the usual cases for nuclear weapons being advantageous. He cites this paper by Jacek Kugler, who argues that nuclear-armed states didn’t seem able to impose their policy goals on non-nuclear states in a sample of Cold War conflicts like the Berlin Airlift, the Vietnam War, the invasion of Hungary, etc. It might initially seem surprising that, as Kugler claims, the nuclear-armed states lost these conflicts about as often as they won. But this ignores the enormous selection bias in which conflicting interests become actual disputes in the first place. It seems likely (or at least possible!) that lots of things that would’ve been disputes between non-nuclear powers get resolved much earlier in the process – the non-nuclear states just don’t bother picking the fights – and the disputes that actually did happen would’ve been totally non-contestable without nuclear weapons.
While arguing in his conclusion that the opportunity cost of the arms race was incredibly high, Rhodes cites another wild claim, this time from economist Seymour Melman: military spending could have gone to domestic investment instead, and “according to some [unspecified] rough estimates,” a marginal dollar of investment yields “20-25 cents of additional annual production in perpetuity.” This implies a >20% rate of return on capital, which seems wildly high and totally irreconcilable with actual historical rates of return.[2] So he does seem prone to this kind of exaggeration when he zooms out to look at the consequences, which is kind of a bummer.
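To make the arithmetic explicit (my own back-of-the-envelope check, not from the book): a one-dollar investment that yields a fixed stream of extra production every year in perpetuity has an implied annual rate of return equal to that yearly yield divided by the dollar invested:

$$r \;=\; \frac{\text{annual yield}}{\text{investment}} \;=\; \frac{\$0.20\text{–}\$0.25}{\$1.00} \;=\; 20\text{–}25\%\ \text{per year},$$

which is several times the long-run historical averages discussed in footnote 2.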
With that disclaimer out of the way – take all of this with a grain of salt:
Notes from Arsenals of Folly
The book starts with a chapter about Chernobyl (for some reason). Soviet industry wasn’t advanced enough to build the kinds of protective containment structures that Western and Japanese nuclear plants used, so officials falsified accident-risk estimates – even in the official numbers engineering students would learn. Partly as a result, the USSR had at least 13 serious reactor accidents before Chernobyl.
I usually think of the Cuban Missile Crisis as primarily resulting in arms control and détente, but Rhodes notes that it was critical to the Soviet decision to invest even more in its military. After agreeing to remove the missiles from Cuba, the Soviet negotiator told his American counterpart, “Well, Mr. McCloy, we will honor this agreement, but I want to tell you something: you will never do this to us again.” And indeed, what followed was an enormous 25-year arms buildup, with military spending regularly around 40% of GNP and the “Soviet military-industrial complex” becoming a dominant political force.
A French historical demographer, Emmanuel Todd, accurately extrapolated a bunch of demographic, political, and economic trends to predict the collapse of the Soviet Union “in 10, 20, or 30 years” in a book published in the mid-’70s – including, impressively, predicting a nonviolent secession of the countries of Eastern Europe. But “Sovietologists” in the West rejected this as naive speculation, in part because their own positions of influence relied on continued fears of Soviet domination.
Ronald Reagan may have gotten the idea for the Strategic Defense Initiative – “Star Wars” – from Edward Teller (a favorite supervillain of TMOTAB, Dark Sun, and the movie Oppenheimer) during a visit to Lawrence Livermore National Laboratory in 1967.
Reagan became obsessed with SDI but, to the frustration of Rhodes and this reader, does not seem to have thought it through in much detail. He did not understand the game-theoretic reasons SDI was dangerous – namely, that it breaks MAD and incentivizes a preemptive strike – and he hadn’t thought about how it wouldn’t stop nukes that weren’t on ballistic missiles (dropped from bombers or delivered by sea-launched cruise missiles). Meanwhile, he kept insisting to his own negotiators that the only acceptable goal was SDI plus the abolition of nuclear weapons.
Reagan and Gorbachev met for the first time in Geneva in 1985, which produced some fun anecdotes like the following: Gorbachev makes a forceful intellectual and strategic case for re-evaluating the two countries’ relationship. Reagan primarily reads from cue cards with classic Reagan aphorisms like “It isn’t people who create armaments but governments” and “People don’t get in trouble when they talk to each other, but about each other.” Gorbachev is like, what the hell is this, can we talk about anything substantive please.
The two had incredibly repetitive arguments about SDI, and Gorbachev’s points do not seem to have gotten through to Reagan. Gorbachev started anticipating the exact words Reagan would say: favorites included the Russian translation of “trust but verify” and an analogy between SDI and gas masks (as in, “even though we banned chemical weapons, nations held onto their gas masks”). Gorbachev seems to have opposed SDI primarily on the grounds that previous agreements to keep weapons out of space were just too important.
Reagan’s staff learned that the easiest way to get him, a former Hollywood actor, to learn things was movies. They stopped writing briefs about the foreign leaders he was about to meet and instead had whatever part of the Pentagon makes films produce short biopics of them (which, understandably, his staff also preferred to the briefings). Reagan was especially moved after seeing The Day After, an ABC movie depicting the aftermath of a nuclear war; he wrote about it repeatedly in his diary, and it significantly increased his determination to reduce nuclear risk.
Reagan was allegedly into fundamentalist and borderline mystical ideas. According to Rhodes, Reagan at least flirted with a prophecy that the Rapture might happen when America defeated the Soviet Union – only for it to be revealed that America’s charismatic leader was the devil, at which point Jesus would return and defeat him. He may have signed the Intermediate-Range Nuclear Forces Treaty at a particular date and time identified as fortuitous by his wife’s astrologer. His obsession with SDI seems easier to interpret in this symbolism-heavy worldview (in addition to what Rhodes describes as a fantasy of America never having to negotiate). Gorbachev claims Reagan told him, “I don’t know if you believe in reincarnation, but for me, I wonder if perhaps in a previous life I was the inventor of the shield,” and that French President François Mitterrand told him Reagan’s enthusiasm for SDI was “more mystic than rational.” It is worth noting, though, that Reagan was unusually sincere and ambitious in his desire to make progress on reducing the threat of nuclear war, despite what Rhodes portrays as an almost cartoonishly hawkish and manipulative cabinet.
Gorbachev and Reagan almost agreed at Reykjavik in 1986 to eliminate all nuclear weapons by 1996, but neither could let go of the SDI issue. It literally came down to one word: the Soviet language restricted SDI to “laboratory” testing for 10 years, Reagan’s advisors (probably falsely) told him this would kill the program (which had barely even entered lab testing), and he refused to give it up. I’m a little surprised the Soviets didn’t take this deal, given how determined Gorbachev was to reduce military spending and remake the international face of the Soviet Union, and especially given the technical challenges of SDI (and Reagan’s repeated offers to share the technology with the Soviets, of which Gorbachev was understandably skeptical). It seems this was mostly due to the domestic political constraints Gorbachev faced: he was pushing the establishment pretty far already, and going all the way to nuclear abolition without constraining SDI could have gotten him replaced with a more hawkish alternative.
Some of my takeaways
These are mostly fairly obvious, but AOF reinforced them:
Turns out estimates of a technology’s riskiness are subject to political and economic pressures.
Turns out policy change is more likely when top leaders are deeply bought into an issue mattering, and more likely to be effective when they have a solid understanding of the issue.
Humiliating your national rivals sometimes makes them really determined to keep it from ever happening again.
Leaders, and the people in the room with them, can really make a difference.
Empathy is very useful in international relations. It seems like lots of Cold War mistakes (like Able Archer 83, both the exercise itself and the Soviet overreaction) resulted in part from conceiving of the other side as coldly strategic and basically evil – implicitly, “we think of the Soviets primarily in their capacity as the US’s main geopolitical rival, so they probably think of themselves in the same way” – rather than as another heterogeneous political system composed of humans with mixed motivations.
Even powerful and strongly ideological national leaders are bound by domestic political constraints.
Economics-and-demography-driven outside views sometimes beat domain experts (though it’s a pretty close match overall).
Appendix: takeaways/interesting finds from related books
Arsenals of Folly capped a months-long nuclear/Cold War nerdsnipe caused by TMOTAB and Oppenheimer, so I figured I’d also include some discoveries from these other books in this post.
From TMOTAB: There’s an incredible story where (German) Werner Heisenberg and (Danish) Niels Bohr discuss the possibility of a nuclear bomb while World War II is underway. Heisenberg claimed (after the war) that he meant to do some back-channel coordination with Allied scientists to slow down efforts to build the bomb, by implying that the German program was going slowly. Bohr thought Heisenberg was trying to elicit information about Allied nuclear efforts and even to get Bohr to cooperate with the Nazis. If Heisenberg was telling the truth (big “if”), then a scientist’s attempt to communicate clearly about the risks of the technology he was working on, in order to coordinate an international slowdown despite competitive pressure, was instead interpreted as cynical hype-spreading – and resulted in even more alarm among the other actors, who redoubled their efforts to get there first.
From Dark Sun: When there are new and potentially really disruptive technologies, even not-particularly-radical leaders can be open to radical proposals (in the nuclear case, international control of the nuclear supply chain in order to stop nuclear proliferation). As Harry Winne, a vice president of General Electric, wrote: “[Our proposal] may seem too radical, too advanced, too much beyond human experience. All these terms apply, with particular fitness, to the atomic bomb.” But the details really matter, including things like “how do you verify the treaty” and “is it really true that this part of the supply chain is such a bottleneck that it can ground international governance?”
From Dark Sun: When your job is to strategize about the possibility of incredibly deadly events, it’s hard to avoid missing moods and scope insensitivity. From the Wikipedia page for Single Integrated Operational Plan, the US’s plan for nuclear war, updated every year from 1961 to 2003:
The execution of SIOP-62 was estimated to result in 285 million dead and 40 million casualties in the Soviet Union and China. Presented with all the facts and figures, Thomas D. White of the Air Force found the Plan “splendid.” Disregarding the human aspect, SIOP-62 represented an outstanding technological achievement: “SIOP-62 represented a technical triumph in the history of war planning. In less than fifteen years the United States had mastered a variety of complex technologies and acquired the ability to destroy most of an enemy’s military capability and much of the human habitation of a continent in a single day.” [Note: Arsenals of Folly notes that this was probably a huge underestimate of the death count, since it counted deaths from blasts only, and not from fires or radiation (which could together push the death toll over 1 billion), let alone the possibility of nuclear winter.]
From Dark Sun and Nukemap: I don’t think it’s really permeated public consciousness that the bombs in today’s US and Russian arsenals are >40 times more powerful than those dropped on Hiroshima (and much more powerful bombs have been tested by both countries).
From Odd Arne Westad’s The Cold War: A World History: I was struck by the importance of public (and especially elite) perceptions of a country’s moral standing and legitimacy. According to Westad, the USSR had a significant intelligence advantage in the 1940s and early 1950s (when it notably stole many important nuclear secrets), in part because Western intellectuals saw communism as morally superior. This changed over the course of the 1950s and 1960s, when (among other things) the extent of Stalin’s brutality became harder to deny and the US finally began to address its racial and gender inequalities; by the late ’60s, the West had a significant spy advantage. This is important because spies are important. It implies that it’s really strategically valuable to have some combination of actually following popular moral principles and a robust (especially elite-targeted) propaganda machine.
From The Cold War: When China invaded Vietnam in 1979 to punish it for toppling the Khmer Rouge in Cambodia, it lost half as many soldiers in four weeks as the US did in the entire Vietnam War. I don’t know what to take away from this – it’s just a crazy fact.
From The Cold War: Almost everyone who became a political leader outside Western democracies during the Cold War had to be, uh, truly exceptional. There was such a high chance you’d be assassinated, arrested, or exiled in exchange for some standard-of-living perks (and maybe becoming immortalized in your country’s political culture) that I think the tradeoff was pretty unappealing for the vast majority of people, meaning politics in these settings attracted unusual individuals who scored highly on some combination of bravery, altruism, sociopathy, and egomania.
From TMOTAB, Dark Sun, and the Oppenheimer movie: there’s a really stark asymmetry between the power that researchers have to create things and the power they have to steer them once they exist.
Lots of other interesting connections between Oppenheimer and the AI situation in this post, especially about Rotblat’s “pointless” departure from the Manhattan Project once the Germans had lost and his eventual Nobel Prize for his work against nuclear war.
[1] Is this a term? It should be a term. Like, you notice that the author seems to have gotten something wrong, and you consciously increase your skepticism of the rest of their claims to avoid Gell-Mann Amnesia.
[2] E.g., the first Google result for “historical rates of return” finds that risky assets like housing and equities generally average around 7%, and non-risky assets like bonds around 3%. Maybe the government can beat the market when it invests in public goods, but by an extra 15-20 percentage points?
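(To dramatize that gap with my own illustrative arithmetic, assuming a 40-year horizon: compounding at Melman’s implied ~22.5% versus a ~7% equity return gives

$$\frac{(1.225)^{40}}{(1.07)^{40}} \;\approx\; \frac{3{,}350}{15} \;\approx\; 220,$$

i.e., every marginal government dollar would have to end up roughly 220 times more productive than a dollar in the stock market over four decades.)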
Thanks a lot for the great post!
I’ve also been learning a lot lately about nuclear safety, deterrence, the Cold War, etc., mostly inspired by the Oppenheimer movie, and I’ve been looking for people to talk through these issues with.
If anybody reading this is looking to talk more about these kinds of issues, DM me – I’d love to share what I’ve learned, see what other people have learned, and just talk about the fascinating history and ethics surrounding atomic weapons use.