Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg
OR The Tragedy of the Einstein Letter and the Gaither Report; Cautionary Lessons from the Manhattan Project and the ‘Missile Gap’; Beware Assuming You’re in an AI Race; The illusory Atomic Gap, the illusory Missile Gap and the AGI Gap
Summary
In both the 1940s and 1950s, well-meaning and good people – the brightest of their generation – were convinced they were in an existential race with an expansionary, totalitarian regime. Because of this belief, they advocated for and participated in a ‘sprint’ race: the Manhattan Project to develop a US atomic bomb (1939-1945); and the ‘missile gap’ project to build up a US ICBM capability (1957-1962). Both were based on a mistake, however—the Nazis decided against an atomic program in 1942, and the Soviets decided against an ICBM build-up in 1958. The main consequence of both was to unilaterally speed up dangerous developments and increase existential risk. Key participants, such as Albert Einstein and Daniel Ellsberg, described their involvement as the greatest mistake of their life.
Our current situation with AGI shares striking similarities with these cases, and certain lessons suggest themselves: make sure you’re actually in a race (information on whether you are is very valuable), be careful when secrecy is emphasised, and don’t give up your power as an expert too easily.
I briefly cover the two case studies, discuss the atmosphere at RAND, then draw the comparison with AGI and explain my three takeaways. This short piece is mainly based on Richard Rhodes’ The Making of the Atomic Bomb and Daniel Ellsberg’s The Doomsday Machine. It was inspired by a Slack discussion with Di Cooke.
The ‘atomic gap’, the Einstein-Szilárd Letter, and the Manhattan Project
Here is a rough timeline of some key events around the Manhattan Project:
12 September 1933: Szilárd conceives the idea of a nuclear chain reaction, and keeps it a secret for the next six years.
2 August 1939: Einstein-Szilárd letter to Roosevelt advocates for a US atomic program (what would become the Manhattan Project).
1 September 1939: Nazi invasion of Poland.
9 October 1941: Roosevelt approves the atomic program; the Manhattan Project subsequently receives serious funding (eventually, 0.4% of GDP).
June 1942: Hitler decides against an atomic program for practical reasons.
December 1942: First self-sustaining chain reaction in Chicago. Szilárd notes: “I shook hands with Fermi and I said I thought this day would go down as a black day in the history of mankind.”
30 April 1945: Hitler kills himself.
7 May 1945: Nazi surrender.
July 1945: Szilárd petition (signed by 70 scientists) calls for the bomb to be used only after Japan has refused to surrender, for the decision to be made by Truman personally, and reiterates that the original intention was to defend against the Nazis.
6 and 9 August 1945: The USA bombs Hiroshima and Nagasaki.
29 August 1949: First successful Soviet nuclear test.
The Manhattan Project was a ‘sprint’ project (like other US projects such as the Apollo program), with peak-year funding reaching 0.4% of GDP (Stine, 2009; see also Grace, 2015).
Why did nuclear scientists like Szilárd, who kept the chain reaction secret and opposed nuclear weapons for decades after the war, advocate for and participate in the Manhattan Project? In Ellsberg’s words:
“How could he? The answer is he believed, even before others, that they were racing Hitler to the attainment of this power. It was German scientists, after all, who had first accomplished the fission of a heavy element. There seemed no reason to suppose that Germany could not stay ahead of any competitors in harnessing this unearthly energy to Hitler’s unlimited ambitions for conquest. The specter of a possible German monopoly, even a temporary one, on an atomic bomb drove the Manhattan Project scientists – above all the Jewish emigres from Europe […] until the day of Germany’s surrender.” (p.28)
For comparison with the ‘missile gap’, we could describe this as a perception of, or fear of, an ‘atomic gap’.
However, this was based on a mistake. During World War 2, the USA and UK were not in a desperate race with a powerful, totalitarian opponent. None of the Nazis, the Japanese Empire or the USSR had a serious nuclear program during the war. The Nazis considered a nuclear sprint, but decided against it for three reasons. First, Speer and the nuclear physicists thought it would take three to four years to deliver—too late to make a difference to the war. Second, the Nazis were severely constrained in raw materials and manpower, which were needed elsewhere in armaments production (Tooze, 2006). Third, Werner Heisenberg, principal scientist on the ‘Uranverein’ nuclear weapons program, was not able to guarantee that fission would not ignite the atmosphere. The Manhattan Project did not need to, and did not in practice, deter Hitler from using a nuclear weapon.
So the crucial effect of the nuclear scientists’ advocacy for, and participation in, the Manhattan Project was to bring forward in time the advent of nuclear weapons. It is reasonable to assume that the USA would not have ‘sprinted’ had many scientists not advocated and volunteered for it; indeed, it is plausible that the advent was brought forward by perhaps a decade. It is unclear whether the USA would have ‘sprinted’ to the same extent (or at all) outside of the context of WW2. As a reminder, the USA spent 0.4% of GDP on the Project; this would have been harder to justify after WW2. The Soviets were only able to catch up within four years after the war because of extensive espionage on the Manhattan Project.
In addition to timing, one can also speculate about the manner in which nuclear weapons were introduced to the world. The signers of the Szilárd petition were concerned that if the bomb were used in ‘anger’, it would launch an arms race:
“If after this war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of these new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation”
If there had not been a perception that the use of nuclear weapons had ‘ended the war’ in Asia, the intensity of the nuclear arms race might have been lessened. The really wrenching “what-if” is what would have happened had the advent of nuclear weapons been delayed until the mid-1950s. Early Cold War proposals for international control of nuclear weapons (or limits on their development, stockpiling and use) failed due to US lack of interest (Zaidi & Dafoe, 2021), but also Stalin’s paranoia and distrust (Gaddis, 1997). What if the nuclear bomb had not been developed until after Stalin’s death on 5 March 1953? The prospects for international controls on the development, stockpiling and use of nuclear weapons may have been much improved.
The ‘missile gap’, the Gaither Report, and RAND
Here is a rough timeline of some key events around the missile gap:
11 June 1957: Failed US test.
26 August 1957: USSR’s first successful ICBM test.
4 October 1957: Sputnik launch.
3 November 1957: Laika launch.
7 November 1957: Gaither Report claims a ‘missile gap’.
6 December 1957: Failed US test.
Summer 1958: DARPA and NASA established; National Defense Education Act passed.
28 November 1958: USA’s first successful ICBM test.
7 June 1961: New National Intelligence Estimate (NIE) released internally (above Top Secret classification) – only 4 ICBMs had been observed.
21 October 1961: Gilpatric speech (influenced by Ellsberg) signals to Soviets that the USA knew that there was no ‘missile gap’.
30 October 1961: Tsar Bomba, most powerful nuclear test ever.
16 October 1962: Cuban Missile Crisis. Arguably the closest the world has ever come to nuclear war. Later, Kennedy says the odds of war were between “1/3 and even”.
The perception of a missile gap prompted a US ‘sprint’ project to develop and stockpile ICBMs and develop nuclear war plans.
Why did scientists like Ellsberg, who as a schoolboy in 1944 wrote an essay against nuclear weapons and who would spend the rest of his career as a famed whistleblower opposing nuclear weapons, advocate for and participate in this sprint? In his words:
“In the late fifties, I was given what seemed good reason to believe – on the basis of highly classified official information – that we were again in a desperate race with a powerful, totalitarian opponent […] This apprehension was based on illusion.” (p. 29)
Ellsberg notes that summer 1958 was the “high point of secret intelligence predictions of an imminent vast Soviet superiority in deployed ICBMs, the ‘missile gap.’” (p. 34). The Air Force and CIA estimated that the USSR would have an ICBM fleet of “several hundred, perhaps as early as 1959 (with a crash effort), almost certainly by 1960-61, with thousands in the sixties” (p.35). Crucially, if there were a missile gap, the US would have been vulnerable to a first strike—Soviet ICBMs could have destroyed most of the US bomber fleet, preventing a US retaliatory strike. Just as Hitler would have done had he held a nuclear monopoly, the USSR could have used this advantage either as a strong incentive to launch a nuclear war, or as nuclear blackmail to force the USA to accept Soviet expansion.
However, this was based on a mistake, as would be established in National Intelligence Estimate NIE 11-8-61, THE SOVIET ICBM PROGRAM—EVIDENCE AND ANALYSIS. The estimate was that “the Soviets had exactly four ICBMs, soft, liquid-fuelled missiles at one site, Plesetsk. Currently we had about forty operational Atlas and Titan ICBMs […] the numbers were ten to one in our favour” (p.164).
The new estimate “totally contradicted the fundamental basis for [Ellsberg’s] concerns and work for the past several years.
It wasn’t just a matter of numbers, though that alone invalidated virtually all the classified analyses and studies I’d read and participated in for years. Since it seemed clear that the Soviets could have produced and deployed many, many more missiles in the three years since their first ICBM test, it put in question – it virtually demolished – the fundamental premise that the Soviets were pursuing a program of world conquest like Hitler’s.” (p.162)
“The 1959-62 period was their only opportunity to have such a disarming capability with missiles, either for blackmail purposes or an actual attack. […] Four missiles in 1960-61 was strategically equivalent to zero, in terms of such an aim. […]
Khrushchev had been totally bluffing about his missile production rates. He had said he was turning them out “like sausages”. […] about ICBMs it was a flagrant lie. Moreover, it meant that he had consciously forsworn the crash effort needed to give him a credible first-strike capability in the only period when that might have been feasible.
Our assumptions about his aims […] were now entirely in question.” (p. 162-3)
So the crucial effect of the advocacy and participation of the experts at RAND and elsewhere was to bring forward in time the advent of ICBMs, and to heighten the destabilising arms race. It is reasonable to assume that the USA would not have sprinted for ICBMs had many intelligence, military and scientific experts not been convinced of a missile gap and advocated for and participated in a US sprint. It is plausible that this brought forward the development and stockpiling of ICBMs by around five years (roughly when the Soviets began building up their own forces). It would have been harder to justify funding a sprint to the same extent without the missile gap fear. While progress in missile technology would still have occurred, more of it may have gone into space research, as happened in the USSR.
In addition to timing, one can speculate about the manner in which ICBMs were introduced: quickly and fearfully, at an intense moment of the Cold War, and in a way which played straight into the security dilemma. This intensified the nuclear arms race. The manner in which the USA signalled to the USSR may also have contributed to the intensity of the Berlin Crisis of 1961 and the Cuban Missile Crisis of 1962. The Gilpatric speech came during a Soviet Party Congress, four days after Khrushchev had offered an opening to the USA by withdrawing his ultimatum that the USA negotiate a peace treaty with East Germany by the end of 1961. The Gilpatric speech was interpreted as Kennedy’s response—a deliberate humiliation of Khrushchev. The immediate Soviet response was two nuclear tests, including Tsar Bomba, the largest test ever (50-58 megatons). This may have contributed to Soviet elite perceptions of Kennedy as a risky militarist uninterested in agreements, and therefore to the Cuban Missile Crisis (p.176-177).
However, it is important to note that this period was also incredibly creative on the arms control side. 1961 saw the publication of Bull’s The Control of the Arms Race, Schelling & Halperin’s Strategy and Arms Control, and Brennan’s Arms Control, Disarmament, and National Security. Together with the 1960 Daedalus special issue on Arms Control, these are seen as the “four bibles” of arms control. They are all linked to the Harvard-MIT joint seminar, many participants of which went into government and later contributed to the first bilateral nuclear arms control agreements, fifty years ago in 1972 (Schelling 1985; Adler 1992).
Other possible examples
Two other examples of mistaken races come to mind. The first, falling between our two cases in the early 1950s, is the ‘bomber gap’. From 1954 to 1957, US National Intelligence Estimates put the number of Soviet long-range bombers in the hundreds. In response, over that same period the US built up its own fleet to over 2,500 bombers. However, this was also based on a mistake. There were only 30 Soviet M-4 bombers in 1956, only 93 were ever produced, and design flaws meant that they could not reach the continental United States. So the main consequence was to unilaterally speed up dangerous developments and increase existential risk.
The second is the Soviet bioweapons program of the 1970s, what we might call the ‘bioweapons gap’. The USSR believed the US was ahead (e.g. in genetics and genomics) and assumed that the US would cheat on the Biological Weapons Convention (BWC), signed in 1972. In response (at least in part), it cheated on the BWC itself, and carried out the largest bioweapons program in history. However, this was also based on a mistake. The Nixon Administration had in fact destroyed the USA’s biological weapons and disbanded the program, as Nixon had announced on 25 November 1969. So the main consequence was to unilaterally speed up dangerous developments and increase existential risk.
Another possible example is long-range heavy strategic bombers in the 1930s—the US and UK invested heavily, the fascist powers did not. A smaller-scale example could be Western mistaken intelligence about Iraq’s WMD programmes. I’d be interested in the extent to which these dynamics were also present in the US and Russian (and others’) cyber weapons development programs of the 2000s (e.g. the fear of a ‘Cyber Pearl Harbor’)—and interested in any other cases people are familiar with. More generally, these mistaken races can be seen as a subset of the security dilemma: when actions taken by state A (to increase its security) cause reactions by state B, decreasing the security of both A and B (Herz 1950; Jervis 1978).
There are, of course, examples of mistaken intelligence in the other direction, that is overestimates of how long an adversary would take to achieve a capability—for example, the US estimate that the Soviets would take a decade to build a nuclear bomb. And there are examples of correct intelligence about other’s capabilities and intentions, for example perhaps the Dreadnought programme in the early 1900s.
The atmosphere at RAND
Ellsberg writes vividly and evocatively about the charged, almost messianic atmosphere of RAND during the ‘missile gap’ – the “obsessive ideation” that surrounded it. (Rhodes describes the Manhattan Project in similar vivid detail, but I have not quoted him.) In reading these, I was repeatedly struck by a strange, gnawing sense of familiarity – see if you have the same reaction.
On the importance, neglectedness and tractability of the problem – and the sense of mission:
“I found myself immersed in what seemed the most urgent concrete problem of uncertainty and decision-making that humanity had ever faced […] the challenge looked both more difficult and more urgent than almost anyone outside RAND seemed able to imagine.” (p. 35)
“nearly all the departments and individual analysts at RAND were obsessed with solving the single problem […] in the next few years […] The concentration of focus, the sense of a team effort of the highest urgency, was very much like that of the scientists in the Manhattan Project.” (p.36)
“there was our sense of mission, the burden of believing we knew more about the dangers ahead, and what might be done about them, than did the generals […] or Congress or the public, or even the President. It was an enlivening burden.” (p.37)
“From the analyses by men who became my mentors and closest colleagues, I had come to believe – like Szilard and Rotblat a generation earlier – that this was the best, indeed the only way, of increasing the chance of [survival].” (p.39)
On the intellectual culture:
“From my academic life, I was used to being in the company of very smart people, but it was apparent from the beginning that this was as smart a bunch of [people] as I had ever encountered. […] And it was even better than that. In the middle of the first session, I ventured – though I was the youngest, assigned to be taking notes, and obviously a total novice on the issues – to express an opinion. Rather than showing irritation or ignoring my comment, Herman Kahn […] looked at me soberly and said “You’re absolutely wrong.”
A warm glow spread throughout my body. This was the way my undergraduate fellows […] had routinely spoken to each other [… At Cambridge or Harvard] arguments didn’t remotely take this gloves-off, take-no-prisoners form. I thought, “I’ve found a home.”
And I had. […] I shared with my colleagues a sense of brotherhood, living and working with others for a transcendent cause.” (p.36)
In the late 1950s, it was overwhelmingly male (the previous quote was actually “as smart a bunch of men”; Ray Acheson highlights this point in her review of the book):
“During the cocktail interval at the frequent dinners that our wives took turns hosting, two or three men at a time would cluster in a corner to share secret reflections, sotto voce; the women didn’t have clearances. After the meal the wives would go together into the living room—for security reasons—leaving the men to talk secrets at the table.
There were almost no cleared women professionals at RAND then. The only exceptions I remember were [...] the daughter of Fleet Admiral Chester Nimitz; Alice Hsieh, a China analyst; and Albert Wohlstetter’s wife”
On the privileges and intensity of their life:
“Materially we led a privileged life. I started at RAND, just out of graduate study, at the highest salary my father had ever attained […] Working conditions were ideal […]
But my colleagues were driven men. They shared a feeling – soon transmitted to me – that we were in the most literal sense working to save the world. […]
The work was intense and unrelenting. The RAND building’s lights were kept on all night because researchers came in and out at all hours, on self-chosen schedules. At lunch […] we talked shop – nothing else.” (p.37)
“The first summer there, I worked seventy-hour weeks, devouring secret studies and analyses until late every night, to get up to speed on the problems and possible solutions.” (p. 38)
On the sense of urgency, consider this:
“Enthoven and I were the youngest members of the department. Neither of us joined the extremely generous retirement plan RAND offered. Neither of us believed, in our late twenties, we had a chance of collecting on it.” (p.38)
Ellsberg ends the chapter:
“When my former Harvard faculty advisor heard in 1959 that I was going back to RAND as a permanent employee, he told me bitterly that I was “selling out (as an economist) for a high salary.” I told him that after what I had learned the previous summer at RAND, I would gladly work there without pay. It was true. I couldn’t imagine a more important way to serve humanity.” (p.40)
To those who have been involved with effective altruism, rationality, existential risk, AGI research and development, and AI risk: does any of this sound at all familiar?
To make this explicit, I am stating that several of these aspects of the atmosphere, working life and intellectual culture at RAND are strikingly similar to those of the communities just mentioned. This is especially true of the sense of urgency, secret insight, community and mission. The non-stop “shop” talk, the self-chosen work schedules, the high starting salaries (for some) and the gloves-off intellectual debate will be familiar to many—as will the lack of diversity. I have heard the exact same points made about not taking pensions and being willing to work for free.
Note that many of these parallels also hold between RAND and the Manhattan Project—such as the isolation, urgency and secrecy. Two quotes from Rhodes:
“[Szilard’s] deepest ambition, more profound even than his commitment to science, was somehow to save the world.” (p. 20)
“This informal collegiality partly explains the feeling among scientists of Szilard’s generation of membership in an exclusive group, almost a guild, of international scope and values” (p. 25)
Three takeaways
By this point, I hope the similarities with our current situation are jumping out at you. To make things more explicit:
In general, the AI risk community is very concerned about AI races (Cave & Ó hÉigeartaigh, 2018; Armstrong et al, 2013). Why would AI risk experts—who argued for AI risk when it was an early, fringe belief; who have dedicated large amounts of money and talent to AI governance and alignment research; and who are particularly concerned about the dangers of racing—ever advocate for and participate in an AGI sprint?
I am concerned that at some point in the next few decades, well-meaning and smart people who work on AGI research and development, alignment and governance will become convinced they are in an existential race with an unsafe and misuse-prone opponent. They might perceive that there is an ‘AGI gap’: that the opponent has some non-negligible (>10%) chance of being ‘first’ to AGI. They will therefore advocate for and participate in a ‘sprint’ to AGI (e.g. with a yearly budget of 0.4% of GDP, or ~$84bn). This advocacy could be the equivalent of the Einstein-Szilard Letter or the Gaither Report.
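As a quick back-of-the-envelope check on that figure (assuming US GDP of roughly $21 trillion, approximately its 2020 level, which seems to be the basis for the ~$84bn):

$$0.4\% \times \$21{,}000\,\text{bn} = 0.004 \times \$21{,}000\,\text{bn} \approx \$84\,\text{bn per year}$$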
However, this could be based on a mistake—the opponent may not be racing and there may be no AGI gap. If so, the main consequence of such a sprint would be to unilaterally speed up the dangerous development of AGI (with safety and structural problems unsolved), increasing existential risk. If mistaken like this, we are likely to view our advocacy and participation as the greatest mistake of our careers.
The most likely way this could manifest is a US (or USA and close allies) project motivated by the fear of a Chinese project. This is implicit in current claims from some in the AI risk community (that I do not deny) on the lack of alignment researchers in China (safety risk) and the undesirability of authoritarian AI development (misuse risk). To a lesser and near-term extent, it can be seen in some current DC rhetoric (from e.g. Senator Tom Cotton) about the risk of China and the existence of an AI arms race.
I draw three general lessons: make sure you’re actually in a race (information on whether you are is very valuable), be careful when secrecy is emphasised, and don’t give up your power as an expert too easily.
Make sure you’re actually in a race
Before any sprint is advocated for or participated in, we should be highly confident that there is a rival sprint occurring. The importance of this point is generally accepted in the AI risk community, but is worth underlining. Information about whether one is actually in a race is very important – accurate information on the Nazis in 1942 or the Soviets in 1957 could have avoided dangerous escalation.
In our case, evidence and data might take several forms—for example, data linked to the key AI inputs of talent and compute. We may be able to track top talent, for example if researchers ‘go dark’ and stop publishing publicly. We may also be able to track, monitor and verify compute location and usage. We may be able to track government spending on various projects. Cyber espionage and human intelligence on rivals’ officials and researchers could also provide evidence. We may also be able to track progress through open-source intelligence. More research and development on ways to obtain accurate information on whether a rival has launched a sprint is sorely needed.
Some important work on this has been done by the Center for Security and Emerging Technology, a national security think tank in DC, introducing empirical reality into the discussion and thereby deflating some of the DC AI race rhetoric. Examples include their work on the USA and its allies’ dominance of the semiconductor supply chain and of high-impact AI research, and their work on AI talent (demonstrating that over 85% of Chinese PhD students in the USA intend to stay, and generally do stay, in the USA).
You should also be careful when people tell you you’re in a race, as we see in my next point.
Be careful about secrecy
In both our cases, secrecy was used to sustain the gap myths and sideline those concerned about racing.
This is exemplified by the Rotblat case:
“Joseph Rotblat, after learning from a British associate in the fall of 1944 that there was no German program to deter, promptly resigned from the Manhattan Project. The only scientist to do so, Rotblat was induced, by threat of deportation, not to reveal his reasons for leaving, lest he inspire others to emulate him.” (p.29)
Leslie Groves, Director of the Manhattan Project, was infamously secretive, and successfully excluded scientists from many of the targeting and use decisions. Secretary of State James F. Byrnes prevented the Szilárd petition from reaching Truman—it is unclear to what extent Truman had preapproved (with full knowledge of the effects) the Hiroshima bombing. After the war, many researchers such as Szilárd and Oppenheimer had their security clearances revoked, were blacklisted from government projects, and were cut out of policy-making.
Secrecy also preserved the missile gap myth. In the 1950s, dissenting opinions on the missile gap from the Army and Navy were sidelined. It is unclear to this day to what extent Kennedy actually believed there was in fact a missile gap, as opposed to using it as a useful political attack on the Republicans. In 1961, the NIE revealing that the missile gap was a mistake was classified above ‘Top Secret’. One of the key effects of this was to sustain the myth of the missile gap, and the motivation for the ICBM sprint.
Some levels of secrecy are justified, for example to prevent proliferation of dangerous knowledge to opponents. But one should be careful about demands for secrecy. Secrecy can also be used to obscure the truth, sustain gap myths and sideline those with concerns about racing. Fear of losing clearances, and therefore career progression and policy influence, can be a powerful means to restrict important information and induce conformity. Demands for secrecy may sometimes be a way to keep you from knowing the full truth. It is also important to reflect on a point that Ellsberg raises. Secrecy has a certain glamour to it. It indicates belonging to a select group with special insights into what is really going on. This can be very tempting, but can mislead and distract:
“My clearances had been my undoing. And not only mine. Precisely because we were exposed to secret intelligence estimates [...] I and my colleagues at the RAND Corporation were preoccupied in the late fifties with the urgency of averting nuclear war by deterring a Soviet surprise attack that would exploit an alleged “missile gap.” That supposed dangerous US inferiority was exactly as unfounded in reality as the earlier Manhattan Project fear of a Nazi crash bomb program had been [...]
Working conscientiously, obsessively, on a wrong problem, countering an illusory threat, I and my colleagues at RAND had distracted ourselves and helped distract others from dealing with real dangers posed by the mutual superpower pursuit of nuclear weapons—dangers which we were helping make worse—and from real opportunities to make the world more secure. Unintentionally, yet inexcusably, we made our country and the world less safe.” (p. 296)
Scientists have a lot of power! Don’t give it up easily
As noted above, a US sprint to the Bomb may not have occurred when it did without the advocacy of top nuclear scientists, and would not have been successful without the participation of those experts. The US sprint to ICBMs would likewise not have occurred without the advocacy of many intelligence, military and scientific experts (for example in the Gaither Report), and their subsequent participation.
Similarly, an AGI sprint may not occur without the advocacy of top AI scientists, and will not succeed without the participation of those experts. On a smaller scale, researchers being “overwhelmingly opposed” to working on Lethal Autonomous Weapons (LAWS) (Zhang et al. 2021) has meaningfully slowed the USA’s drive towards LAWS.
Final thoughts
Finally, I want to return to the character of the Manhattan Project scientists. These were very good people, heroes even. Several of them kept the idea of a nuclear chain reaction secret throughout the 1930s; they worked incredibly hard (for what they thought was necessary) during WW2; and many of them drew attention to the dangers of nuclear weapons after the war, at heavy cost to their careers. They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons. They were also amongst the smartest of their generation. Nevertheless, they were convinced by a mistake.
Our current generation is vulnerable to also being convinced by a mistake. Not to put too fine a point on it: you, the reader, are not smarter than Einstein, Fermi and Oppenheimer. You’re not smarter than Kahn, Wohlstetter, von Neumann, the developers of game theory—the “best and the brightest”, the “Whiz Kids”. They were mistaken, and you could be too.
More generally, we are not in a completely unique, unprecedented situation. We don’t need to figure everything out from first principles. We can and must learn from previous generations, and their mistakes.
(Thanks to colleagues at CSER, CFI, GovAI and Rethink Priorities – especially Di Cooke, Matthijs Maas, Helen Toner, Alex Lintz and Markus Anderljung – for feedback.)
As of 2022-08-04, the certificate of this article is owned by Haydn Belfield (100%).
This was a very interesting post. Thank you for writing it.
I think it’s worth emphasizing that Rotblat’s decision to leave the Manhattan Project was based on information available to all other scientists in Los Alamos. As he recounts in 1985:
That so many scientists who agreed to become involved in the development of the atomic bomb cited the need to do so before the Germans did, and yet so few chose to terminate their involvement when it had become reasonably clear that the Germans would not develop the bomb provides an additional, separate cautionary tale besides the one your post focuses on. Misperceiving a technological race can, as you note, make people more likely to embark on ambitious projects aimed at accelerating the development of dangerous technology. But a second risk is that, once people have embarked on these projects and have become heavily invested in them, they will be much less likely to abandon them even after sufficient evidence against the existence of a technological race becomes available.
Thanks Pablo for those thoughts and the link—very interesting to read in his own words.
I completely agree that stopping a ‘sprint’ project is very hard—probably harder than not beginning one. The US didn’t slow down on ICBMs in 1960-2 either.
We can see some of the mechanisms by which this occurs around biological weapons programs. Nixon unilaterally ended the US one; Brezhnev increased the size of the secret Soviet one. So in the USSR there was a big political/military/industrial complex with a stake in the growth of the program and substantial lobbying power, and it shaped Soviet perceptions of ‘sunk costs’, precedent, doctrine, strategic need for a weapons technology, identities and norms; while in the US the opposite occurred.
Hi Haydn,
This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic being a possibly big problem.[1] Some people think it helps explain intelligence analysts’ underperformance in the forecasting tournaments, and I think there might be something to that explanation.
We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, and your points on competition with China are well taken.
What I felt was missing from the post was the counterfactual: what if the atomic scientists’ and defense intellectuals’ worst fears about their adversaries had been correct? It’s not hard to imagine. The USSR did seem poised to dominate in rocket capabilities at the time of Sputnik.
I think there’s some hindsight bias going on here. In the face of high uncertainty about an adversary’s intentions and capabilities, it’s not obvious to me that skepticism is the right response. Rather, we should weigh possible outcomes. In the Manhattan Project case, one of those possible outcomes was that a murderous totalitarian regime would be the first to develop nuclear weapons, become a permanent regional hegemon, or worse, a global superpower. I think the atomic scientists’ and U.S. leadership’s decision then was the right one, given their uncertainties at the time.
I think it would be especially interesting to see whether misperception is actually more common historically. But I think there are examples of “racing” where assessments were accurate or even under-confident (as you mention, thermonuclear weapons).
Thanks again for writing this! I think you raise a really important question — when is AI competition “suboptimal”?[2]
https://www.jstor.org/stable/43785861
In Charles Glaser’s sense (https://www.belfercenter.org/sites/default/files/files/publication/glaser.pdf)
Thanks for the kind words Christian—I’m looking forward to reading that report, it sounds fascinating.
I agree with your first point—I say “They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons.” Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights that 1) accurate information in 1942-3 (and 1957) would have been useful, and 2) when they found out the accurate information (in 1944 and 1961), it’s very interesting that it didn’t stop the arms buildup.
The question of whether over-, under- or calibrated confidence is more common is an interesting one that I’d like someone to research. It could perhaps usefully be narrowed to WWII and postwar USA. I offered some short examples, but this could easily be a paper. I’d think there are some theoretical reasons to expect overconfidence, such as paranoia and risk-aversion, or political economy incentives for the military-industrial complex to overemphasise risk (to get funding). But yes, an interesting open empirical question.
Thank you for the reply! I definitely didn’t mean to mischaracterize your opinions on that case :)
Agreed, a project like that would be great. Another point in favor of your argument that this is a dynamic to watch out for on AI competition is if verifying claims of superiority is harder for software (along the lines of Missy Cummings’s “The AI That Wasn’t There” https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/#essay2). That seems especially vulnerable to misperceptions
Given this, is it accurate to call Einstein’s letter a ‘tragedy’? The tragic part was continuing the nuclear program after the German program was shut down.
Thanks for this post Haydn, it nicely pulls together the different historical examples often discussed separately and I think points to a real danger.
This post argues for three takeaways—“Make sure you’re actually in a race”, “Be careful about secrecy”, and that scientists shouldn’t give up power easily—primarily based on two case studies where people overestimated the extent to which they were in a technological race against their enemies.
However, in the middle of the post there is a caveat:
This seems noteworthy, since it’s some evidence that people don’t have a general bias towards overestimating their enemies. However, these cases are, as far as I can see, not discussed in the takeaways section.
I think it can be tricky to draw general lessons from case studies. In this case, it seems that both of the cases in focus supported a particular view, whereas cases that supported the opposite view were only briefly mentioned; and did not affect the conclusions. I think it could be better to give an overview of all relevant historical examples and try to establish what the overall pattern is (whether people generally overestimate their enemies, and the extent to which they’re in a race, or not).
Hi Stefan,
Thanks for this response.
You’re quite right that if this post were arguing that there is an overall pattern, it would quite clearly be inadequate. It doesn’t define the universe of cases or make clear how representative these cases are of that universe, the two main studies could be criticised for selecting on the dependent variable, and it’s based primarily on quotes from two books.
However, I didn’t set out to answer something like the research question “which is more common in 20th century history, mistakenly sprinting or mistakenly failing to sprint?”—though I think that’s a very interesting question, and would like someone to look into it!
My intention for this blog post was for it to be fairly clear and memorable, aimed at a general audience—especially perhaps a machine learning researcher who doesn’t know much about history. The main takeaway I wanted wasn’t for people to think “this is the most common/likely outcome” but rather to add a historic example to their repertoire that they can refer to—“this was an outcome”. It was supposed to be a cautionary tale, a prompt to people to think not “all sprints are wrong” but rather “wait am I in an Ellsberg situation?”—and if so to have some general, sensible recommendations and questions to ask.
My aim was to express a worry (“be careful about mistaken sprints”) and illustrate it with two clear, memorable stories. There’s a reasonable scenario in which, in the next few decades, we feel we need to back a sprint, prompted by concern about another group’s or country’s sprint. If we do, and I’m not around to say “hey, let’s be careful about this and check we’re actually in a race”, then I hope these two case studies may stick in someone’s mind and lead them to say “OK, but let’s just check, don’t want to make the same mistake as Szilard and Ellsberg...”
I agree with your points on making sure you’re in a race and being careful about secrecy, but I don’t understand:
From my perspective it seems like the scientists wielded their power very effectively rather than “giving it up”. They just happened to wield the power in service of the wrong goal, due to mistaken beliefs about the state of reality.
Perhaps to frame it differently: what does it look like to not give up your power as a scientist?
Thanks Rohin. Yes I should perhaps have spelled this out more. I was thinking about two things—focussed on those two stages of advocacy and participation.
1. Don’t just get swept up in race rhetoric and join the advocacy: “oh there’s nothing we can do to prevent this, we may as well just join and be loud advocates so we have some chance to shape it”. Well no, whether a sprint occurs is not just in the hands of politicians and the military, but also to a large extent in the hands of scientists. Scientists have proven crucial to advocacy for, and participation in, sprints. Don’t give up your power too easily.
2. You don’t have to stay if it turns out you’re not actually in a race and you don’t have any influence on the sprint program. There were several times in 1945 when it seems to me that scientists gave up their power too easily—over when and how the bomb was used, and what information was given to the US public. It’s striking that Rotblat was the only one to resign—and he was leant on to keep his real reasons secret.
One can also see this later in 1949 and the decision to go for the thermonuclear bomb. Oppenheimer, Conant, Fermi and Bethe all strongly opposed that second ‘sprint’ (“It is necessarily an evil thing considered in any light.”). They were overruled, and yet continued to actively participate in the program. The only person to leave the program (Ellsberg thinks, p.291-296) was Ellsberg’s own father, a factory designer—who also kept it secret.
Exit or the threat of exit can be a powerful way to shape outcomes—I discuss this further in Activism by the AI Community. Don’t give up your power too easily.
Cool, that makes sense, thanks!
Great article. At least the Manhattan scientists weren’t working on the bomb because of the difficulty getting jobs in arms control...
What do you mean by this?
I think “working on the bomb” refers to working towards AGI, and “jobs in arms control” to jobs whose goal is positively shaping the development of AI.
This is correct (wiz’s comment was originally a remark they made in a private convo with me)
There are also parallels in biosecurity, though in these cases the West was not escalating, and others misread the strategic threat. Other than the Soviet program, which you mentioned, the obvious case is the Japanese program, which was supposedly started because Westerners were writing about the dangers of pathogens in warfare and how they should be banned; the Japanese assumed the Western powers had programs pre-WWII, and so developed and tested bioweapons at a mass scale in China.
See also the following posts, published a few months after this one, which discuss AGI race dynamics (in the context of a fictional AI lab named Magma):
‘AI strategy nearcasting’ (Karnofsky)
‘How might we align transformative AI if it’s developed very soon?’ (Karnofsky)
‘Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover’ (Cotra)
I draw two more conclusions from this excellent post:
That we should avoid strategizing around “pivotal acts.”
That AGI labs should avoid excusing their research into AGI by just assuming that others are probably even closer to AGI so that it doesn’t make a difference that they are racing toward AGI too.
Thank you, and I agree on both counts.
Thanks for writing this, I found it very insightful! I just watched ‘The Day After Trinity’ over the weekend and one thing that stood out to me was that once the machinery of the Manhattan program was in motion it seemed like there was no stopping it. Relevant section of Robert Wilson and Frank Oppenheimer talking about it
Thanks! And thanks for this link. Very moving on their sense of powerlessness.
The possibility of better weapon governance (with what impact?) in exchange for an increased risk of Nazi, USSR, or Japanese dominance during a total war seems like a bad tradeoff.
How would the strategy of delaying development have been pitched during a total war? How would the development have been done instead? It’s hard to imagine the counterfactual here.
Thanks for these questions! I tried to answer your first in my reply to Christian.
On your second, “delaying development” makes it sound like the natural outcome/null hypothesis is a sprint—but it’s remarkable how the more ‘natural’ outcome was not to sprint, and how much effort it took to make the US sprint.
To get initial interest at the beginning of the war required lots of advocacy from top scientists, like Einstein. Even then, the USA didn’t really do anything from 1939 until 1941, when an Australian scientist went to the USA, persuaded US scientists and promised that Britain would share all its research and resources. Britain was later cut out by the Americans, and didn’t have a serious independent program for the rest of the war. Germany considered it in the early war, but decided against in 1942. During the war, neither the USSR nor Japan had serious programs (and France was collaborating with Germany). All four major states (UK, Germany, USSR, Japan) realised it would cost a huge amount in terms of money, people and scarce resources like iron, and probably not come in time to affect the course of the war.
The counterfactual is just “The US acts like the other major powers of the time and decides not to launch a sprint program that costs 0.4% of GDP during a total war, and that probably won’t affect who wins the war”.
The big difference is that Japan might not even exist as a nation or culture, due to Operation Downfall, starvation and insanity. The reason is that without nukes the invasion of Japan would have begun, and one of the most important characteristics the Japanese had was both an entire generation raised under propaganda, which is enough to change cultural values, and a near fanaticism about honorable death. Death and battle were frankly over-glorified in Imperial Japan, and soldiers would virtually never surrender. The result would have been the non-existence of Japan within several years.
This assumes that nuclear weapons caused Japan to surrender, and that without nuclear weapons Japan would not have surrendered. Such an assumption is plausible but by no means certain.
For people unfamiliar with this debate, I consider Debate over the Japanese Surrender a good introduction.
The surrender was really about the Emperor having a way out, and being able to give the “most cruel bomb” statement thanks to a discontinuous jump in weapon power. Even so, a group of twenty-year-olds tried to continue the war, and the reason it failed was that the Emperor chose surrender—and to Japan, the Emperor was basically as important as the God Emperor of Mankind is in the Imperium of Man from 40k. Up to this point, despite things steadily getting worse, Japan still couldn’t surrender; I think because everything got worse continuously, there was no moment strong enough to serve as a rupture and force surrender into their heads.
I’ll grant you this though: this scenario isn’t inevitable. Obviously without hindsight and nukes it’s really hard to deal with, but it may not have happened at all.
Thanks for this very illuminating post.
One thing:
Most people who’ve thought about AI risk, I think, would agree that most of the risk comes not from misuse risk, but from accident risk (i.e., not realizing the prepotent AI one is deploying is misaligned).[1] Therefore, being convinced the opponent is misuse-prone is actually not necessary, I don’t think, to believe one is in an existential race. All that’s necessary is to believe there is an opponent at all.
I’d define a prepotent AI system (or cooperating collection of systems) as one that cannot be controlled by humanity, and which is at least as powerful as humanity as a whole with respect to shaping the world. (By this definition, such an AI system need not be superintelligent, or even generally intelligent or economically transformative. It may have powerful capabilities in a narrow domain that enable prepotence, such as technological autonomy, replication speed, or social manipulation.)
An AGI war machine is different from nuclear weapons in a few important ways: a) it risks blowback (and indeed existential blowback)—somewhat like biological WMDs (and this should provide common ground for transparency and regulation); b) an AGI weapon is vastly more difficult to construct, which does and should continue to buy time to develop cooperation that wasn’t available for the nuclear threat; and c) a MAD scenario may not occur, as one AGI may be able to neutralize other AGIs without incurring a grave cost (the lack of the relative short-term security MAD provides may incentivize cooperation). One could argue that the AGI threat may prove to be more like the trajectory global warming mitigation is on rather than nuclear weapons development, in the sense that decades of tireless advocacy will lead the way towards increasing public awareness, followed by prioritization at the highest level, followed by an uncommonly high degree of multinational cooperation. All of which is to say, I suspect nuclear weapons development may not be the most instructive of comparisons.
“Finally, I want to return to the character of the Manhattan Project scientists. … Nevertheless, they were convinced by a mistake.”
This isn’t a comprehensive survey, and there is a possibility that most of them, for what it’s worth, thought it was the intelligent course of action given the information available to them at the time, or perhaps even with hindsight. As well, there is the possibility that Einstein and others were mistaken in thinking they made a mistake (such as, perhaps, when Einstein removed the cosmological constant from GR). If the US hadn’t taken the lead, there is the possibility that a nation such as the USSR might have eventually developed them first and utilized these weapons in a brutal empire-building campaign. Appeals to authority, I feel, should be made very carefully.
Nitpick: The links in the ToC are gdocs links, not internal Forum section refs.
woops thanks for catching—have cut
I wonder if you have some addenda to the point on secrecy and the AI safety and EA community’s thoughts about info hazards. Are we building a community that automatically believes both the risks and the competition to be higher because organizations (e.g. MIRI) shout wolf while keeping why they shout wolf relatively secret (i.e. their experiments in making aligned AI)? I don’t know what my own opinion on this is, but would you argue for a more open policy, given these insights?
Thanks for this. I’m more counselling “be careful about secrecy” rather than “don’t be secret”. Especially be careful about secret sprints, being told you’re in a race but can’t see the secret information why, and careful about “you have to take part in this secret project”.
On the capability side, the shift in AI/ML publication and release norms towards staged release (not releasing full model immediately but carefully checking for misuse potential first), structured access (through APIs) and so on has been positive, I think.
On the risks/analysis side, MIRI have their own “nondisclosed-by-default” policy on publication. CSER and other academic research groups tend towards more of a “disclosed-by-default” policy.
Is it accurate to say that the US and Germans were in a nuclear weapons race until 1942? So perhaps the takeaway is “if you’re in a race, make sure to keep checking that the race is still on”.
I think the crucial thing is funding levels.
It was only by October 1941 (after substantial nudging from the British) that Roosevelt approved serious funding. As a reminder, I’m particularly interested in ‘sprint’ projects with substantial funding: for example those in which the peak year funding reached 0.4% of GDP (Stine, 2009, see also Grace, 2015).
So to some extent they were in a race 1939-1942, but I would suggest it wasn’t particularly intense, it wasn’t a sprint race.
I suppose sprints start out as jogs.
Excellent piece.
Did the RAND scientists never entertain the possibility that they were being used by the industrialists who stood to benefit from every “sprint” and who may have had a hand in providing them the raw data supporting “mistaken” assessments of enemy capabilities? Perhaps calling these assessments “mistakes” might be a face-saving way of admitting to having been manipulated by people less technically brilliant?
Thanks!
This was very much Ellsberg’s view on e.g. the 80,000 Hours podcast:
I would like to contact Daniel Ellsberg and ask what he thinks about the amount of good that could have been done from an aligned Manhattan project member using their knowledge of the project to advance safety. I expect the answer will be less than the harm caused by participating, whereas @richard_ngo thinks otherwise. Anyone have any recommendations on how I might do this? The only thing I found online was his publicist.
What is “LAWS”? I have not been able to get useful results from Google, and the paper’s abstract was not illuminating.
Apologies! LAWS = Lethal Autonomous Weapons. Have edited the text.
Two criticisms:
On two occasions you referred to nuclear war as an “existential risk”. It’s not. You also referred to 1970s-tier bioweapons as an “existential risk”; they weren’t. Both are GCRs but not X; there have never been enough nukes to kill all humans and even infectious diseases will have R drop below 1 before population density drops to 0. We are at a point now where biotechnology is beginning to pose notable X-risk, but we weren’t then.
You mentioned that the communities you reference, and EA/Rats, are overwhelmingly male, but you do not make any actual argument about how this is relevant. Do remember that a non-trivial fraction of Rats are not feminists, and this pings their “hostile politics” detectors (as does the editing of the quote from “men” to “people”); that’s a loss in persuasiveness, which should be avoided unless you need it to make some sort of point.
While this is a well written post full of fascinating historical details...
I remain persuaded that academia’s passion for immersion in the deep analysis of countless details is obscuring simpler more important truths. And I really do apologize for repeating this, but...
Unless and until the pace of the knowledge explosion is brought under control so as to match the human ability to effectively respond to emerging threats....
None of this matters.
We can analyze the past, speculate about the future, dive in to the technical details of every emerging threat and so on as much as we want. But so long as the source of all these threats continues to produce ever more threats at an ever accelerating pace, we are collectively headed for destruction.
So much, almost all of the analysis on this site, seems based on the assumption that if we are just smart enough, if we just analyze carefully enough, we can somehow manage whatever pops out of the ever accelerating knowledge explosion. That is a false assumption.
I’m asking readers to trade in your sophistication for clear minded simplicity. As a place to start, focus on what the word “accelerating” actually means. It means faster, and faster, and faster.
So long as knowledge development is feeding back upon itself, resulting in an ever accelerating rate of knowledge and power development...
Nothing on this site matters.
If we are going to accept an ever accelerating knowledge explosion as a given, then we would be wiser to buy a surfboard, head to the beach with our favorite friend, and focus on enjoying civilization while we still have it.
Perhaps one could just bite the bullet and vow to never work on known dangerous tech ever, even if a race is possible?
Maybe the risk of losing to a totalitarian regime due to lack of superweapon advantage is an acceptable cost of lowering FUCKING EXISTENTIAL RISK?
Maybe it’s the pinnacle of human hubris to think that your specific brand of politics is worth gambling the existence of human civilization over.
I mean, the idea that superweapons can alter a major war has never had historical evidence, at all. I honestly think that it’s mostly the pride of scientists fueling that fantasy. You can see this trope in modern fiction. Many times the protagonist in a war story will come up with a gadget or tactic and that leads to winning a war… but any military historian would tell you that such things are unlikely to make a difference in real life. Why do we have this trope? The power-fantasy of the individual! It’s just like the fantasy of being a superhero with superhuman powers, winning wars single-handedly, except dressed up in a futile attempt at realism!
The AI scenario is a bit different if you believe in FOOM, but in that case it’s more likely to be suicidal than a targeted weapon anyway so you should still disavow it.
If there is no FOOM and it progresses more like how Robin Hanson envisions it, then past experience shows us that there’s more to winning a war than simply having a new technology.