In The Precipice, Toby Ord mentions the possibility of "a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)" (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that "Such a 'doomsday device' was first suggested by Leo Szilard in 1950". Wikipedia similarly says:
The concept of a cobalt bomb was originally described in a radio program by physicist Leó Szilárd on February 26, 1950. His intent was not to propose that such a weapon be built, but to show that nuclear weapon technology would soon reach the point where it could end human life on Earth, a doomsday device. Such "salted" weapons were requested by the U.S. Air Force and seriously investigated, but not deployed.[citation needed] [...]
That's the extent of my knowledge of cobalt bombs, so I'm poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom's subtypes of information hazards:
Attention hazard: The mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data are already "known".
Because there are countless avenues for doing harm, an adversary faces a vast search task in finding out which avenue is most likely to achieve his goals. Drawing the adversary's attention to a subset of especially potent avenues can greatly facilitate the search. For example, if we focus our concern and our discourse on the challenge of defending against viral attacks, this may signal to an adversary that viral weapons - as distinct from, say, conventional explosives or chemical weapons - constitute an especially promising domain in which to search for destructive applications. The better we manage to focus our defensive deliberations on our greatest vulnerabilities, the more useful our conclusions may be to a potential adversary.
It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised, or at least not acted on, the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.
I was a little surprised that Ord didn't discuss the potential information hazards angle of this example, especially as he discusses a similar example, regarding Japanese bioweapons in WWII, elsewhere in the book.
I was also surprised that it was Szilard who took this action, because one of the main things I know Szilard for is being arguably one of the earliest examples (the earliest?) of a scientist bucking standard openness norms due to concerns about information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace states:
Leó Szilárd patented the nuclear chain reaction in 1934. He then asked the British War Office to hold the patent in secret, to prevent the Germans from creating nuclear weapons (Section 2.1). After the discovery of fission in 1938, Szilárd tried to convince other physicists to keep their discoveries secret, with limited success.
Collection of all prior work I found that seemed substantially relevant to information hazards
Information hazards: a very simple typology - Will Bradshaw, 2020
Information hazards and downside risks - Michael Aird (me), 2020
Information hazards - EA concepts
Information Hazards in Biotechnology - Lewis et al., 2019
Bioinfohazards - Crawford, Adamson, Ladish, 2019
Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)
Terrorism, Tylenol, and dangerous information - Davis_Kingsley, 2018
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical - Gentzel, 2018
Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018
Mitigating catastrophic biorisks - Esvelt, 2020
The Precipice (particularly pages 135-137) - Ord, 2020
Information hazard - LW Wiki
Thoughts on The Weapon of Openness - Will Bradshaw, 2020
Exploring the Streisand Effect - Will Bradshaw, 2020
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks - Alexey Turchin, 2018
A point of clarification on infohazard terminology - eukaryote, 2020
Somewhat less directly relevant
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? - Shevlane & Dafoe, 2020 (commentary here)
The Vulnerable World Hypothesis - Bostrom, 2019 (footnotes 39 and 41 in particular)
Managing risk in the EA policy space - weeatquince, 2019 (touches briefly on information hazards)
Strategic Implications of Openness in AI Development - Bostrom, 2017 (sort-of relevant, though not explicitly about information hazards)
[Review] On the Chatham House Rule (Ben Pace, Dec 2019) - Pace, 2019
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.