In The Precipice, Toby Ord mentions the possibility of “a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)” (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that “Such a ‘doomsday device’ was first suggested by Leo Szilard in 1950”. Wikipedia similarly says:
The concept of a cobalt bomb was originally described in a radio program by physicist Leó Szilárd on February 26, 1950. His intent was not to propose that such a weapon be built, but to show that nuclear weapon technology would soon reach the point where it could end human life on Earth, a doomsday device. Such “salted” weapons were requested by the U.S. Air Force and seriously investigated, but not deployed. [...]
That’s the extent of my knowledge of cobalt bombs, so I’m poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom’s subtypes of information hazards:
Attention hazard: The mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data are already “known”.
Because there are countless avenues for doing harm, an adversary faces a vast search task in finding out which avenue is most likely to achieve his goals. Drawing the adversary’s attention to a subset of especially potent avenues can greatly facilitate the search. For example, if we focus our concern and our discourse on the challenge of defending against viral attacks, this may signal to an adversary that viral weapons—as distinct from, say, conventional explosives or chemical weapons—constitute an especially promising domain in which to search for destructive applications. The better we manage to focus our defensive deliberations on our greatest vulnerabilities, the more useful our conclusions may be to a potential adversary.
It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised—or at least not acted on—the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.
I was a little surprised that Ord didn’t discuss the potential information hazards angle of this example, especially as he discusses a similar example with regard to Japanese bioweapons in WWII elsewhere in the book.
I was also surprised that it was Szilard who took this action. This is because one of the main things I know Szilard for is being arguably one of the earliest (perhaps the earliest) examples of a scientist bucking standard openness norms due to concerns about information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace states:
Leó Szilárd patented the nuclear chain reaction in 1934. He then asked the British War Office to hold the patent in secret, to prevent the Germans from creating nuclear weapons (Section 2.1). After the discovery of fission in 1938, Szilárd tried to convince other physicists to keep their discoveries secret, with limited success.
Collection of all prior work I found that seemed substantially relevant to information hazards
Information hazards: a very simple typology—Will Bradshaw, 2020
Information hazards and downside risks—Michael Aird (me), 2020
Information hazards—EA concepts
Information Hazards in Biotechnology—Lewis et al., 2019
Bioinfohazards—Crawford, Adamson, Ladish, 2019
Information Hazards—Bostrom, 2011 (I believe this is the paper that introduced the term)
Terrorism, Tylenol, and dangerous information—Davis_Kingsley, 2018
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical—Gentzel, 2018
Horsepox synthesis: A case of the unilateralist’s curse?—Lewis, 2018
Mitigating catastrophic biorisks—Esvelt, 2020
The Precipice (particularly pages 135–137)—Ord, 2020
Information hazard—LW Wiki
Thoughts on The Weapon of Openness—Will Bradshaw, 2020
Exploring the Streisand Effect—Will Bradshaw, 2020
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks—Alexey Turchin, 2018
A point of clarification on infohazard terminology—eukaryote, 2020
Somewhat less directly relevant
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?—Shevlane & Dafoe, 2020 (commentary here)
The Vulnerable World Hypothesis—Bostrom, 2019 (footnotes 39 and 41 in particular)
Managing risk in the EA policy space—weeatquince, 2019 (touches briefly on information hazards)
Strategic Implications of Openness in AI Development—Bostrom, 2017 (sort-of relevant, though not explicitly about information hazards)
[Review] On the Chatham House Rule—Pace, 2019
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.