Career Path: Nuclear Weapons Security Engineering

Nuclear weapons are one of the only direct means to an existential catastrophe for humanity. Other existential risk factors such as global warming, great power war, and misaligned AI could not alone pose a specific credible threat to Earth’s population of nearly eight billion. Instead, these stories only reach human extinction through bioweapons, asteroids, or something closer to the conclusion of Gwern’s recent story about AI catastrophe:

All over Earth, the remaining ICBMs launch.
How can we engineer a safer nuclear weapons system? A few ideas:
Information security has been previously recommended as an EA career path. There’s strong reason to believe that controlling computer systems will become increasingly valuable over the next century. But traditional cybersecurity credentials might not help somebody work directly on critical systems. How can cybersecurity engineering support nuclear weapons safety?
Stanislav Petrov is only famous because his missile-detection system malfunctioned. How do you prevent false alarms of a nuclear launch? Who in the American, Chinese, Russian, and European security states is working to exploit the vulnerabilities in each other’s systems?
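One standard engineering answer to the false-alarm problem is redundancy: require several independent detection systems (satellite infrared, ground radar) to agree before escalating an alert. A minimal sketch of why this helps, using illustrative made-up per-sensor error rates rather than real figures:

```python
# Sketch of "dual phenomenology": require several independent sensor
# systems to agree before treating a detection as a real launch.
# Per-sensor error rates below are illustrative, not real figures.

def combined_rates(false_alarm_probs, detection_probs):
    """Error rates when ALL sensors must agree on a detection.

    Under independence, probabilities multiply: false alarms become
    far rarer, at the cost of slightly reduced detection probability.
    """
    fa = 1.0
    for p in false_alarm_probs:
        fa *= p
    det = 1.0
    for p in detection_probs:
        det *= p
    return fa, det

# Assumed single-sensor performance: 1-in-1,000 false alarms per scan,
# 99% chance of detecting a genuine launch.
fa, det = combined_rates([1e-3] * 3, [0.99] * 3)
print(f"false-alarm probability with 3 agreeing sensors: {fa:.0e}")  # 1e-09
print(f"detection probability with 3 agreeing sensors: {det:.4f}")   # 0.9703
```

The trade-off is the whole design problem: each added confirmation step buys orders of magnitude fewer false alarms but shaves detection probability, and real sensors are not fully independent.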
Security for defense contractors of the US government. The nuts and bolts of American military security are quite literally built by Boeing, Raytheon, and other for-profit defense contractors. Do these companies have ethics boards? Do they engage with academics on best practices, or build internal teams to work on safety? If they received funding from FTX’s Future Fund, what safety projects would they be willing to consider?
Here’s a US Senate hearing on detecting smuggled nuclear weapons. Nuclear non-proliferation is one of the most common forms of advocacy on the topic. How can non-proliferation efforts be improved by technological security of weapons systems? How could new groups acquire nuclear weapons today, and how could we close those holes?
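One concrete technical lever in smuggling detection is the alarm threshold on a radiation portal monitor: background radiation counts are approximately Poisson-distributed, so where you set the threshold directly trades false alarms against missed sources. A sketch with hypothetical count rates (the real numbers depend on detector, dwell time, and shielding):

```python
import math

# Hypothetical sketch: a radiation portal monitor counts gamma events
# during one vehicle scan. Background counts are roughly Poisson, so
# the alarm threshold trades false-alarm rate against missed sources.

def poisson_tail(mean, k):
    """P(X >= k) for X ~ Poisson(mean), via the complement of the CDF."""
    p, total = math.exp(-mean), 0.0
    for i in range(k):
        total += p
        p *= mean / (i + 1)
    return 1.0 - total

background = 100.0  # mean background counts per scan (assumed)
source = 160.0      # mean counts with a shielded source present (assumed)

# Alarm when counts exceed background by about 4 standard deviations.
threshold = int(background + 4 * math.sqrt(background))

false_alarm = poisson_tail(background, threshold)
detection = poisson_tail(source, threshold)
print(f"threshold: {threshold} counts")
print(f"false-alarm probability per scan: {false_alarm:.2e}")
print(f"detection probability: {detection:.2f}")
```

At a busy port scanning thousands of vehicles a day, even a small per-scan false-alarm probability produces routine alarms, which is why operators care as much about the false-alarm column as the detection column.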
The 80,000 Hours Podcast had an excellent conversation with Daniel Ellsberg about his book The Doomsday Machine. They’ve also spoken with ALLFED, which is working on a host of engineering solutions to existential risk problems. Here is ALLFED’s job board.
Security engineering is only one way to improve nuclear safety. Advocacy, grantmaking, and other non-technical methods can also advance nuclear security. On the other hand, we’ve seen major grantmakers withdraw from the area, citing insufficient results. It’s my impression that, compared to political methods, engineering has been relatively underexplored as a means to nuclear security. Perhaps it will come to be seen as especially important by those focused on the risks of misaligned artificial intelligence.
The Department of Energy manages the American nuclear stockpile. Here is their job board. Here is a recommendation report prepared by the International Atomic Energy Agency. The Johns Hopkins University Applied Physics Laboratory, a prominent US defense contractor where an EA recently received a grant to work on AI safety internally, also works on nuclear safety. What other governmental organizations are setting the world’s nuclear security policies?
In the 2016 report where 80,000 Hours declared nuclear security a “sometimes recommended” path for improving the world, they note a key cause for concern: “This issue is not as neglected as most other issues we prioritize. Current spending is between $1 billion and $10 billion per year.” In 2022, with longtermist philanthropy looking to deploy billions of dollars over the next decade, do we still believe nuclear security engineering is too crowded to be worth working on?
Fun fact: For 20 years at the peak of the Cold War, the US nuclear launch code was “00000000”.
https://gizmodo.com/for-20-years-the-nuclear-launch-code-at-us-minuteman-si-1473483587
H/t: Gavin Leech
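For scale, here is a back-of-the-envelope calculation (with assumed numbers) of the security an eight-digit code could provide in principle, all of which a fixed all-zeros code forfeits, since the “secret” is known in advance:

```python
import math

# Back-of-the-envelope: the security budget of an 8-digit numeric code.
# The guessing rate is an assumption for illustration only.

digits = 8
keyspace = 10 ** digits                # 100,000,000 possible codes
entropy_bits = math.log2(keyspace)     # ~26.6 bits

guesses_per_second = 1.0               # manual entry at a console (assumed)
worst_case_years = keyspace / guesses_per_second / (86400 * 365)

print(f"keyspace: {keyspace:,} codes ({entropy_bits:.1f} bits)")
print(f"worst-case brute force at 1 guess/s: {worst_case_years:.1f} years")
```

A constant, widely known code has zero effective entropy regardless of its length, which is the real lesson of the story above.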