Nuclear Risk Overview: CERI Summer Research Fellowship
I’m Will Aldred, and I’ve written this post in my role as Nuclear Risk Programme Coordinator at CERI. This is my very first post on the EA Forum; I very much welcome comments and feedback.
This post introduces the nuclear risk cause area stream within the Cambridge Existential Risks Initiative (CERI) summer research fellowship (SRF) 2022. Potential applicants: for an overview of the CERI SRF, please see our SRF announcement post, and to apply (April 3rd deadline), go here. Potential mentors: please complete this short form if you’re interested in supervising a research project in this programme, and we’ll be in touch with you in due course.
The post is split into two halves. First, I give some details specific to the nuclear risk SRF. Second, I give a brief summary of the nuclear risk landscape and of how I hope CERI can contribute toward solving (or at least mitigating) the nuclear risk problem.
I imagine my subsection on doing nuclear risk research now to skill-build for higher future impact in AI governance might be seen as the most original/interesting/controversial part of this post, which is why I’m highlighting it here. I especially welcome comments and feedback relating to that subsection.
For potential applicants
Project ideas
Some of the highest-impact projects a CERI fellow could undertake are listed in ‘Nuclear risk research ideas’ (Aird & Aldred, 2022a). Many of these research ideas come with accompanying guidance on how one might begin tackling them, which makes that document possibly one of the best resources for nuclear risk project inspiration.
That said, there are other projects one could undertake, and I do not wish to limit applicants to considering only the ideas from the above list: you may propose another project if you feel you are a better fit for it, and/or if you hold a strong inside view that your own project idea scores highly for impact.
Some reasons for doing a nuclear risk project
This section is mainly aimed at generalist-type potential applicants on the fence between nuclear risk and another cause area stream.
Before I dive into reasons why one might consider doing a summer project in nuclear risk, I’ll first state two reasons against: personal fit and higher impact elsewhere. ‘Personal fit: why being good at your job is even more important than people think’ (Todd, 2021a) is my recommendation for those interested in reading more on personal fit. As for higher impact elsewhere, my take is that potential applicants who are a good fit for nuclear risk research are likely also a good fit for AI strategy and governance, biorisk policy, and possibly other highest-impact areas.[1] I return to the interrelation I perceive between the nuclear risk and AI governance research pipelines later in this section, in my third and final reason for considering a summer project in nuclear risk.[2] I now discuss my reasons in order.
Low-hanging fruit via neglectedness
Work going into nuclear risk reduction within the existential risk (x-risk) community has been neither large in quantity nor diverse in scope: nuclear x-risk has around 12 full-time equivalents (FTEs), and around two-thirds of this total comes from one organisation—ALLFED (Aird & Aldred, 2022b). Therefore, there are still low-hanging fruit to be picked. And a junior-level researcher working on a nuclear risk project over a summer is probably, all else equal and relative to working in another x-risk cause area, more likely to produce something directly impactful (i.e., novel, relevant to important decisions, and easy for key decision-maker(s) to use). See ‘Nuclear risk research ideas’ for why I think this direct impact is attainable. Note that I am not dismissing the field of nuclear security outside of EA and x-risk: I discuss this in ‘Who else is working on the problem?’.
To help calibrate ourselves, Aird thinks there should be somewhere between 2 and 1000 FTEs working on nuclear x-risk a few years from now (and between $200,000 and $100 million spent per year; his tentative 90% confidence intervals; admittedly these are huge ranges!). He goes on to say, “I feel fairly confident that the actual allocation right now should be at least several times as large as those lower bounds, because we could use such labor and spending to help us work out how much to prioritize nuclear risk” (2022b, p. 11).
Getting to the top
[Caveat: epistemic status is less firm for this subsection than for the rest of this post.]
In conversation, one EA who is well informed on the Washington DC security and technology policy scene[3] claimed (i) that “It’s relatively easy to get to the forefront of the field for nuclear risk work.” They went on to say they might even (ii) recommend that people wanting to work on AI safety and international security start out at the intersection of nuclear risk and AI. This is partly because the nuclear-AI intersection is currently perceived in policy circles, they say, as more credible than alignment-flavoured AI safety, and partly because understanding how nuclear security works might provide a useful intellectual background applicable to other x-risk areas, such as long-term AI governance (a field that may be hard to understand directly, since it’s so new and underdeveloped).
If (i) is true, then there are strong implications for credibility building, especially for those looking toward roles in policy and in policy-adjacent think tanks. Making it to the forefront of the nuclear risk field—a field which presents compelling engagement opportunities, according to this DC EA, and which always seems to get top-level policymaker attention—would place one well for talking about global catastrophic and existential risks in general, and for expanding the Overton window toward reducing other x-risks, like unaligned AI.
Skill-building for high future impact in AI governance
It’s been very hard to enter this [long-term AI strategy and governance] path as a researcher unless you’re able to become one of the top (approximately) 30 people in the field relatively quickly. While mentors and open positions are still scarce, some top organisations have recently recruited junior and mid-career staff to serve as research assistants, analysts, and fellows. Our guess is that obtaining a research position will remain very competitive but positions will continue to gradually open up. (Todd, 2021b)

Long-term AI strategy and governance is thought to score very highly for impact under an importance, tractability, neglectedness (ITN) framework: it’s one of the top two career paths (alongside technical AI safety) on 80,000 Hours’ list of highest-impact career paths.[4] However, long-term AI governance is a pre-paradigmatic field at present, in which it’s difficult for a junior/aspiring researcher to do research that’s directly impactful, and also difficult to do research that builds skill for future impact.
(Note: I’ll abbreviate ‘strategy and governance’ to just ‘governance’ from now on, and I’ll drop the ‘long-term’ for concision, such that whenever I use ‘AI governance’ I’m referring to ‘long-term AI strategy and governance.’)
At this point, I’ll spell out my thoughts in sequence.
1. I’d like there to be more high-quality AI governance researchers in a few years’ time, because I think (largely through deferring to 80,000 Hours and Karnofsky, 2022, though with some amount of inside view) that AI governance is a hugely important cause—possibly in the top couple of cause areas.
2. Junior/aspiring researchers are having a hard time getting into AI governance.
3. It would be great if there were some way to help skill up more junior/aspiring researchers wanting to get into AI governance.
4. Oh wait, maybe they can do nuclear risk research now, and in doing so:
a) Have direct impact now (I’ve stated why I think this could be achieved in my ‘Low-hanging fruit’ section).
b) Skill up for doing AI governance research in the future.
Point 4b is where the action is, so I’ll expand on it. Firstly, I should say that I am aware of motivated reasoning, privileging the hypothesis, and surprising and suspicious convergence; I have thought critically about whether 4b arises out of these, and I believe the point genuinely reflects my considered view. To be clear, the nuclear-risk-to-AI-governance pathway I’m proposing is something I think some fraction of aspiring AI governance researchers should consider; one clear group I wouldn’t want to nudge into this fraction is those already at the level needed to obtain positions in AI governance.[5]
The best exploration of aspiring effective altruist (EA)/x-risk researcher ‘goals’ for projects I’ve seen is ‘Goals we might have when taking actions to improve the EA-aligned research pipeline’ (Aird, 2021a). Below, as main bullets, is the list of goals from Aird’s post I consider most relevant to a CERI summer fellow, and as sub-bullets I expand on things relevant to my point 4b.[6]
(Meta note: I chose to create this list in this way—first picking all goals that seem most relevant to a CERI fellow, then commenting on them with respect to 4b, where I make both ‘positive (+)’ and ‘negative (‒)’ comments—in order to avoid cherry picking pieces of reasoning that support my 4b hypothesis.)
Building knowledge, skills, etc.
(+) Research skill development
Developing research skill requires doing research. And x-risk research skill is, all else equal, best developed by doing x-risk research. Therefore, for building research skill relevant to AI governance, doing x-risk research on nuclear risk is better than the counterfactual of not doing x-risk research in AI governance (see footnote [6] for the counterfactual I have in mind).
(+) Transferable domain knowledge
One who carries out nuclear risk research will develop knowledge in areas such as geopolitics, international relations, and how policy, governance and coordination work. Such knowledge is applicable to AI governance research.
(‒) But domain knowledge acquired in the course of doing nuclear risk research is not optimised for doing future AI governance research.
Network-building
(+) Connections with EA/x-risk-aligned researchers and decision makers come with doing EA/x-risk-aligned research. Additionally, doing nuclear risk research may result in some connections with non EA/x-risk policymakers.
(‒) But one’s network built through nuclear risk research is not an AI governance-optimized network.
Testing fit
(+) To test one’s fit for research, one needs to try one’s hand at research. And testing one’s fit for x-risk research is, all else equal, best done by trying one’s hand at x-risk research. Therefore, for testing one’s fit for AI governance research, doing x-risk research in nuclear risk is better than the counterfactual of not doing x-risk research in AI governance.
(+) [Caveat: epistemic status here is less firm. Please comment if you believe my thinking here needs updating.] I think it may even be the case that, for aspiring/junior researchers wanting to test fit, doing nuclear risk research is better than doing AI governance research. This is because there are research questions at the forefront of nuclear risk that an aspiring/junior researcher can work on now, where the path to impact and the decision relevance of findings are fairly direct (see my ‘Low-hanging fruit’ section). By contrast, to my understanding (based on talking to around five people with some knowledge of the area), an aspiring/junior researcher tackling AI governance will be doing something that looks more like mapping the landscape, assessing different possible directions for research, or doing ‘actual’ research where the path to impact and decision relevance is fairly indirect.
Gaining credible signals of fit
(+) Perhaps the most credible signal of fit for research is strong prior research output. And, all else equal, perhaps the most credible signal of fit for x-risk research is strong prior x-risk research output. Therefore, for gaining credible signal of fit for AI governance research, doing x-risk research in nuclear risk is better than the counterfactual of not doing x-risk research in AI governance.
Career planning
(+) (I think this ties in with my first point on testing fit above.)
Direct impact
(+) (See my ‘Low-hanging fruit’ section.)
Looking through the above list, I see two ‘holes’ in doing nuclear risk research to skill-build for high future impact in AI governance: AI governance-specific knowledge and network-building. These holes—especially the domain knowledge one—can be addressed, though, in the CERI SRF: in the nuclear stream, alongside the Nuclear Risk seminar programme,[7] fellows will have the option to participate in the AGI Safety Fundamentals: Governance seminar programme. (The context here is that in all CERI streams, fellows will be able to participate in the corresponding EA Cambridge cause area seminar programme, if they have not done so already.)
Nuclear risk: an overview
What is the problem?
Nuclear weapons that are armed at all times could kill tens of millions of people directly (e.g., Rodriguez, 2019a), and perhaps billions more through subsequent effects on climate and crop growth (Toon et al., 2019; Rodriguez, 2019b). The potential for extreme climate and crop effects in a so-called ‘nuclear winter,’ possibly leading to human extinction or irreversible societal change, is why nuclear risk is part of the x-risk landscape.
Nuclear weapons have been used twice before in warfare, by the US on Hiroshima and Nagasaki in World War II. The Cuban Missile Crisis, a tense military standoff between the US and the Soviet Union in 1962, is well known as a moment when the world came close to all-out nuclear war. And there have been many ‘close calls’ over the past few decades, when nuclear weapons were nearly used, either deliberately or accidentally. The annual risk of nuclear conflict is estimated to be around 1% (Aird, 2022), and Joan Rohlfing, President of the Nuclear Threat Initiative (NTI), puts the annual risk of a major nuclear conflict at around 0.5%: ‘I assign about a half percent risk per year to the potential for a global catastrophic nuclear event’ (2021).
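As an aside from me (the arithmetic below is my own illustration, not something from Aird or Rohlfing): if one treats an annual risk as constant and independent across years, which is a strong simplification, even ‘small’ annual probabilities compound substantially over policy-relevant timescales. A minimal sketch:

```python
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one event over `years`, assuming a constant,
    independent annual risk (a strong simplification)."""
    return 1 - (1 - annual_risk) ** years

# Using the estimates quoted above, purely for illustration:
print(f"{cumulative_risk(0.01, 50):.0%}")   # ~39%: nuclear conflict within 50 years at 1%/year
print(f"{cumulative_risk(0.005, 50):.0%}")  # ~22%: global catastrophic nuclear event at 0.5%/year
```

Real-world risk is of course not constant or independent across years, so treat this only as a way of seeing how annual estimates translate into multi-decade risk.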
Who else is working on the problem?
There are NGOs and think tanks working on ‘mainstream’ (i.e., not explicitly x-risk-focused) nuclear security, such as RAND, Global Zero, and NTI.[8] For a more complete—though still not exhaustive—list of these organisations, see the nuclear risk ‘view’ of ‘Database of orgs relevant to longtermist/x-risk work’ (Aird, 2021b). Their approaches differ from ours. In very broad strokes, mainstream nuclear security work focuses on preventing nuclear conflict from happening at all, whereas our focus in the x-risk community is on avoiding the worst nuclear conflicts—those that carry the tail risk of existential catastrophe—and on raising the chance that humanity will recover after a nuclear catastrophe. These are indeed broad strokes, and part of why a reasonably clear distinction can be drawn is that, as noted above, x-risk community work on nuclear risk reduction has so far been small in quantity and narrow in scope (around 12 FTEs, with roughly two-thirds of that total at ALLFED; Aird & Aldred, 2022b).
Governments also give nuclear risk a lot of funding and attention. And some of this does reduce nuclear risk, for example through non-proliferation efforts, conflict-preventing diplomacy, successful arms control agreements, and perhaps even effective deterrence.[9] Nonetheless:
Much of that government funding and attention is focused on reducing risk to a particular nation or on advancing national security, as opposed to reducing global risk, let alone global catastrophic risk or x-risk.
Much of that government funding and attention may in fact increase global risk and/or x-risk (e.g., by development of new weapons).
How does CERI plan to help solve the problem?
Research Outlook
Our research outlook on nuclear risk draws substantially from Rethink Priorities’ work in this space, and ties in strongly with ‘Nuclear risk research ideas’ (Aird & Aldred, 2022a). It can be broadly divided into three categories:
Modelling the effects of specific nuclear conflicts that might occur between pairs of states, or groups of states (see the toy sketch after this list for the flavour of such modelling). For instance, conflicts between:
India and Pakistan
China and the US
Exploring uncertainties that apply across a range of possible nuclear scenarios, examples being:
What are the ways in which a nuclear war could begin and then evolve?
How does the number of people dying from starvation vary with severity of crop yield decline?
How might future technologies, such as hypersonic missiles/glide vehicles and artificial intelligence, interact with nuclear risk?
Finding, cataloging, and evaluating different intervention options—such as public advocacy, international treaties, and improving key actors’ decision making—for reducing nuclear risk.
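To give a flavour of the first two categories, below is a deliberately toy Monte Carlo sketch of the kind of scenario modelling such a project might involve. Every parameter range is a made-up placeholder of mine, not an estimate from Rodriguez, Toon et al., or anyone else; the value of a real model lies in researching each link of the chain, not in the code scaffolding.

```python
import random

random.seed(0)

def simulate_once() -> float:
    """One sampled scenario: detonations -> soot -> crop decline -> famine deaths.
    All parameter ranges are illustrative placeholders, not real estimates."""
    detonations = random.randint(50, 250)                # warheads used on cities
    soot_tg = detonations * random.uniform(0.05, 0.3)    # teragrams of soot lofted
    crop_decline = min(0.9, soot_tg * random.uniform(0.002, 0.006))  # fractional yield loss
    # Famine deaths assumed to scale nonlinearly (and noisily) with crop decline:
    return 8.0 * crop_decline ** random.uniform(1.0, 2.0)  # billions of deaths

samples = sorted(simulate_once() for _ in range(100_000))
print(f"median famine deaths: {samples[len(samples) // 2]:.2f} bn")
print(f"95th percentile:      {samples[int(0.95 * len(samples))]:.2f} bn")
```

A real project would replace each placeholder with researched distributions (soot lofting per detonation, climate response, yield losses by crop and region, food-system adaptation) and then examine how conclusions change across conflict scenarios.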
Theory of Change
[Note: There is a lot more I could write to better explain and flesh out the high-level paths below, such that interested readers and experts in the field are better able to understand, critique and (hopefully) help improve them. Accordingly, I aim to publish a sequel post in the near future.[10]]
Path 1 (smaller but more direct path)
The first path to impact for our nuclear risk research is through:
Influencing the decisions of actors, including grantmakers, with some connection to the effective altruism (EA)/x-risk community.
Influencing the decisions of actors who have no connection to the EA/x-risk community, such as policymakers, via EA/x-risk-aligned actors.
Doing these things well requires us to first produce high-quality research that improves the state of knowledge on:
How far to prioritize nuclear risk relative to other x-risk cause areas.
When we say ‘how far to prioritize’, we’re talking mainly about influencing funding decisions and career decisions (and career advice given by us and our partner, Effective Altruism Cambridge).
What specific, concrete actions to take to mitigate nuclear risk.
Path 2 (larger but more indirect path)
The second path to impact is through raising the skill levels of aspiring/junior x-risk researchers, connecting them with senior researchers and possibly decision makers (in the EA/x-risk community, and maybe also in the wider nuclear security and policy communities), and thereby building capacity for more and better nuclear risk research (and AI governance research) in the future.
Acknowledgements
This post is a project of Cambridge Existential Risks Initiative. It was written by Will Aldred. Thanks to Rudolf Laine for helpful feedback. Thanks also to Michael Aird, with whom I’ve shared many conversations, and who has helped shape my views on research pipelines, nuclear risk, and AI governance, among other things. The idea for the section, ‘Skill-building for high future impact in AI governance,’ originated from one of these conversations, and the ‘Governments’ paragraph in ‘Who else is working on the problem?’ is essentially an excerpt from another of these conversations.
Caveat: highest impact according to 80,000 Hours in late 2021.
Please note that I discuss here only some reasons for carrying out a nuclear risk summer research project; the list isn’t exhaustive, but it does include what seem to me the most important reasons.
And who I shan’t name here.
Technical AI safety is ranked 1st, and long-term AI strategy and governance is ranked 2nd, according to the 80,000 Hours page on 23rd March, 2022. It is perhaps worth noting that 80k takes an importance, neglectedness, tractability (ITN)-based approach to assessing career path impact. Here are 80k’s caveats on their rankings: “We’ve ranked these paths roughly in terms of impact, assuming your personal fit for each is constant. But there is a lot of variation within each path — so the best opportunities in one lower on the list will often be better than most of the opportunities in a higher-ranked one.” (emphasis added)
I’m also aware that, of all possible ‘training grounds’ for long-term AI governance, nuclear risk might not be the best. Alternative training grounds include near-term/low-stakes AI issues, emerging tech policy, cybersecurity, international relations / national security, and maybe even climate change (because it involves hard coordination problems and large externalities). I thank Michael Aird for pointing out these other potential training grounds to me. An investigation of training grounds for AI governance is included here, in the appendix of ‘Nuclear risk research ideas’.
Note that when I refer to ‘the counterfactual,’ the counterfactual I have in mind is an aspiring AI governance researcher who is not doing an AI governance summer research project (because these are few in number and very competitive). They’re instead doing either: (1) a non-x-risk-related summer internship while reading up on AI governance in their spare time, or (2) lots of reading up on AI governance and some amount of independent research, though without a mentor, having successfully applied for funding to do this from, e.g., EA Funds’ Long-Term Future Fund or Open Philanthropy’s early-career funding for the long-term future.
Having spoken to a few people, these seem like the two most likely counterfactuals. However, the number of people who’d do (2) seems much smaller than the number who’d do (1): I’m 80% confident the (1):(2) ratio would be at least 20:1 (this was the rough consensus amongst the people I consulted). Making my counterfactual a weighted average of (1) and (2) therefore seems unnecessary, and so (1) is the counterfactual I talk about in my list in the main text. Reasons cited for the lopsided ratio were:
Many of the people who’d consider (2) are in fact not aware that this option exists, or don’t consider themselves eligible/qualified to apply. (Which is sad.)
[Caveat: I’m less confident in the reasoning on this one.] We guess that many of those who apply for funding to do (2) might be unsuccessful: the fact that they didn’t land an AI governance summer research position at a research organisation might be read by grantmakers as a weak signal when deciding whether to fund them to do (2).
Spending a summer doing (2) scores negative social points with most non-EAs, and (2) is also not good for building a CV/resume.
This nuclear risk seminar programme does not yet exist, but I am working on having it ready by summer. I thank Michael Aird for his help in getting the curriculum started.
Note: the ‘mainstream’ nuclear security community tends to refer to NGOs and think tanks as ‘civil society’ (see, e.g., Holgate, n.d.).
See Beckstead (2015) and McIntyre (2016) for some estimates and discussion on this, as well as on the two points given below in the main text.
This future post on high-level goals in nuclear risk will be lead authored by Michael Aird. It’ll likely be included in Rethink Priorities’ Risks from Nuclear Weapons sequence, as well as the CERI SRF ’22 sequence as a sequel post to this one. (Note/Disclaimer: neither Michael nor Rethink Priorities are formally affiliated with CERI.) [Edited to add: this post, 8 possible high-level goals for work on nuclear risk, has now been published.]