Research project idea: Neartermist cost-effectiveness analysis of nuclear risk reduction
This post is part of a series of rough posts on nuclear risk research ideas. I strongly recommend that, before reading this post, you read the series’ summary & introduction post for context and caveats, and to see the list of other ideas. One caveat especially worth flagging here is that I drafted this in late 2021 and haven’t updated it much since. I’m grateful to Will Aldred for help with this series.
One reason I’m publishing this now is to serve as one menu of research project ideas for upcoming summer research fellowships.
Some tentative bottom-line views about this project idea
Importance: Medium/Low
Tractability: Medium/High
Neglectedness: Medium/Low
Outsourceability: Low
What is this idea? How could it be tackled?
Most interest in nuclear risk reduction from members of the EA community is premised on longtermism. But one could also argue for prioritising nuclear risk reduction for its near-term effects alone, meaning something like effects on the current and/or next one or two generations. In fact, those near-term effects seem to me to be the main motivator for work on nuclear risk by people outside the EA community.
But perhaps those non-EAs are overestimating the risk, underestimating the difficulty of reducing the risk, overlooking other interventions with strong neartermist cost-effectiveness (such as donating to charities recommended by GiveWell or Animal Charity Evaluators), or in some other way deviating from effective and broadly utilitarian prioritisation? This could be investigated by conducting a cost-effectiveness analysis of nuclear risk reduction that focuses solely on near-term effects.
In conducting this project, one would have to consider:
What nuclear risk intervention, “intermediate goal”, or collection of them should be focused on?
See also Aird & Aldred (2022).
What sorts of near-term effects should be focused on?
For example, just effects on humans or also on other animals?
Just deaths averted, or also one or more of non-fatal health effects, effects on wellbeing, effects on culture, etc.?
What time horizon should be chosen, both from now and from when the nuclear conflict occurs?
What comparison point should be focused on?
Perhaps a charity recommended by GiveWell or ACE?
How detailed and careful should the analysis be?
For example, a quick BOTEC with vague choices about what interventions and effects are being focused on, or a more carefully constructed model with estimates based on actual research?
This project could overlap with “Impact assessment of various organizations, programmes, movements, etc.”.
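As a concrete illustration of the “quick BOTEC” end of that spectrum, the sketch below compares a hypothetical nuclear risk reduction programme to a GiveWell-style cost-per-death-averted benchmark. Every number in it (war probability, death toll, spending, risk reduction achieved, time horizon, benchmark cost) is a placeholder assumption chosen for illustration, not a researched estimate.

```python
# Illustrative BOTEC: neartermist cost-effectiveness of nuclear risk reduction.
# All inputs are placeholder assumptions, not researched estimates.

annual_war_probability = 0.005    # assumed annual chance of a large nuclear war
deaths_if_war = 500e6             # assumed near-term deaths (direct + famine)
spending = 100e6                  # assumed total programme spending ($)
relative_risk_reduction = 0.001   # assumed fraction of annual risk removed
horizon_years = 30                # assumed time horizon for the effect

# Expected deaths averted over the horizon, treating the annual risk
# reduction as constant and ignoring discounting for simplicity.
expected_deaths_averted = (
    annual_war_probability * relative_risk_reduction * deaths_if_war * horizon_years
)
cost_per_death_averted = spending / expected_deaths_averted

# Rough benchmark for a GiveWell-recommended charity ($ per death averted),
# again a placeholder rather than a current GiveWell figure.
givewell_benchmark = 5000

print(f"Expected deaths averted: {expected_deaths_averted:,.0f}")
print(f"Cost per death averted: ${cost_per_death_averted:,.0f}")
print(f"Benchmark comparison:    ${givewell_benchmark:,.0f}")
```

Even this crude version makes the key structural point: the answer is driven almost entirely by a few highly uncertain inputs, so the choice between a quick BOTEC and a more careful model is largely a choice about how much effort to spend narrowing those inputs.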
Why might this research be useful?
There are several possible paths to impact for this project. First, it could cause members of the EA community to correctly update towards prioritising nuclear risk more (with their careers, funding, etc.). This could occur if the project suggests the (human-centric) neartermist case for nuclear risk reduction is surprisingly strong, or if it simply provides a more convincing demonstration that the case is at least plausible (as I already believe it is). This could affect the behaviour of people who lean neartermist, or of people who lean longtermist but only weakly.
Second, this project could cause members of the EA community to correctly update towards prioritising nuclear risk less, if it turns out the neartermist case is surprisingly weak.
Third and fourth, this project could cause people who aren’t members of the EA community to correctly update towards prioritising nuclear risk more or less (depending on what the project reveals).
However, I think there are several barriers to these paths to impact. In particular:
I think it’s very unlikely nuclear risk would seem like a top priority from a strongly animal-inclusive neartermist perspective, limiting the number of people whose decisions this project seems decently likely to be relevant to.
Many EA career, funding, etc. decisions are made based on a longtermist perspective anyway.
In practice, neartermism tends to involve things like a preference for data-driven reasoning and an aversion to arguments that rely fairly strongly on speculation or on low probabilities of high-stakes outcomes. So this project’s results may simply not seem very convincing or relevant to many neartermists. (Though this barrier could be somewhat overcome if a fairly extensive version of this project is done, involving multiple somewhat independent models that draw on disparate sources of empirical data wherever possible.)
Most relevant people who aren’t in the EA community seem somewhat unlikely to base big decisions on this sort of cost-effectiveness analysis.
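On the third barrier above: one way to make such an analysis more credible to data-driven neartermists is to replace point estimates with explicit uncertainty. The Monte Carlo sketch below propagates wide, assumed ranges for the key inputs of a simple cost-effectiveness model; the ranges and distributions are placeholders for illustration, not researched estimates.

```python
# Monte Carlo sketch: propagate uncertainty in a simple cost-effectiveness
# model rather than using point estimates. All ranges below are placeholder
# assumptions, not researched estimates.
import random

random.seed(0)  # reproducible draws

def one_draw():
    # Log-uniform draws over wide assumed ranges for the uncertain inputs.
    p_war = 10 ** random.uniform(-4, -2)         # annual war probability
    deaths = 10 ** random.uniform(7.5, 9)        # near-term deaths if war occurs
    risk_reduced = 10 ** random.uniform(-4, -2)  # fraction of annual risk removed
    horizon = random.uniform(10, 50)             # years the effect persists
    spending = 100e6                             # assumed programme cost ($)
    averted = p_war * risk_reduced * deaths * horizon
    return spending / averted                    # $ per expected death averted

draws = sorted(one_draw() for _ in range(10_000))
p10 = draws[len(draws) // 10]
median = draws[len(draws) // 2]
p90 = draws[9 * len(draws) // 10]
print(f"$/death averted: 10th pct {p10:,.0f}, median {median:,.0f}, 90th pct {p90:,.0f}")
```

Reporting a distribution rather than a single number makes the speculation explicit, and running several such models built on disparate data sources, as suggested above, would show how sensitive the conclusion is to any one set of assumptions.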
What sort of person might be a good fit for this?
I expect any good generalist researcher could conduct a useful version of this project. I expect someone to be a stronger fit the more they already know about various things relevant to nuclear risk (since this project would involve something like end-to-end modelling, even if quite roughly done) and the more experience they have with modelling, forecasting, literature reviews, and expert elicitation.
Some relevant previous work
Carl Shulman on the common-sense case for existential risk work and its practical implications
Many of ALLFED’s papers
Lewis’s (2018) “The person-affecting value of existential risk reduction”
Some things linked to from Ethics of existential risk