I’m definitely not deeply familiar with any kind of “official EA thinking” on this topic (i.e., I don’t know any EAs who specialize in nuclear security research / grantmaking / etc.). But here are some things I just thought up, which might possibly be involved:
Neglectedness in the classic sense. Although not as crowded as climate change, there are other large organizations / institutions that address nuclear risk and have been working in this space since the early Cold War. (Here I am thinking not just about charitable foundations, but also DC think-tanks, university departments, and even the basic structure of the US military-industrial complex which naturally involves a lot of people trying to figure out what to do about nuclear weapons and war.)
Nuclear war might be slightly lower-ranked on the importance scale of a very committed and philosophically serious longtermist, since it seems harder for a nuclear war to literally kill everyone (wouldn’t New Zealand still make it? etc.) than for sufficiently super-intelligent AI or a sufficiently terrifying engineered bioweapon. So this places nuclear war risk somewhere on a spectrum between being a direct existential threat and being more of an “existential risk factor” (like climate change). Personally, I find it hard to bite that longtermist bullet all the way, emotionally. (I.e., “The difference between killing 99% of people and 100% of people is actually a bazillion times worse than the difference between killing 99% versus 0%.”) So I feel like nuclear war pretty much maxes out my personal, emotional “importance scale”. But other people might be better than me at shutting up and multiplying (see the toy calculation after this list)! (And/or have higher odds than me that civilization would eventually be able to fully recover after a nuclear war.)
Tractability, in the sense that a lot of nuclear policy is decided by the US military-industrial complex (and people like the US president), in a way that seems pretty hard for the existing EA movement to influence? And then it gets even worse, because of course the OTHER half of the equation is being decided by the military-industrial complexes of Russia, China, India, etc.—this seems even harder to influence! By contrast, AI safety is hugely influenceable by virtue of the fact that the top AI labs are right in the Bay Area and their researchers literally go to some of the same social events as Bay Area EAs. Biosecurity seems like a middle-ground case: on the downside there isn’t the crazy social overlap, but on the plus side it’s a partly academic field which is amenable to influence via charities, academic papers, hosting conferences, advocating for regulation, trying to spread good ideas via podcasts and blog posts, etc.
Tractability, in a different sense, namely that it’s pretty unclear exactly HOW to reduce the risk of a nuclear war, which interventions are helpful vs harmful, etc. For instance, lots of anti-nuclear activists advocate for reducing nuclear stockpiles (which certainly seems like it would help reduce the severity of a worst-case nuclear war), but my impression is that many experts (both within EA and within more traditional bastions of nuclear security research) are very uncertain about the impact of unilaterally reducing our nuclear stockpiles—for example, maybe it would actually increase the damage caused by a nuclear war if we got rid of our land-based “nuclear sponge” ICBMs? Besides severity, what impact might reduced stockpiles have on the likelihood of nuclear war, if any? My impression is that these kinds of tricky questions are even more common in nuclear security than they are in the already troublesome fields of AI safety and biosecurity.
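To make the “shutting up and multiplying” trade-off above concrete, here is a toy calculation under a simple longtermist model. Every number in it (current population, potential future people, recovery probability) is a made-up placeholder for illustration, not anyone’s actual estimate:

```python
# Toy longtermist arithmetic -- all figures are illustrative placeholders.
current_population = 8e9   # roughly the number of people alive today
future_people = 1e16       # assumed potential future people if humanity survives
p_recovery = 1.0           # assumed chance civilization fully recovers after 99% die

# Expected lives lost at each step (present deaths + forfeited future people):
loss_0_to_99 = 0.99 * current_population + (1 - p_recovery) * future_people
loss_99_to_100 = 0.01 * current_population + p_recovery * future_people

print(f"going from 0% to 99% dead:   {loss_0_to_99:.3g} expected lives lost")
print(f"going from 99% to 100% dead: {loss_99_to_100:.3g} expected lives lost")
print(f"ratio: {loss_99_to_100 / loss_0_to_99:,.0f}x")
```

With recovery assumed certain, the 99%-to-100% step comes out roughly a million times worse than the 0%-to-99% step in this toy model; lowering p_recovery shrinks that gap, which is one way of cashing out the parenthetical about recovery odds.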
If I had to take a wild guess, I would say that my first Tractability point (as in, “I don’t know anybody who works at STRATCOM or the People’s Liberation Army Rocket Force”) is probably the biggest roadblock in an immediate sense. But maybe EA would have put more effort into building influence here if we had prioritized nuclear risk more from the start—and perhaps that lack of historical emphasis is due to some mix of the other problems I mentioned?
This gets a lot of things right, but (knowing some of the EAs who did look into this or who work on it now) I would add a few:
1. Lindy effect and stability—we’re ~70 years in without any use of nuclear weapons since the first, so we expect the situation is somewhat stable. Not very stable, but under this type of estimation the risk from newer technologies is higher, because we have less of a track record (see the sketch after this list).
2. The current inside-view stability of the nuclear situation: strong norms against use exist and are already being reinforced by large actors with deep pockets.
3. There seems to be a pretty robust expert consensus about the problem, and it concludes that there is little to be done other than on the margin.
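One naive way to formalize the track-record reasoning in point 1 (a sketch of this estimation style, not necessarily the model the commenter has in mind) is Laplace’s rule of succession, which estimates the chance of a first-ever event from the length of the clean record. The ~70-year figure is from point 1; the 10-year record for a hypothetical newer technology is a placeholder assumption:

```python
# Laplace's rule of succession: after n trials with zero events, estimate the
# probability of an event on the next trial as 1 / (n + 2).

def first_event_prob(clean_years: int) -> float:
    """Rule-of-succession estimate of the per-year chance of a first event."""
    return 1 / (clean_years + 2)

print(f"nuclear use after ~70 clean years: {first_event_prob(70):.1%} per year")  # ~1.4%
print(f"newer tech after ~10 clean years:  {first_event_prob(10):.1%} per year")  # ~8.3%
```

Under this rule, a technology with a short clean record gets a per-year estimate several times higher than one with a 70-year record, which is exactly the asymmetry point 1 gestures at.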
Also, note that this was investigated as a cause area early on by Open Philanthropy, and then was looked at by Longview more recently. Both decided to have it as a small focus, rather than a key area. Edit (to correct a mistake): It was looked at by Longview more recently, and they have highlighted the topic significantly more, especially in the wake of other funders withdrawing support.
This characterization seems to me pretty at odds with recent EA work, e.g. from Longview but also from my colleague Christian Ruhl at FP, who tend to argue that the philanthropic space on nuclear risk is very funding-constrained and that there are plenty of good funding margins left unfilled.
For anyone who is interested, Founders Pledge has a longer report on this (with a discussion of funding constraints as well as funding ideas that could absorb a lot of money), as well as some related work on specific funding opportunities like crisis communications hotlines.
Thanks for the correction. I think you’re right, and have edited the last bit above to say:
Also, note that this was investigated as a cause area early on by Open Philanthropy, and then was looked at by Longview more recently. Both decided to have it as a small focus, rather than a key area. Edit (to correct a mistake): It was looked at by Longview more recently, and they have highlighted the topic significantly more, especially in the wake of other funders withdrawing support.
Neglectedness in the classic sense. Although not as crowded as climate change, there are other large organizations / institutions that address nuclear risk and have been working in this space since the early Cold War.
I agree that the nuclear risk field as a whole is less neglected than AGI safety (and probably than engineered pandemics), but I think that resilience to nuclear winter is more neglected. That’s why I think the overall cost-effectiveness of resilience is competitive with AGI safety.