I’m dissatisfied with my explanation of why there is not more attention from EAs and EA funders on nuclear safety and security, especially relative to e.g. AI safety and biosecurity. This has come up a lot recently, especially after the release of Oppenheimer. I’m worried I’m not capturing the current state of affairs accurately and consequently not facilitating fully contextualized dialogue.
What is your best short explanation?
(To be clear, I know many EAs and EA funders are working on nuclear safety and security, so this is more a question of resource allocation than of inclusion in the broader EA cause portfolio.)
I’m definitely not deeply familiar with any kind of “official EA thinking” on this topic (ie, I don’t know any EAs that specialize in nuclear security research / grantmaking / etc). But here are some things I just thought up, which might possibly be involved:
Neglectedness in the classic sense. Although not as crowded as climate change, there are other large organizations / institutions that address nuclear risk and have been working in this space since the early Cold War. (Here I am thinking not just about charitable foundations, but also DC think-tanks, university departments, and even the basic structure of the US military-industrial complex which naturally involves a lot of people trying to figure out what to do about nuclear weapons and war.)
Nuclear war might be slightly lower-ranked on the importance scale of a very committed and philosophically serious longtermist, since it seems harder for a nuclear war to literally kill everyone (wouldn’t New Zealand still make it? etc.) than a sufficiently super-intelligent AI or a sufficiently terrifying engineered bioweapon. So this places nuclear war risk somewhere on a spectrum between being a direct existential threat and being more of an “existential risk factor” (like climate change). Personally, I find it hard to bite that longtermist bullet all the way, emotionally. (i.e., “The difference between killing 99% of people and 100% of people is actually a bazillion times worse than the difference between killing 99% versus 0%”.) So I feel like nuclear war pretty much maxes out my personal, emotional “importance scale”. But other people might be better than me at shutting up and multiplying! (And/or have higher odds than me that civilization would eventually be able to fully recover after a nuclear war.)
Tractability, in the sense that a lot of nuclear policy is decided by the US military-industrial complex (and people like the US president), in a way that seems pretty hard for the existing EA movement to influence? And then it gets even worse, because of course the OTHER half of the equation is being decided by the military-industrial complexes of Russia, China, India, etc—this seems even harder to influence! By contrast, AI safety is hugely influenceable by virtue of the fact that the top AI labs are right in the Bay Area and their researchers literally go to some of the same social events as Bay Area EAs. Biosecurity seems like a middle-ground case, where on the downside there isn’t the crazy social overlap, but on the plus side it’s a partly academic field which is amenable to influence via charities, academic papers, hosting conferences, advocating for regulation, trying to spread good ideas via podcasts and blog posts, etc...
Tractability, in a different sense, namely that it’s pretty unclear exactly HOW to reduce the risk of a nuclear war, which interventions are helpful vs harmful, etc. For instance, lots of anti-nuclear activists advocate for reducing nuclear stockpiles (which certainly seems like it would help reduce the severity of a worst-case nuclear war), but my impression is that many experts (both within EA and within more traditional bastions of nuclear security research) are very uncertain about the impact of unilaterally reducing our nuclear stockpiles—for example, maybe it would actually increase the damage caused by a nuclear war if we got rid of our land-based “nuclear sponge” ICBMs? Besides severity, what impact might reduced stockpiles have on the likelihood of nuclear war, if any? My impression is that these kinds of tricky questions are even more common in nuclear security than they are in the already troublesome fields of AI safety and biosecurity.
If I had to take a wild guess, I would say that my first Tractability point (as in, “I don’t know anybody who works at STRATCOM or the People’s Liberation Army Rocket Force”) is probably the biggest roadblock in an immediate sense. But maybe EA would have put more effort into building more influence here if we had prioritized nuclear risk more from the start—and perhaps that lack of historical emphasis is due to some mix of the other problems I mentioned?
This gets a lot of things right, but (knowing some of the EAs who did look into this or work on it now) I would add a few:
1. Lindy effect and stability: we’re roughly 70 years in without any use of nuclear weapons since their first use, so we expect the situation is somewhat stable. Not very stable, but under this type of estimation the risk from newer technologies is higher, because we have less of a track record.
2. The current inside-view stability of the nuclear situation, where strong norms against use exist and are already being reinforced by large actors with deep pockets.
3. There seems to be a pretty robust expert consensus about the problem, and it concludes that there is little to be done other than on the margin.
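The “less of a track record means higher estimated risk” reasoning in point 1 can be made concrete with Laplace’s rule of succession. This is my own toy illustration, not a calculation from the original comment:

```python
# Laplace's rule of succession: after observing zero events in n years,
# a reasonable prior-free estimate of the probability of an event in the
# next year is 1 / (n + 2).

def laplace_annual_risk(years_without_event: int) -> float:
    """Estimated per-year event probability given an event-free record."""
    return 1 / (years_without_event + 2)

# ~70 years without post-1945 nuclear use, vs. a hypothetical newer
# technology with only a 10-year track record:
nuclear = laplace_annual_risk(70)      # 1/72, about 1.4% per year
newer_tech = laplace_annual_risk(10)   # 1/12, about 8.3% per year
assert newer_tech > nuclear
```

The point is purely comparative: the same estimator applied to a shorter record yields a higher annual risk, which is the sense in which nuclear risk looks more “stable” than newer threats.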
Also, note that this was investigated as a cause area early on by Open Philanthropy, and then was looked at by Longview more recently. Both decided to have it as a small focus, rather than a key area. Edit (to correct a mistake): It was looked at by Longview more recently, and they have highlighted the topic significantly more, especially in the wake of other funders withdrawing support.

This characterization seems pretty at odds to me with recent EA work, e.g. from Longview but also my colleague Christian Ruhl at FP, who tend to argue that the philanthropic space on nuclear risk is very funding-constrained and there are plenty of good funding margins left unfilled.
For anyone who is interested, Founders Pledge has a longer report on this (with a discussion of funding constraints as well as funding ideas that could absorb a lot of money), as well as some related work on specific funding opportunities like crisis communications hotlines.
Thanks for the correction. I think you’re right, and have edited the last bit above to say:
Also, note that this was investigated as a cause area early on by Open Philanthropy, and then was looked at by Longview more recently. Both decided to have it as a small focus, rather than a key area. Edit (to correct a mistake): It was looked at by Longview more recently, and they have highlighted the topic significantly more, especially in the wake of other funders withdrawing support.

I agree that the nuclear risk field as a whole is less neglected than AGI safety (and probably than engineered pandemics), but I think that resilience to nuclear winter is more neglected. That’s why I think the overall cost-effectiveness of resilience is competitive with AGI safety.
It seems very unlikely that a nuclear war will kill all of us, compared to biorisk where this seems more possible.
Not sure this should affect funding in general, but explicitly longtermist funders will therefore weight biorisk more.
I’ll point out that I’m skeptical that most biorisks could be existential risks—I think they are plausible global catastrophic risks, and overlapping risks could lead to extinction, but I think people’s idea of a disease that kills over 99.9% of the global population is very unlikely even when accounting for near-term potential bioweapons.
This does help answer the question, but it conflates extinction risk with existential risk, which I think is a big mistake in general. This chapter in How Worlds Collapse does a nice job of explaining this:
To be clear, I think there are other good reasons to weight biorisk more (as I do).
Okay, but I just think that’s not that common a view. If you leave 1,000–10,000 humans alive, the long-term future is probably fine. So that cuts the existential risk down by 60–90%.
This is a very common claim that I think needs to be defended somewhat more robustly instead of simply assumed. If we have one strength as a community, it is in not simply assuming things.
My read is that the evidence here is quite limited, the outside view suggests that losing 99.9999% of a species / having a very small population is a significant extinction risk, and that the uncertainty around the long-term viability of collapse scenarios is enough reason to want to avoid near-extinction events.
Why do I think 1,000–10,000 humans is probably (60–90%) fine?
According to Luisa Rodriguez, you need about 300 people to rebuild the human race.
These people seem likely to be very incentivised towards survival—humans generally like surviving. It would be awful for them, sure, but the question is would they rebuild us as a species? And I think the answer is probably.
And let’s remember that this is the absolute worst-case scenario. The human race has dropped nuclear bombs twice and then never again. It seems a big leap to imagine that not only would we do so again, but that we would wipe ourselves out to the extent of leaving only one such group.
Every successive group that could rebuild the human race is extra. I imagine that actually hundreds of millions would survive an actual worldwide nuclear war, so the point we are litigating is a very small chance anyway.
I don’t really know what base rates I’d use here. Feels like you want natural disasters rather than predation. When the meteor hit, do we know how population size affected repopulation? Even then, humans are just way more competent than any other animals. So as I said originally, we might be looking at a 10–40% chance given the near worst-case scenario, but I don’t buy your outside view.
I’d be curious what others’ outside views are here, and whether anyone has actual base rates on disaster-driven animal populations and repopulation.
I disagree. I’ve said what I think; you can push back on it if you want, but why is it bad to “simply assume” my view rather than yours?
My point is precisely that you should not assume any view. My position is that the uncertainties here are significant enough to warrant some attention to nuclear war as a potential extinction risk, rather than to simply bat away these concerns on first principles and questionable empirics.
Where extinction risk is concerned, it is potentially very costly to conclude on little evidence that something is not an extinction risk. We do need to prioritize, so I would not for instance propose treating bad zoning laws as an X-risk simply because we can’t demonstrate conclusively that they won’t lead to extinction. Luckily there are very few things that could kill very large numbers of people, and nuclear war is one of them.
I don’t think my argument says anything about how nuclear risk should be prioritized relative to other X-risks, I think the arguments for deprioritizing it relative to others are strong and reasonable people can disagree; YMMV.
My argument does say something about how nuclear risk should be prioritised: if both risks existed, nuclear would be the lower priority. Maybe much lower.
The complicated thing is that nuclear risks do exist whereas biorisk and AI risk are much more speculative in terms of actually existing. In this sense I can believe nuclear should be funded more.
I think your arguments do suggest good reasons why nuclear risk might be prioritized lower; since we operate on the most effective margin, as you note, it is also possible at the same time for there to be significant funding margins in nuclear that are highly effective in expectation.
Do you work on researching nuclear risk?
How do you think this disagreement could be more usefully delineated. It seems like there is some interesting disagreement here?
I’m not Matt, but I do work on nuclear risk. If we went down to 1,000 to 10,000 people, recovery would take a long time, so there is a significant chance of a supervolcanic eruption or asteroid/comet impact causing extinction. People note that agriculture and cities developed independently multiple times, indicating that redevelopment is high-probability. However, that only happened once we had a stable, moderate climate, which might not recur. Furthermore, the Industrial Revolution only happened once, so there is less confidence that it would happen again. In addition, it would be more difficult with depleted fossil fuels, phosphorus, etc. Even if we did recover industry, I think our current values are better than randomly chosen values (e.g. slavery might continue longer or democracy be less prevalent).
This feels too confident. A nuclear war followed by a supervolcano is just really unlikely. Plus, if there were 1,000 people left, there would be so many canned goods left over—just go to a major city and sit in a supermarket.
If a major city can support a million people for 3 days on its reserves, it can support 1,000 people for roughly eight years.
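The claim is just person-days arithmetic, which is easy to sanity-check. The round numbers below are the commenter’s illustrative figures, not real stockpile data:

```python
# A city's reserves are a fixed budget of person-days of food, so a
# smaller population stretches them proportionally longer.

city_population = 1_000_000
days_of_reserves = 3
survivors = 1_000

person_days = city_population * days_of_reserves   # 3,000,000 person-days
days_for_survivors = person_days / survivors       # 3,000 days
years_for_survivors = days_for_survivors / 365     # ~8.2 years
```

(Note this assumes the food keeps indefinitely and nothing is spoiled or looted, so it is an upper bound under those assumptions.)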
Again, I’m not saying that I think it doesn’t matter, but I think my answers are good reasons why it’s a lower priority than AI.
A nuclear war happening at the same time as a supervolcano is very unlikely. However, it could take a hundred thousand years to recover population, so if supervolcanic eruptions occur roughly every 30,000 years, it’s quite likely there would be one before we recover.
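Treating eruptions as a Poisson process makes this quantitative. The round numbers are the ones above; the model choice is mine, as a rough sketch:

```python
import math

# If population recovery takes T years and supervolcanic eruptions arrive
# as a Poisson process with mean interval I, the expected number of
# eruptions during recovery is T / I, and the chance of at least one is
# 1 - exp(-T / I).

recovery_years = 100_000
eruption_interval_years = 30_000

expected_eruptions = recovery_years / eruption_interval_years  # ~3.3
p_at_least_one = 1 - math.exp(-expected_eruptions)             # ~0.96
```

So under these assumptions the question is less whether an eruption happens during recovery than whether the recovering population could survive one.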
The scenario I’m talking about is one where the worsening climate and loss of technology mean there would not be enough food, so the stored food would be consumed quickly. Furthermore, edible wild species, including fish, may be eaten to extinction.
I agree that more total money should be spent on AGI safety than nuclear issues. However, resilience to sunlight reduction is much more neglected than AGI safety. That’s why the Monte Carlo analyses found that the cost-effectiveness of resilience to loss of electricity (e.g. high-altitude detonations of nuclear weapons causing electromagnetic pulses) and resilience to nuclear winter are competitive with AGI safety.
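For readers unfamiliar with the method, a Monte Carlo cost-effectiveness comparison of the kind referenced here looks roughly like the sketch below. Every distribution and number is a hypothetical placeholder, not a parameter from the published analyses:

```python
import random

# Toy Monte Carlo comparison of two cause areas. We sample uncertain
# inputs (risk reduced, cost) and look at the distribution of the ratio.
# All ranges below are made up for illustration only.

random.seed(0)

def sample_cost_effectiveness(risk_reduced_range, cost_range):
    """Draw (basis points of existential risk reduced) per ($ millions)."""
    risk_bp = random.uniform(*risk_reduced_range)
    cost_m = random.uniform(*cost_range)
    return risk_bp / cost_m

N = 100_000
# Placeholder: resilience reduces less risk but is much cheaper...
resilience = [sample_cost_effectiveness((0.1, 5.0), (10, 100)) for _ in range(N)]
# ...while AGI safety reduces more risk at much higher cost.
agi_safety = [sample_cost_effectiveness((1.0, 50.0), (100, 1000)) for _ in range(N)]

mean_resilience = sum(resilience) / N
mean_agi = sum(agi_safety) / N
# With these placeholder inputs the two means land in the same ballpark,
# illustrating how a cheaper, more neglected area can be competitive in
# expectation with a higher-stakes one.
```

The substantive work in the real analyses is in choosing defensible input distributions; the mechanics of the comparison are as simple as this.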
I agree with you; it is disappointing that EAs are doing little in this area.
In Australia, we have a speaker from ICAN (the Nobel Prize-winning anti-nuclear-weapons NGO) attending the 2023 EAGx Australia in Melbourne. In my opinion, it’s a particularly promising area for big impact (and especially for Aussie EAs) due to the recently developed AUKUS alliance. The details of the alliance are still being fleshed out, and a big opportunity exists to shape the alliance to reduce the risk of a conflict between great powers.