I’m Jeffrey Ladish. I’m a security researcher and risk consultant focused on global catastrophic threats. My website is at https://jeffreyladish.com
EA Hangout Prisoners’ Dilemma
Retrospective on Catalyst, a 100-person biosecurity summit
I see you define it a few paragraphs down, but at the top would be helpful, I think.
Could you define ESG investing at the beginning of your post?
Yeah, I would agree with that! I think radiological weapons are some of the most relevant nuclear capabilities / risks to consider from a long-term perspective, due to the risk that they could be developed in the future.
The part I added was:
“By a full-scale war, I mean a nuclear exchange between major world powers, such as the US, Russia, and China, using the complete arsenals of each country. The total number of warheads today (14,000) is significantly smaller than during the height of the Cold War (70,000). While extinction from nuclear war is unlikely today, it may become more likely if significantly more warheads are deployed or if designs of weapons change significantly.”
I also think indirect extinction from nuclear war is unlikely, but I would like to address this more in a future post. I disagree that additional clarifications are needed. I think people made these points clearly in the comments, and that anyone motivated to investigate this area seriously can read those. If you want to try to double-crux on why we disagree here, I’d be up for that, though a call might be preferable to save time.
Thanks for this perspective!
Strong agree!
I mean that the amount required to cover every part of the Earth’s surface would serve no military purpose (a rough back-of-envelope sketch after this comment illustrates the scale). Or rather, it might enhance one’s deterrent a little bit, but it would
1) kill all of one’s own people, which is the opposite of a defense objective
2) not be a very cost-effective way to improve one’s deterrent. In nearly all cases it would make more sense to expand second-strike capabilities by adding more submarines, mobile missile launchers, or other stealth second-strike weapons.
Which isn’t to say this couldn’t happen! Military research teams have proposed crazy plans like this before. I’m just arguing, as have many others at RAND and elsewhere, that a doomsday machine isn’t a good deterrent, compared to the other options that exist (and given the extraordinary downside risks).
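To make “cover every part of the Earth’s surface” concrete, here is a minimal back-of-envelope sketch in Python. The figures are loose assumptions of mine, not numbers from this discussion: roughly 100 km² of severe blast damage per megaton-class warhead, and ~1.49 × 10⁸ km² of land area.

```python
# Back-of-envelope: how many warheads would it take to "cover every part
# of the Earth's surface"? All inputs are rough assumptions for illustration.

LAND_AREA_KM2 = 1.49e8          # Earth's land area, ~149 million km^2
DESTRUCTION_AREA_KM2 = 100.0    # assumed severe-blast area per megaton-class warhead

warheads_needed = LAND_AREA_KM2 / DESTRUCTION_AREA_KM2
current_arsenal = 14_000        # total warheads today, per the post above

print(f"Warheads needed for full land coverage: ~{warheads_needed:,.0f}")
print(f"Ratio to current global arsenal: ~{warheads_needed / current_arsenal:.0f}x")
```

Even under these generous assumptions, full coverage would take roughly a hundred times today’s global arsenal, before even counting the oceans, which is the core of the cost-effectiveness point above.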
FWIW, my guess is that you’re already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it’d be good to describe in detail “here is how this combination of different hazards could kill everyone”]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I’d be happy to review a post prior to publication, though I’m not sure if I’m particularly qualified.)
Yes, I was planning to get review prior to publishing this. In general, when it comes to risks from biotechnology, I’m trying to follow the principles we developed here: https://www.lesswrong.com/posts/ygFc4caQ6Nws62dSW/bioinfohazards. I’d be excited to see, or help workshop, better guidance for navigating information hazards in this space in the future.
Thanks, fixed!
Thanks, fixed!
This may be in the Brookings estimate, which I haven’t read yet, but I wonder how much cost disease + the reduction in nuclear forces have affected the cost per warhead / missile. My understanding is that many military weapon systems get much more expensive over time, for reasons I don’t fully understand.
Warheads could be altered to increase the duration of radiation effects from fallout, but this would also reduce their yield, and would represent a pretty large change in strategy. We’ve gone 70 years without such weapons, with the recent Russian submersible system as a possible exception. It seems unlikely such a shift in strategy will occur in the next 70 years, but more like 3% unlikely than vanishingly unlikely.
It’s a good point that risks of extinction could get significantly worse if different/more nuclear weapons were built & deployed, and combined with other WMDs. And the existence of 70,000+ weapons during the Cold War presents a decent outside-view argument that we might see that many in the future. I’ll edit the post to clarify that I mean present and not future risks from nuclear war.
I think I gave the impression that I’m making a more expansive claim than I actually mean to make, and will edit the post to clarify this. The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically. I know most people who’ve examined it know this is wrong, but I wanted that information to be laid out pretty clearly, so someone could get a summary of this argument. I think that’s just the beginning of assessing existential risk from nuclear war, and I really wouldn’t want people to read my post and walk away thinking “nuclear war is nothing to worry about from a longtermist perspective.”
I agree that “We know that one type of existential risk from nuclear war is very small, but we don’t really have a good idea for how large total existential risk from nuclear war [is]”. I’m planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.
It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity’s potential / eventual extinction. I currently think humans would retain the most significant basic survival technologies following a collapse, and would then reacquire lost technological capacities relatively quickly. (I discussed this investigation here, though not in depth.) I’m planning to write this up as part of my compounding risks post or as a separate one.
Agreed that it’s very hard to know the sign on a huge history-altering event, whether it’s a nuclear war or covid.
Nuclear war is unlikely to cause human extinction
Update on civilizational collapse research
Some quick answers to your questions based on my current beliefs:
Is there a high chance that the human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?
I think the answer in the short term is no, if “completely collapses” means something like “is unable to get back to at least 1950s-level technology in 500 years”. I think there are a number of things that could reduce humanity’s “technological carrying capacity”. I’m currently working on explicating some of these factors, but some examples would be drastic climate change, long-lived radionuclides, and an increase in persistent pathogens.
Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)
I think we can. I’m not sure we can get very confident about exactly which potential bottlenecks will prove most significant, but I think we can narrow the search space and put forth some good hypotheses, both by reasoning from the best reference class examples we have and by thinking through the economics of potential scenarios.
Are there major risks that have a chance to wipe out more than 90% of the population, but not all of it? My models of biorisk suggest it’s quite hard to get to 90% mortality, and I think most nuclear winter scenarios also have less than a 90% food reduction impact.
I’m not sure about this one. I can think of some scenarios that would wipe out 90%+ of the population, but none of them seem very likely. Engineered pandemics seem like one candidate (I agree with Denkenberger here), and the worst-case nuclear winter scenarios might also do it, though I haven’t read the nuclear winter papers in a while, and there have been several new papers and comments in the last year, including real disagreement in the field (yay, finally!).
Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?
Population seems like one important variable in our technological carrying capacity, but I expect some of the others are just as important. The one I mentioned in my other post, which I think is a huge one, is state planning & coordination capacity. I think post-WWII Germany and Japan illustrate this quite well. However, I don’t have a very good sense of what might cause most states to fail without also destroying a large part of the population at the same time. What I’m saying is that the population factor might not be the most important one in those scenarios.
Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)
I’m very uncertain about this. I do think there is a good case for interventions aimed at improving the existential risk profile of post-disaster civilization being competitive with interventions aimed at improving the existential risk profile of our current civilization. The gist is that there is far less competition for the former interventions. Of course, given the huge uncertainties about both the circumstances of global catastrophes and the potential intervention points, it’s hard to say whether it would be possible to actually alter the post-disaster civilization’s profile at all. However, it’s also hard to say whether we can alter the current civilization’s profile at all, and it’s not obvious to me that this latter task is easier.
I want to give a brief update on this topic. I spent a couple of months researching civilizational collapse scenarios and came to some tentative conclusions. At some point I may write a longer post on this, but I think some of my other upcoming posts will address some of my reasoning here.
My conclusions after investigating potential collapse scenarios:
1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a “civilizational collapse”, where an unprecedented number of people die and key technologies are (temporarily) lost.
2) In most of these scenarios the collapse would be temporary, with complete recovery likely on the scale of decades to a couple hundred years.
3) The highest-leverage point for intervention in a potential post-collapse environment would be at the state level. Individuals, even wealthy individuals, lack the infrastructure and human resources at the scale necessary to rebuild effectively. There are some decent mitigations possible in the space of information archival, such as seed banks and internet archives, but these are far less likely to have long-term impact compared to state efforts.
Based on these conclusions, I decided to focus my efforts on other global risk analysis areas, because I felt I didn’t have the relevant skills or resources to embark on a state-level project. If I did have those skills & resources, I believe (low to medium confidence) it would be a worthwhile project, and if I found a person or group who did possess those skills / resources, I would strongly consider offering my assistance.
I do know of a project here that is pretty promising, related to improving secure communication between nuclear weapon states. If you know people with significant expertise who might be interested, please PM me.
Thanks!