The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically.
This agrees with my impression, and I do think it’s valuable to correct this misconception. (Sorry, I think it would have been better and clearer if I had said this in my first comment.) This is why I favor work with somewhat changed messaging/emphasis over no work.
It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity’s potential / eventual extinction.
I’m not sure we disagree. My current best guess is that most plausible kinds of civilizational collapse wouldn’t be an existential risk, including collapse caused by nuclear war. (For basically the reasons you mention.) However, I feel way less confident about this than about the claim that nuclear war wouldn’t immediately kill everyone. In any case, my point was not that I in fact think this is likely, but just that it’s sufficiently non-obvious that it would be costly if people walked away with the impression that it’s definitely not a problem.
I’m planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.
This sounds like a very valuable topic, and I’m excited to see more work on it.
FWIW, my guess is that you’re already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it’d be good to describe in detail “here is how this combination of different hazards could kill everyone”]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I’d be happy to review a post prior to publication, though I’m not sure if I’m particularly qualified.)
Yes, I was planning to get review prior to publishing this. In general, when it comes to risks from biotechnology, I’m trying to follow the principles we developed here: https://www.lesswrong.com/posts/ygFc4caQ6Nws62dSW/bioinfohazards

I’d be excited to see, or help workshop, better guidance for navigating information hazards in this space in the future.