Thanks for going through the “premises” and leaving your comments on each; very helpful for me in further clarifying and reflecting upon my thoughts!
On P1 (that nuclear escalation is the main or only path to existential catastrophe):
Yes, I do argue for the larger claim that a one-time deployment of nuclear weapons could be the start of a development that ends in existential catastrophe even if there is no nuclear escalation.
I give a partial justification of that in the post and in my comment to Aron,
but I accept that it’s not completely illegitimate for people to continue to disagree with me; opinions on a question like this rest on quite foundational beliefs, intuitions, and heuristics, and two reasonable people can, imo, have different sets of these.
(Would love to get into a more in-depth conversation on this question at some point though, so I’d suggest putting it on the agenda for the next time we happen to see each other in person :)!)
On P2:
Your suggested reformulation (“preventing the first nuclear deployment is more tractable because preventing escalation has more unknowns”) is pretty much in line with what I meant this premise/proposition to say in the context of my overall argument. So, at a high level, this doesn’t seem like a crux that would lead the two of us to take differing stances on my overall conclusion.
You’re right that I’m not very enthusiastic about the idea of putting actual probabilities on any of the event categories I mention in the post (event categories: possible consequences of a one-time deployment of nukes; conceivable effects of different types of interventions). We’re not even close to sure that we/I have succeeded in identifying the range of possible consequences (pathways to existential catastrophe) and effects (of interventions), and those consequences and effects that I did identify aren’t very specific or well-defined; both of these seem like prudent steps that should precede the assignment of probabilities. I realize while writing this that you will probably just once again disagree with the leap I made (from deep uncertainty to rejecting probability assignment), and that I’m not doing much to advance our discussion here. Apologies!

On your specific points: correct, I don’t think we can advance much beyond an intuitive, extremely uncertain assignment of probabilities; I think that the alternative (whose existence you deny) is to acknowledge our lack of reasonable certainty about these probabilities and to make decisions in the awareness that these unknowns exist (in our model of the world); and I (unsurprisingly) disagree that institutions or people who choose this alternative will do systematically worse than those who always assign probabilities.

(I don’t think the start-up analogy is a good one in this context, since venture capitalists get to make many bets and they receive reliable, repeated feedback on those bets. Neither of these seems particularly true in the nuclear risk field, whether we’re talking about assigning probabilities to the consequences of nuclear weapons deployment or about the effects of interventions to reduce escalation risk / prepare for a post-nuclear-war world.)
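To make the feedback point a bit more concrete, here is a minimal toy sketch (purely my own illustration; the event rate, the number of bets, and the prior are all made up) of why many resolved bets let a forecaster calibrate, while a one-off, never-resolved question leaves them stuck with whatever intuitive prior they started from:

```python
import random

random.seed(0)

def estimate_from_feedback(true_rate: float, n_bets: int) -> float:
    """Estimate an event's base rate from n resolved bets (the VC-style situation)."""
    hits = sum(random.random() < true_rate for _ in range(n_bets))
    return hits / n_bets

true_rate = 0.2  # hypothetical 'true' frequency, unknowable in the nuclear case

# Many resolved bets: the estimate converges toward the true rate,
# so miscalibration gets corrected over time.
print(estimate_from_feedback(true_rate, 1000))  # roughly 0.2

# One never-resolved question: there is no error signal at all, so whatever
# intuitive prior you start with is the number you keep.
prior_guess = 0.5
print(prior_guess)
```

(This only simulates the feedback structure of the two situations, of course, not anything specific to nuclear risk.)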
On P3: Thanks for flagging that, even after reading my post, you feel ill-equipped to assess my claim regarding the value of interventions for preventing first use vs. interventions for preventing further escalation. Enabling readers to navigate, understand, and form an opinion on claims like that one was one of the core goals I started this summer’s research fellowship with; I shall reflect on whether this post could have been different, or whether a complementary post could have helped, to better achieve this enabling function!
On P4: Haha yes, I see this now, thanks for pointing it out! I’m wondering whether renaming them “propositions” or “claims” would be more appropriate?
Ah, I think maybe there is/was a misunderstanding here. I don’t reject the claim that forecasters are (much) better on average when using probabilities than when refusing to do so. My point is that the questions we’re talking about (what would be the full set* of important consequences of nuclear first use, or the full set* of important consequences of nuclear risk reduction interventions X and Z) are not your standard, well-defined, soon-to-be-resolved forecasting questions. So in a sense, the very fact that the questions at issue cannot be part of a forecasting experiment is one of the main reasons why I think they are so deeply uncertain and hard to answer with more than intuitive guesswork (if they could be part of a forecasting experiment, people could test and train their skill at assigning probabilities by answering many such questions, in which case I guess I would be more amenable to the claim that assigning probabilities can be useful).

The way I understood our disagreement, it was not about the predictive performance of actors who do vs. don’t (always) use probabilities, but rather about their decision quality. The actual disagreement may be that I think there is a significant difference between the two (for some decisions, high decision quality is not a neat function of explicit predictive ability), whereas you might be close to equating them?
[*by “full set” I mean that this is supposed to include indirect/second-order consequences]
That said, I can’t, unfortunately, think of any alternative ways to resolve the disagreement regarding the decision quality of people using vs. refusing to use probabilities in situations where assessing the effects of a decision/action after the fact is highly difficult… (While the comment added by Noah Scales contains some interesting ideas, I don’t think it does anything to resolve this stalemate, since it is also focused on comparing & assessing predictive success for questions with a small set of known answer options.)
One other thing, because I forgot about it in my last response:
“Finally, I am not that sure of your internal history but one worry would be if you decided long ago intuitively based on the cultural milieu that the right answer is ‘the best intervention in nuclear policy is to try to prevent first use’ and then subconsciously sought out supporting arguments. I am not saying this is what happened or that you are any more guilty of this than me or anyone else, just that it is something I and we all should be wary of.”
-> I think this is a super important point, actually, and I agree that it’s a concern that should be kept in mind when reading my essay on this topic. I did have the intuitive aversion to focusing on tail-end risks before I came up with all the supporting arguments; basically, this post came about as a result of me asking myself, “Why do I think it’s such a horrible idea to focus on the prevention of and preparation for the worst case of a nuclear confrontation?” I added a footnote towards the beginning of the post (fn. 2) to be more transparent about this. Thanks for raising it!