Ah, I think maybe there is/was a misunderstanding here. I don’t reject the claim that forecasters are (much) better on average when using probabilities than when refusing to do so. My point is that the questions we’re talking about (what would be the full set of important* consequences of nuclear first-use, or the full set of important* consequences of nuclear risk reduction interventions X and Z) are not your standard, well-defined, soon-to-be-resolved forecasting questions. So in a sense, the very fact that the questions at issue cannot be part of a forecasting experiment is one of the main reasons why I think they are so deeply uncertain and so hard to answer with more than intuitive guesswork. (If they could be part of a forecasting experiment, people could test and train their skill at assigning probabilities by answering many such questions, in which case I would be more amenable to the claim that assigning probabilities can be useful.) The way I understood our disagreement, it was not about the predictive performance of actors who do vs. don’t (always) use probabilities, but about their decision quality. I think the actual disagreement may be that I see a significant difference between the two (for some decisions, high decision quality is not a neat function of explicit predictive ability), whereas you might come close to equating them?
[*by “full set” I mean that this is supposed to include indirect/second-order consequences]
That said, I unfortunately can’t think of any alternative way to resolve the disagreement about the decision quality of people who use vs. refuse to use probabilities in situations where assessing the effects of a decision/action after the fact is highly difficult… (While the comment added by Noah Scales contains some interesting ideas, I don’t think it does anything to resolve this stalemate, since it is also focused on comparing & assessing predictive success for questions with a small set of known answer options.)
One other thing, because I forgot about that in my last response:
“Finally, I am not that sure of your internal history but one worry would be if you decided long ago intuitively based on the cultural milieu that the right answer is ‘the best intervention in nuclear policy is to try to prevent first use’ and then subconsciously sought out supporting arguments. I am not saying this is what happened or that you are any more guilty of this than me or anyone else, just that it is something I and we all should be wary of.”
-> I think this is a super important point, actually, and I agree that it’s a concern readers should keep in mind when reading my essay on this topic. I did have the intuitive aversion to focusing on tail-end risks before I came up with all the supporting arguments; basically, this post came about as a result of me asking myself “Why do I think it’s such a horrible idea to focus on the prevention of and preparation for the worst case of a nuclear confrontation?” I added a footnote towards the beginning of the post (fn. 2) to be more transparent about this. Thanks for raising it!
Sarah, you wrote:

“(While the comment added by Noah Scales contains some interesting ideas, I don’t think it does anything to resolve this stalemate, since it is also focused on comparing & assessing predictive success for questions with a small set of known answer options)”
Yes, that’s right: my suggestions let you assess predictive success in some cases, for example over a set of futures that partition a space of possibilities. Since the futures partition the space, exactly one of them will occur and the rest will not. A yes/no forecast works this way.
Actually, if you have any question about the future at a specific time that you feel uncertain about, you can phrase it as a yes/no question. That partitions the space of possibilities at that future time. You can then answer the question and test your predictive success. Whether your answer has any value is the real concern.
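To make the testing part concrete, here is a minimal Python sketch of scoring yes/no forecasts after their resolution dates. The probabilities and outcomes are invented purely for illustration, and the Brier score is just one common way to measure predictive success, not something proposed in the discussion above:

```python
# Minimal sketch: scoring yes/no forecasts over an exhaustive partition of outcomes.
# The probabilities and outcomes below are invented purely for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and what happened (1 = yes, 0 = no)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Each question is phrased so that exactly one of yes/no will turn out true by time T.
forecasts = [0.7, 0.2, 0.9]   # stated probabilities that each event occurs by time T
outcomes  = [1,   0,   1]     # what actually happened once time T arrived

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # lower is better; 0.0 is perfect
```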
However, one option I mentioned is to list contingencies that, if present, result in contingent situations (futures). That is not the same as predicting the future, since the contingencies don’t have to be present or identified (EDIT: in the real world, i.e., as facts), and you do not expect their contingent futures otherwise.
If condition X, then important future Y happens.
Condition X could be present now or later, but I can’t identify or infer its presence now.
Deep uncertainty is usually taken to mean treating those contingent situations as meaningful anyway. As someone without predictive information, you can only start offering models, like the following (a rough code sketch of the same idea appears after the list):
If X, then Y
If Y, then Z
If W and G, then B
If B, then C
A
T
I’m worried that A because …
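Here is a minimal Python sketch of that kind of model, assuming placeholder condition names (X, Y, W, G, B, A) that stand in for real-world conditions. It only derives contingent futures from conditions that have actually been identified, and it makes no claim that any of them will occur:

```python
# Minimal sketch of the contingency-model idea above: rules map conditions to
# contingent futures, and futures are derived only from conditions that have
# actually been identified. The names are placeholders, not real-world claims.

rules = {
    "X": "Y",          # if condition X, then contingent future Y
    "Y": "Z",          # if Y, then Z
    ("W", "G"): "B",   # if W and G together, then B
    "B": "C",          # if B, then C
}

def contingent_futures(identified):
    """Return everything that follows from the conditions identified so far."""
    known = set(identified)
    changed = True
    while changed:
        changed = False
        for cond, future in rules.items():
            needed = set(cond) if isinstance(cond, tuple) else {cond}
            if needed <= known and future not in known:
                known.add(future)
                changed = True
    return known - set(identified)

# If only "A" has been identified so far, no rule fires: you can still worry
# about A and work to prevent it without predicting that Y, Z, B, or C occurs.
print(contingent_futures({"A"}))        # -> set()
print(contingent_futures({"X"}))        # -> {"Y", "Z"}
print(contingent_futures({"W", "G"}))   # -> {"B", "C"}
```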
You can talk about scenarios, but you don’t know or haven’t seen their predictive indicators.
You can discuss contingent situations, but you can’t claim that they will occur.
You can still work to prevent those contingent situations, and that seems to be your intention in your area of research. For example, you can work to prevent current condition “A”, whatever that is. Nuclear proliferation, maybe, or deployment of battlefield nukes. Nice!
You are not asking the question “What will the future be?” without any idea of what some scenarios of the future depend on. After all, if the future is a nuclear holocaust, you can backtrack to at least some earlier point in time: far enough to determine that nuclear weapons were detonated prior to the holocaust, further to someone or something detonating them, and then maybe further to who had them or why they detonated them, and that might be where the gaps in knowledge appear.
Yes, I think this captures our difference in views pretty well: I do indeed think predictive accuracy is very valuable for decision quality. Of course, there are other skills/attributes that are also useful for making good decisions. Predicting the future seems pretty key though.