Well, far be it from me to tell others how to spend their time, but I guess it depends on what the goal is. If the goal is literally to put a precise number (or range) on the probability of nuclear war before 2100, then yes, I think that’s a fruitless and impossible endeavour. History is not an i.i.d. sequence of events. If there is such a war, it will be the result of complex geopolitical factors shaped by human beliefs, desires, and knowledge at the time. We cannot pretend to know what these will be. Even if you were to gather all the available evidence we have on nuclear near misses and generate some sort of probability from it, the answer would look something like:
“Assuming that in 2100 the world looks the same as it did during the time of past nuclear near misses, and near misses are distributionally similar to actual nuclear strikes, and [a bunch of other assumptions], then the probability of a nuclear war before 2100 is x”.
We can debate the merits of such a model, but I think it’s clear that it would be of limited use.
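For concreteness, here is a toy Python sketch of what that kind of frequency-based extrapolation might look like. Every number in it is a hypothetical placeholder rather than a real estimate, and the output inherits exactly the assumptions flagged in the quote above.

```python
# Toy sketch of the kind of frequency-based extrapolation described above.
# Every number here is a hypothetical placeholder, not a real estimate, and
# the output only holds under the strong assumptions flagged in the quote.

NEAR_MISSES_OBSERVED = 20   # hypothetical count of recorded near misses
YEARS_OBSERVED = 75         # hypothetical observation window, in years
P_ESCALATE = 0.01           # hypothetical chance a single near miss becomes a war
YEARS_TO_2100 = 75          # hypothetical number of years remaining until 2100

# Assume the future near-miss rate matches the past rate...
expected_future_misses = (NEAR_MISSES_OBSERVED / YEARS_OBSERVED) * YEARS_TO_2100

# ...and that each near miss independently escalates with probability P_ESCALATE.
p_war_by_2100 = 1 - (1 - P_ESCALATE) ** expected_future_misses

print(f"P(nuclear war before 2100) ≈ {p_war_by_2100:.2f}")  # ~0.18 with these placeholders
```

The point is not the output but how much work the assumptions are doing: change the escalation probability or the “future looks like the past” assumption and the number moves by orders of magnitude.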
None of this is to say that we shouldn’t be working on the nuclear threat, of course. There are good arguments for why this is a big problem that have nothing to do with probability and subjective credences.
You say that “there are good arguments for working on the threat of nuclear war”. As I understand your argument, you also say we cannot rationally distinguish between the claim “the chance of nuclear war in the next 100 years is 0.00000001%” and the claim “the chance of nuclear war in the next 100 years is 1%”. If you can’t rationally put probabilities on the risk of nuclear war, why would you work on it?
Why are probabilities prior to action, and why are they so fundamental? Could Andrew Wiles have “rationally put probabilities” on his solving Fermat’s Last Theorem? Does this mean he shouldn’t have worked on it? Arguments do not have to come in numerical form.
If you refuse to claim that the chance of nuclear war before 2100 is greater than 0.000000000001%, then I don’t see how you could make a good case for working on it over some intuitively trivial alternative, such as painting my wall blue. What would the argument be if you are completely agnostic as to whether it is a serious risk?
To me, the fundamental point isn’t probabilities; it’s that you have to make a choice about what you do. If I have the option to give a $1mn grant to preventing nuclear war or to give the grant to something else, then no matter what I do, I have made a choice. So I need a decision theory for making that choice.
And to me, subjective probabilities, and Bayesian epistemology more generally, are by far the best decision theory I’ve come across for making choices under uncertainty. If there’s a 1% chance of nuclear war, the grant is worth making; if there’s a 10^-15 chance of nuclear war, it is not. I need to make a decision, and so probabilities are fundamental, because they are my tool for making that decision.
And there are a bunch of important questions where we don’t have data and no reasonable way to get it (e.g., nuclear war!). Any approach that rejects the ability to reason under uncertainty in situations like this is essentially the decision theory of “never make speculative grants like this”. And I think that is a clearly terrible decision theory (though I don’t think you’re actually arguing for this policy?).
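To make the arithmetic behind the 1% vs. 10^-15 comparison above explicit, here is a minimal expected-value sketch. The harm figure and the fraction of risk the grant is assumed to remove are hypothetical placeholders, not estimates anyone in this thread has defended.

```python
# Minimal expected-value sketch of the 1% vs. 10^-15 comparison above.
# Every number is a hypothetical placeholder, not a defended estimate.

GRANT_COST = 1_000_000     # the $1mn grant under consideration
HARM_IF_WAR = 1e12         # hypothetical dollar-equivalent harm of a nuclear war
RISK_REDUCTION = 0.001     # hypothetical fraction of the risk the grant removes

def expected_benefit(p_war: float) -> float:
    """Expected harm averted by the grant, given a subjective probability of war."""
    return p_war * RISK_REDUCTION * HARM_IF_WAR

for p in (1e-2, 1e-15):
    verdict = "worth making" if expected_benefit(p) > GRANT_COST else "not worth making"
    print(f"p(war) = {p:.0e}: expected benefit ≈ ${expected_benefit(p):,.0f} -> {verdict}")
```

On this framing the decision flips purely on the subjective probability, which is the sense in which probabilities are “fundamental” here.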
None of this is to say that we shouldn’t be working on the nuclear threat, of course. There are good arguments for why this is a big problem that have nothing to do with probability and subjective credences.
Can you give some examples? I expect that someone could respond “That could be too unlikely to matter enough” to each of them, since we won’t have good enough data.
Sure: nukes exist. They’ve been deployed before, and we know they have incredible destructive power. We know that many countries have them and have threatened to use them. We know that protocols are in place for their use.
To me this seems like you’re making a rough model with a bunch of assumptions, e.g. that past use, threats, and protocols increase the risk, but without saying by how much or putting confidences or estimates (even ranges) on anything. Why not think the risks are too low to matter despite past use, threats, and protocols?
“Assuming that in 2100 the world looks the same as it did during the time of past nuclear near misses, and near misses are distributionally similar to actual nuclear strikes, and [a bunch of other assumptions], then the probability of a nuclear war before 2100 is x”.
We can debate the merits of such a model, but I think it’s clear that it would be of limited use.
But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn’t GiveWell make similar assumptions about the impacts of most of their recommended charities? As far as I know, there are recent studies of GiveDirectly’s effects, but the “recent” studies of the other charities’ interventions probably had their samples chosen years ago, so their effects might not generalize to new locations. Where’s the cutoff for your skepticism? Should we boycott, in favour of GiveDirectly, the GiveWell-recommended charities whose ongoing interventions’ impacts of terminal value (lives saved, quality-of-life improvements) are not being rigorously measured in their new target areas?
To illustrate the issue of generalization: GiveWell made a fairly arbitrary El Niño adjustment for deworming, although I think that’s the most suspect assumption I’ve seen them make.
See Eva Vivalt’s research on generalization (in the Causal Inference section) or her talk here.
But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn’t GiveWell make similar assumptions about the impacts of most of their recommended charities?
Yes, we do! And the strength of those assumptions is key. Our skepticism should rise in proportion to the number and strength of the assumptions. So you’re definitely right: we should always be skeptical of social science research, indeed of any empirical research. We should be looking for hasty generalizations, gaps in the analysis, methodological errors, etc., and always pushing to do more research. But there’s a massive difference between the assumptions driving GiveWell’s models and the assumptions required in the nuclear threat example.