“Assuming that in 2100 the world looks the same as it did during the time of past nuclear near misses, and near misses are distributionally similar to actual nuclear strikes, and [a bunch of other assumptions], then the probability of a nuclear war before 2100 is x”.
We can debate the merits of such a model, but I think it's clear that it would be of limited use.
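To make the pile of assumptions concrete, here is a minimal sketch of the kind of extrapolation the quoted model describes. Every input below is invented for illustration (the near-miss count, the escalation probability, the independence assumptions); none of it comes from any actual analysis.

```python
# Hypothetical illustration only: all numbers are made up for the example.

years_observed = 70        # assumed observation window for recorded near misses
near_misses = 10           # assumed count of historical near misses
p_escalation = 0.1         # assumed chance a near miss becomes an actual strike

# Assumption 1: near misses arrive at a constant annual rate.
annual_near_miss_rate = near_misses / years_observed

# Assumption 2: each near miss independently escalates with fixed probability.
annual_war_prob = annual_near_miss_rate * p_escalation

# Assumption 3: the world between now and 2100 behaves like the observed era.
years_remaining = 85
p_war_by_2100 = 1 - (1 - annual_war_prob) ** years_remaining

print(f"P(nuclear war before 2100) ~ {p_war_by_2100:.2f}")
```

Every line of that sketch bakes in a contestable assumption, which is exactly why the resulting number is of limited use.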
But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn't GiveWell make similar assumptions about the impacts of most of their recommended charities? As far as I know, there are recent studies of GiveDirectly's effects, but the “recent” studies of the other charities' interventions probably had their samples chosen years ago, so their effects might not generalize to new locations. Where's the cutoff for your skepticism? Should we boycott the GiveWell-recommended charities whose interventions' ongoing impacts on outcomes of terminal value (lives saved, quality-of-life improvements) are not being rigorously measured in their new target areas, in favour of GiveDirectly?
To illustrate the issue of generalization, GiveWell did a pretty arbitrary adjustment for El Niño for deworming, although I think this is the most suspect assumption I've seen them make.
See Eva Vivalt's research on generalization (in the Causal Inference section) or her talk here.
“But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn't GiveWell make similar assumptions about the impacts of most of their recommended charities?”
Yes, we do! And the strength of those assumptions is key. Our skepticism should rise in proportion to the number of assumptions a model makes, and fall with their plausibility. So you're definitely right, we should always be skeptical of social science research, and indeed of any empirical research. We should be looking for hasty generalizations, gaps in the analysis, methodological errors, etc., and always pushing to do more research. But there's a massive difference between the assumptions driving GiveWell's models and the assumptions required in the nuclear threat example.
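A crude way to see why the number of assumptions matters: if a model rests on n independent assumptions and each holds with probability p, confidence in the model's conclusion decays roughly as p^n. A toy calculation, with made-up numbers:

```python
# Toy model: treats assumptions as independent and equally credible,
# which real assumptions rarely are.
def model_credibility(n_assumptions: int, p_each: float = 0.9) -> float:
    """Probability that all n assumptions hold at once."""
    return p_each ** n_assumptions

print(f"{model_credibility(3):.2f}")   # ~0.73: a few well-grounded assumptions
print(f"{model_credibility(12):.2f}")  # ~0.28: many stacked assumptions
```

The point is not the specific numbers but the compounding: stacking many shaky assumptions, as in the nuclear example, erodes confidence far faster than the handful of better-grounded assumptions behind an RCT-backed charity evaluation.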