I’m a bit skeptical about the value of formal modelling here. The parameter estimates would be almost entirely determined by your assumptions, and I’d expect the confidence intervals to be massive.
I think a toy model would be helpful for framing the issue, but going beyond that (to structural estimation) seems not worth it.
Also, if you’re aware of Rethink Priorities/Luisa Rodriguez’s work on modelling the odds and impacts of nuclear war (e.g., here), I’d be interested to hear whether you think making parameter estimates was worthwhile in that case. (And perhaps, if so, whether you think you’d have predicted that beforehand, vs being surprised that there ended up being a useful product.)
I ask because that seems like the most similar existing work I’m aware of (in methodology, if not topic). To me, that project seems to have been worthwhile, parameter estimates included, and it produced outputs that are perhaps more useful and less massively uncertain than I would’ve predicted. That seems like weak evidence that parameter estimates could be worthwhile in this case as well.
Thanks for the comment. That seems reasonable. I’d been wondering myself whether estimating the parameters of the model(s) (the third step) might be:
the most time-consuming step (if a relatively thorough/rigorous approach is attempted)
the least insight-providing step (since uncertainty would likely remain very large)
If that’s the case, this would also reduce the extent to which this model could “plausibly inform our point estimates” and “narrow our uncertainty”. Though the model might still capture the other two benefits (indicating what further research would be most valuable and suggesting points for intervention).
That said, if one goes to the effort of building a model of this, it seems to me like it’s likely at least worth doing something like:
surveying 5 GCR researchers or other relevant experts on what parameter estimates (or confidence intervals or probability distributions for parameters[1]) seem reasonable to them
inputting those estimates
seeing what outputs they suggest and, more importantly, performing sensitivity analyses (a minimal sketch of what this could look like is given below)
thereby gaining a sense of where the cruxes of disagreement lie and which parameters most warrant further research, further decomposition, and/or additional experts’ views
And then perhaps the project could stop there, or perhaps it could go on to investigate more deeply/rigorously the parameters where that seems most valuable.
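For concreteness, here’s a minimal sketch (in Python) of what the “input estimates, then run sensitivity analyses” steps could look like: Monte Carlo propagation of elicited parameter distributions through a toy multi-stage risk model, followed by a crude rank-correlation sensitivity analysis. All parameter names, distributions, and numbers below are purely illustrative placeholders, not actual estimates.

```python
# Minimal sketch: propagate expert-elicited parameter distributions
# through a toy risk model via Monte Carlo, then rank each parameter's
# influence on the output. All names/numbers below are illustrative.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
N = 20_000  # number of Monte Carlo samples

# Hypothetical elicited distributions, summarised here as lognormals.
# In practice these would be fit to the intervals experts actually give.
params = {
    "p_onset": rng.lognormal(np.log(1e-3), 1.0, N),
    "p_escalation": rng.lognormal(np.log(0.1), 0.8, N),
    "p_unrecoverable": rng.lognormal(np.log(0.05), 1.2, N),
}

# Toy model: overall risk = product of the (clipped) stage probabilities.
risk = np.ones(N)
for draws in params.values():
    risk *= np.clip(draws, 0.0, 1.0)

print(f"median risk: {np.median(risk):.2e}")
print(f"90% interval: {np.percentile(risk, 5):.2e} "
      f"to {np.percentile(risk, 95):.2e}")

# Crude sensitivity analysis: Spearman rank correlation between each
# parameter's draws and the output, to see which drives the spread.
for name, draws in params.items():
    rho, _ = spearmanr(draws, risk)
    print(f"{name}: rank correlation with output = {rho:+.2f}")
```

In practice the distributions would come from the expert survey rather than being hard-coded, and there are more principled sensitivity measures (e.g. variance-based indices), but rank correlations are a cheap first pass at spotting which parameters the output, and the disagreements, most depend on.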
Any thoughts on whether that seems worthwhile?
[1] Perhaps this step could benefit from use of Elicit; I should think about that if I pursue this idea further.