Thanks for the comment. That seems reasonable. I myself had been wondering if estimating the parameters of the model(s) (the third step) might be:
the most time-consuming step (if a relatively thorough/rigorous approach is attempted)
the least insight-providing step (since uncertainty would likely remain very large)
If that's the case, this would also reduce the extent to which this model could "plausibly inform our point estimates" and "narrow our uncertainty". Though the model might still capture the other two benefits (indicating what further research would be most valuable and suggesting points for intervention).
That said, if one goes to the effort of building a model of this, it seems to me like it's likely at least worth doing something like:
surveying 5 GCR researchers or other relevant experts on what parameter estimates (or confidence intervals or probability distributions for parameters[1]) seem reasonable to them
inputting those estimates
seeing what outputs those estimates suggest and, more importantly, performing sensitivity analyses (sketched below)
thereby gaining a sense of where the cruxes of disagreement lie and which parameters most warrant further research, further decomposition, and/or additional experts' views
And then perhaps this project could stop there, or perhaps it could then involve somewhat deeper/more rigorous investigation of the parameters where that seems most valuable.
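To make the "inputting estimates and performing sensitivity analyses" steps concrete, here's a minimal sketch of what I have in mind. Everything here is a hypothetical placeholder (the parameter names, the Beta distributions standing in for aggregated expert estimates, and the toy product-of-conditional-probabilities model structure), not actual expert estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical distributions standing in for aggregated expert estimates
# (in practice, fitted to surveyed confidence intervals or distributions).
params = {
    "p_conflict": rng.beta(2, 18, n),       # chance of the triggering event
    "p_escalation": rng.beta(1, 9, n),      # chance of escalation given the event
    "p_unrecoverable": rng.beta(1, 19, n),  # chance recovery fails given escalation
}

# Toy model structure: overall risk as the product of the conditional steps.
output = params["p_conflict"] * params["p_escalation"] * params["p_unrecoverable"]

print(f"median: {np.median(output):.5f}")
print(f"90% interval: ({np.quantile(output, 0.05):.5f}, "
      f"{np.quantile(output, 0.95):.5f})")

# Crude sensitivity analysis: rank-correlate each parameter's samples with
# the output; the parameter with the highest correlation accounts for the
# most output variance and is the natural target for further research.
for name, samples in params.items():
    rho = stats.spearmanr(samples, output).correlation
    print(f"{name:>16}: Spearman rho = {rho:.2f}")
```

A fuller version might keep each expert's distributions separate (to locate disagreement) and use variance-based sensitivity indices rather than rank correlations, but even something this crude would surface which parameter the output most hinges on.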
Any thoughts on whether that seems worthwhile?
[1] Perhaps this step could benefit from use of Elicit; I should think about that if I pursue this idea further.