this is great! two questions:
what’s the procedure for going the other way—for extracting an implied annual risk based on a risk over a long timeframe (and an assumed credence)? i guess you’d model it in a pretty similar way.
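(In the simplest case, where you assume a known, constant annual risk and independent years, the inversion is just algebra rather than a model: if P is the risk over n years, the implied annual risk is 1 − (1 − P)^(1/n). This ignores uncertainty over the rate, which is the whole complication; the numbers below are made up for illustration.)

```python
def implied_annual_risk(long_run_risk, years=100):
    """Annual risk implied by a long-horizon risk, assuming a known,
    constant annual rate and independent years (a strong assumption)."""
    return 1 - (1 - long_run_risk) ** (1 / years)

# e.g. a claimed 50% chance of nuclear war within 100 years
print(implied_annual_risk(0.5))  # roughly 0.0069 per year
```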
this probably implies that many of the risk estimates we see in the wild are too high, right? i mean, there’s no way to really know, but i wouldn’t be surprised if many estimates on longer timeframes are actually projections of estimates on shorter timeframes, since the latter may be more manageable to reason about.
If you think there’s an exchangeable model underlying someone else’s long-run prediction, I’m not sure of a good way to try to figure it out. Off the top of my head, you could do something like this:
import numpyro
import numpyro.distributions as dist

def model(a, b, conc_expert, expert_forecast):
    # prior over the annual probability of nuclear war
    prior_rate = numpyro.sample('rate', dist.Beta(a, b))
    with numpyro.plate('w', 1000):  # 1000 simulated years
        war = numpyro.sample('war', dist.Bernoulli(prior_rate),
                             infer={'enumerate': 'parallel'})
    # fraction of the ten 100-year windows containing at least one war
    anywars = (war.reshape(10, 100).sum(1) > 0).mean()
    # treat the expert's forecast as a noisy beta observation of that fraction
    expert_prediction = numpyro.sample(
        'expert', dist.Beta(conc_expert * anywars, conc_expert * (1 - anywars)),
        obs=expert_forecast)
This says the expert is giving you a noisy estimate of the 100 year rate of war occurrence, and treats that estimate as an observation. I don’t really know how to think about how much noise to attribute to their estimate, and I wonder if there’s a better way to incorporate it. The noise level is set by the parameter conc_expert; see here for an explanation of the “concentration” parameter in the beta distribution.
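To make the role of conc_expert concrete: a Beta(c·m, c·(1−m)) distribution has mean m and variance m(1−m)/(c+1), so a larger concentration means a tighter (less noisy) expert. A small sketch of that relationship (pure arithmetic, no numpyro needed; the numbers are illustrative):

```python
def beta_mean_and_sd(mean, concentration):
    """Mean and standard deviation of Beta(c*m, c*(1-m)), the
    parameterization used for the expert's noisy estimate."""
    var = mean * (1 - mean) / (concentration + 1)
    return mean, var ** 0.5

# a "true" 100-year war rate of 0.3, at two noise levels
print(beta_mean_and_sd(0.3, 10))    # sd around 0.14: a vague expert
print(beta_mean_and_sd(0.3, 1000))  # sd around 0.014: a confident expert
```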
I don’t know! I think in general if it’s an estimate for (say) 100 year risk with <= 100 years of data (or evidence that is equivalently good), then you should at least be wary of this pitfall. If there’s >>100 years of data and it’s a 100 year risk forecast, then the binomial calculation is pretty good.
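The binomial calculation mentioned here can be sketched as: estimate the annual rate from an observed event count, then compound it over the horizon. This assumes a constant, independent annual rate, and the numbers below are hypothetical, not real data:

```python
def binomial_horizon_risk(events, years_observed, horizon=100):
    """Point estimate of the horizon-length risk from an observed event
    count, assuming a constant annual rate and independent years."""
    annual = events / years_observed
    return 1 - (1 - annual) ** horizon

# e.g. 3 events in 500 years of (hypothetical) data
print(binomial_horizon_risk(3, 500))  # about 0.45
```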