this is great! two questions:
what's the procedure for going the other way: extracting an implied annual risk from a risk over a long timeframe (and an assumed credence)? i guess you'd model it in a pretty similar way.
this probably implies that many of the risk estimates we see in the wild are too high, right? i mean, there's no way to really know, but i wouldn't be surprised if many estimates on longer timeframes are actually projections of estimates on shorter timeframes, since the latter may be more manageable to reason about.
If you think there's an exchangeable model underlying someone else's long-run prediction, I'm not sure of a good way to try to figure it out. Off the top of my head, you could do something like this:
```python
import numpyro
import numpyro.distributions as dist

def model(a, b, conc_expert, expert_forecast):
    # forecasted distribution over annual probability of nuclear war
    prior_rate = numpyro.sample('rate', dist.Beta(a, b))
    # simulate 1000 years of war/no-war draws, i.e. ten 100-year windows
    with numpyro.plate('w', 1000):
        war = numpyro.sample('war', dist.Bernoulli(prior_rate),
                             infer={'enumerate': 'parallel'})
    # fraction of 100-year windows containing at least one war
    anywars = (war.reshape(10, 100).sum(1) > 0).mean()
    # treat the expert's forecast as a noisy observation of that fraction
    numpyro.sample('expert',
                   dist.Beta(conc_expert * anywars, conc_expert * (1 - anywars)),
                   obs=expert_forecast)
```
This says the expert is giving you a noisy estimate of the 100-year rate of war occurrence, and then treats their estimate as an observation. I don't really know how to think about how much noise to attribute to their estimate, and I wonder if there's a better way to incorporate it. The noise level is set by the parameter conc_expert; see here for an explanation of the "concentration" parameterization of the beta distribution.
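To make the role of conc_expert concrete: under this parameterization, Beta(c·m, c·(1−m)) has mean m and variance m(1−m)/(c+1), so a larger concentration asserts the expert's stated forecast sits tightly around the model-implied 100-year rate. A minimal sketch (beta_mean_var is a hypothetical helper name, not part of the model above):

```python
# Moment-matching view of the Beta "concentration" parameterization:
# Beta(c*m, c*(1-m)) has mean m and variance m*(1-m)/(c+1), so increasing
# the concentration c shrinks the assumed noise in the expert's estimate.
def beta_mean_var(m, conc):
    a, b = conc * m, conc * (1 - m)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var
```

For example, at m = 0.3 the variance falls from about 0.019 at conc = 10 to about 0.002 at conc = 100, which is one way to calibrate how much trust conc_expert encodes.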
I don't know! I think in general if it's an estimate for (say) 100-year risk with ~100 years of data (or evidence that is equivalently good), then you should at least be wary of this pitfall. If there's >> 100 years of data and it's a 100-year risk forecast, then the binomial calculation is pretty good.
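For question 1, in the regime where that fixed-rate binomial assumption is defensible, the back-calculation from a long-run risk to an implied annual risk is direct (implied_annual_risk is a hypothetical helper name):

```python
# Under the fixed-rate assumption (independent, identically distributed
# years), a long-run risk P over n years satisfies P = 1 - (1 - p)**n for
# annual risk p, so the implied annual risk is p = 1 - (1 - P)**(1 / n).
def implied_annual_risk(long_run_risk, years=100):
    return 1 - (1 - long_run_risk) ** (1 / years)

# e.g. a 50% risk over 100 years implies an annual risk of roughly 0.69%
```

The exchangeable model above is the generalization of this: instead of a single fixed p, it carries a whole posterior over the annual rate.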