The result of all this is that even with 150k simulations, the expected value calculated on any given run of the model (allowing a long future) can come out positive or negative, swinging back and forth from run to run. This is not to say that the expected value is unknowable: our model does even out once we’ve included billions of simulations. But the fact that it takes so many demonstrates that outcomes have extremely high variance, and we have little ability to predict the actual value produced by any single intervention.
Hi there,
Do you have distributions in the denominator which can take the value of 0? In my Monte Carlo simulations, this has been the only cause of sign uncertainty in the expected cost-effectiveness. However, your model is more complex than the ones I have worked with, and presumably involves wider distributions, so maybe you get instability even without distributions in the denominator which can take the value of 0.
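To illustrate what I mean, here is a minimal sketch (not your model; the benefit and cost distributions are made up) of how a denominator distribution with mass at and around 0 makes the sign of the estimated expected cost-effectiveness unstable:

```python
import numpy as np

def mean_cost_effectiveness(rng, n=150_000):
    benefit = rng.normal(1.0, 1.0, n)  # hypothetical benefit per simulation
    cost = rng.normal(0.5, 1.0, n)     # hypothetical cost, with mass at and below 0
    return np.mean(benefit / cost)     # the ratio is heavy-tailed, so its sample mean is unstable

for seed in range(5):
    print(mean_cost_effectiveness(np.random.default_rng(seed)))
# The printed means jump around in sign and magnitude from run to run.
```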
The issue is that our parameters can lead to different rates of cubic population growth. A 1% difference in the rate of cubic growth can lead to huge differences over 50,000 years. Ultimately, this means that if the right parameter values dictating population are sampled in a situation in which the intervention backfires, the intervention might have a negative average value across all the samples. With high enough variance, the sign of the mean will be determined by the sign of the most extreme value. If x-risk mitigation work backfires in 1⁄4 of cases, we might expect 1⁄4 of collections of samples to have a negative mean.
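To make that last point concrete, here is a toy illustration with made-up numbers (an extremely heavy-tailed lognormal for the outcome magnitude and a 1⁄4 backfire probability), showing that the sign of a batch mean is effectively set by the largest-magnitude draw, so roughly 1⁄4 of batches come out negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_batches, batch_size = 1_000, 150_000

negative_batches = 0
for _ in range(n_batches):
    magnitude = rng.lognormal(mean=0.0, sigma=25.0, size=batch_size)  # extremely heavy-tailed outcome sizes
    sign = np.where(rng.random(batch_size) < 0.25, -1.0, 1.0)         # intervention backfires in ~1/4 of samples
    negative_batches += (magnitude * sign).mean() < 0

print(negative_batches / n_batches)  # prints a fraction close to 0.25
```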
Thanks for clarifying, Derek!
I’d guess that this is because an x-risk intervention might have on the order of a 1⁄100,000 chance of averting extinction. So if you run 150k simulations, you might get 0 or 1 or 2 or 3 simulations in which the intervention does anything. Then there’s another part of the model for estimating the value of averting extinction, but you’re only taking 0 or 1 or 2 or 3 draws that matter from that part of the model because in the vast majority of the 150k simulations that part of the model is just multiplied by zero.
And if the intervention sometimes increases extinction risk instead of reducing it, then the few draws where the intervention matters will include some where its effect is very negative rather than very positive.
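A stylised sketch of this, with entirely made-up numbers (a 1⁄100,000 chance that the intervention changes anything and a placeholder distribution for the value of avoiding extinction), shows how few ‘live’ draws a 150k-run estimate actually rests on:

```python
import numpy as np

p_matters = 1e-5   # hypothetical chance the intervention changes whether extinction occurs
n = 150_000

for seed in range(5):
    rng = np.random.default_rng(seed)
    matters = rng.random(n) < p_matters                   # ~0 to 3 live draws per run
    value = rng.lognormal(mean=15.0, sigma=3.0, size=n)   # placeholder value of avoiding extinction
    print(matters.sum(), np.mean(matters * value))        # both vary wildly across runs
```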
One way around this is to factor the model, and do 150k Monte Carlo simulations for the ‘value of avoiding extinction’ part of the model only. The part of the model that deals with how the intervention affects the probability of extinction could be solved analytically, or solved with a separate set of simulations, and then combined analytically with the simulated distribution of value of avoiding extinction. Or perhaps there’s some other way of factoring the model, e.g. factoring out the cases where the intervention has no effect and then running simulations on the effect of the intervention conditional on it having an effect.
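For example (a rough sketch with placeholder distributions, and assuming the change in extinction probability is independent of the value of avoiding extinction), the factored version only combines the two pieces at the end:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150_000

# Part 1: simulate only the 'value of avoiding extinction' part of the model (placeholder distribution).
value_if_averted = rng.lognormal(mean=15.0, sigma=3.0, size=n)

# Part 2: the intervention's effect on extinction probability, solved analytically or with
# its own (possibly much smaller) set of simulations; here just a placeholder point estimate.
expected_delta_p = 1e-5

# Combine analytically instead of multiplying a nearly-always-zero indicator into every simulation.
print(expected_delta_p * value_if_averted.mean())
```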
That makes sense to me, Dan!