Is there any reason for you having decided to go for non-null probabilities of the interventions having no effect?
A zero effect reflects no difference in the value targeted by the intervention. For x-risk interventions, this means no disaster was averted (even if the probability of one was changed). For animal welfare interventions, it means the intervention did not change welfare. Each intervention will have side effects that do matter, but those side effects will either be hard to predict or occur on a much smaller scale: non-profits pay salaries, projects carry opportunity costs, and so on. Including them would add noise without meaningfully changing the results. We could use some scheme to flesh out these marginal effects, as you suggest, but it would take some care to do so in a way that was neither arbitrary nor potentially misleading. Do you see ways for this sort of change to be decision relevant?
It is also worth noting that assigning a large share of the samples to a single exact value makes certain computational shortcuts possible. More fine-grained assessments would only be feasible with fewer samples.
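To illustrate the kind of shortcut this enables, here is a minimal sketch (my own assumed structure, not the actual model's code): when most samples are exactly zero, one can draw only the nonzero effects rather than drawing every sample from a full distribution. The function name and the lognormal effect distribution are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_effects(n, p_zero, effect_dist):
    """Sample n intervention effects with a point mass at zero.

    Instead of drawing n values and then zeroing most of them, draw only
    the ~n * (1 - p_zero) nonzero effects. The nonzero draws are placed at
    the start of the array (order is not shuffled, which is fine for
    aggregate statistics such as means and quantiles).
    """
    n_nonzero = rng.binomial(n, 1.0 - p_zero)
    effects = np.zeros(n)
    effects[:n_nonzero] = effect_dist(n_nonzero)
    return effects

# Hypothetical setup: 99% chance of no effect, lognormal effect otherwise.
samples = sample_effects(1_000_000, 0.99, lambda k: rng.lognormal(0.0, 1.0, k))
```

With `p_zero = 0.99`, only about 10,000 of the million draws touch the effect distribution, which is where the savings come from.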
Less importantly, I also think the negative part of the effects distribution may have a different shape than the positive part. Ideally, the model would therefore allow one to specify not only the probability of the intervention being negative, but also the effects distribution conditional on the effect being negative (just as one can for the positive part).
Fair point. I agree that separate settings would be more realistic. I’m not sure whether the ability to set different shapes for the positive and negative distributions would significantly change the results, given that these effects are sampled for an all-or-nothing verdict on whether the intervention makes a difference. However, greater configurability has costs, and we opted for less of it here, though I could see a reasonable person having gone the other way.
Do you see ways for this sort of change to be decision relevant?
Never mind. I think the model as is makes sense because it is more general: one can always specify a smaller probability of the intervention having no effect, and then account for other factors in the distribution of the positive effect.
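The generality point above can be sketched as a three-way mixture (again an assumed structure, with hypothetical names and distributions): a point mass at zero, plus a shared magnitude distribution whose sign flips negative with some probability, which is where other factors can be folded in.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_signed_effects(n, p_zero, p_negative, magnitude_dist):
    """Mixture sketch: with probability p_zero the effect is exactly 0;
    otherwise a magnitude is drawn from magnitude_dist and made negative
    with probability p_negative. Positive and negative branches share one
    shape, as in the model discussed above.
    """
    nonzero = rng.random(n) >= p_zero
    effects = np.zeros(n)
    k = int(nonzero.sum())
    signs = np.where(rng.random(k) < p_negative, -1.0, 1.0)
    effects[nonzero] = signs * magnitude_dist(k)
    return effects

# Hypothetical settings: 90% no effect; 20% of realized effects negative.
samples = sample_signed_effects(100_000, 0.9, 0.2,
                                lambda k: rng.lognormal(0.0, 1.0, k))
```

Shrinking `p_zero` and reshaping `magnitude_dist` lets one express many of the finer-grained cases without adding new configuration options.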
However, there are costs to greater configurability, and we opted for less configurability here. Though I could see a reasonable person having gone the other way.
Right. If it is not super easy to add, then I guess it is not worth it.
Thanks for the clarifications!