I think the approach you are suggesting is very much in line with the one in the section "Applying Bayesian adjustments to cost-effectiveness estimates for donations, actions, etc." of this post from Holden Karnofsky.
The bottom line is that when one applies Bayes's rule to obtain a distribution for cost-effectiveness based on (a) a normally distributed prior distribution and (b) a normally distributed "estimate error," one obtains a distribution with:
- Mean equal to the average of the two means weighted by their inverse variances.
- Variance equal to the harmonic sum of the two variances.
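The normal-normal update above can be sketched in a few lines of Python (a minimal illustration; the numbers in the example are made up):

```python
def posterior_normal(prior_mean, prior_var, est_mean, est_var):
    # Posterior precision is the sum of the two precisions, so the
    # posterior variance is the harmonic sum of the two variances.
    precision = 1 / prior_var + 1 / est_var
    # Posterior mean is the inverse-variance-weighted average of the means.
    mean = (prior_mean / prior_var + est_mean / est_var) / precision
    return mean, 1 / precision

# Illustrative numbers: a vague prior (variance 100) barely moves a
# tight estimate (variance 1).
mean, var = posterior_normal(0.0, 100.0, 10.0, 1.0)
print(round(mean, 2), round(var, 2))  # → 9.9 0.99
```

Note how the posterior mean stays close to the tighter of the two distributions, which is exactly the behaviour the weights below formalise.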
I used to apply the above as follows (CE stands for cost-effectiveness, E for expected value, and V for variance):
- E(CE) = "weight of modelled effects"*E(CE for modelled effects) + "weight of non-modelled effects"*E(CE for non-modelled effects).
- "Weight of modelled effects" = (1/V(CE for modelled effects))/(1/V(CE for modelled effects) + 1/V(CE for non-modelled effects)). This tends to 1 as the uncertainty of the non-modelled effects increases.
- "Weight of non-modelled effects" = (1/V(CE for non-modelled effects))/(1/V(CE for modelled effects) + 1/V(CE for non-modelled effects)). This tends to 0 as the uncertainty of the non-modelled effects increases.
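In code, those weights and their limiting behaviour look like this (a minimal sketch; the variances are placeholders, not estimates of anything):

```python
def inverse_variance_weights(var_modelled, var_non_modelled):
    # Each weight is that estimate's precision (1/variance) divided by
    # the total precision, so the two weights sum to 1.
    w_modelled = (1 / var_modelled) / (1 / var_modelled + 1 / var_non_modelled)
    return w_modelled, 1 - w_modelled

# As the variance of the non-modelled effects grows, essentially all
# the weight shifts to the modelled effects.
for var_nm in (1.0, 100.0, 1e6):
    w_m, w_nm = inverse_variance_weights(1.0, var_nm)
    print(var_nm, round(w_m, 4), round(w_nm, 4))
```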
If the modelled effects are lives saved in the near term, and the non-modelled effects are the impact on the welfare of terrestrial arthropods (which are not modelled by GW), V(CE for modelled effects) << V(CE for non-modelled effects). So, based on the above, you are saying that we should give much more weight to the lives saved in the near term, which therefore drive the cost-effectiveness.
I believe the formula of the 1st bullet is not correct. I will try to illustrate with a sort of reversed Pascal's mugging. Imagine there was a button which, when pressed, would destroy the whole universe with probability 50 %, and someone was considering whether to press it. For the sake of the argument, we can suppose the person would certainly (i.e. with probability of 100 %) be happy while pressing the button. Based on the formula of the 1st bullet, it looks like all the weight would go to the pretty negligible effect on the person pressing the button, because it would be a certain effect. So the cost-effectiveness of pressing the button would essentially be driven by the effect on one single person, as opposed to the consideration that the whole universe could end with likelihood 50 %. The argument works for any probability of universal destruction lower than 1 (e.g. 99.99 %), so the example also implies null value of information from learning more about the impact of pressing the button. All of this seems pretty wrong.
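To put rough numbers on the button example (all of them invented purely for illustration): give the press a certain benefit of 1 to the presser, and a hugely negative, hugely uncertain effect on the universe. Inverse-variance weighting then discards the catastrophic term almost entirely:

```python
def weighted_ce(mean_a, var_a, mean_b, var_b):
    # Inverse-variance-weighted combination of the two expected effects.
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    return w_a * mean_a + (1 - w_a) * mean_b

# Near-certain tiny benefit to the presser vs. a 50 % chance of losing
# everything (illustrative magnitudes only).
presser = (1.0, 1e-6)     # (mean, variance) of the certain effect
universe = (-1e12, 1e24)  # (mean, variance) of the catastrophic effect

print(weighted_ce(*presser, *universe))  # ≈ 1.0: the catastrophe is weighted away
print(presser[0] + universe[0])          # plain sum of expected effects: hugely negative
```

The contrast between the two printed numbers is the point: simply adding the expected effects says "do not press," while the inverse-variance-weighted average says pressing is mildly good.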
However, I still think priors are valuable. If 2 restaurants have a rating of 4.5/5, but one of the ratings is based on 1 review, and the other on 1,000 reviews, the restaurant with more reviews is most likely better (assuming a prior lower than 4.5).
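The restaurant intuition is just Bayesian shrinkage: the same normal-normal update, with the estimate's variance shrinking as reviews accumulate. A minimal sketch (the prior of 3.5/5 and the per-review variance of 1 are assumptions made up for illustration):

```python
def shrunk_rating(prior_mean, prior_var, avg_rating, review_var, n_reviews):
    # The average of n reviews has variance review_var / n, so more
    # reviews mean a tighter estimate and less shrinkage to the prior.
    est_var = review_var / n_reviews
    precision = 1 / prior_var + 1 / est_var
    return (prior_mean / prior_var + avg_rating / est_var) / precision

one_review = shrunk_rating(3.5, 0.5, 4.5, 1.0, 1)
many_reviews = shrunk_rating(3.5, 0.5, 4.5, 1.0, 1000)
print(round(one_review, 2), round(many_reviews, 2))  # → 3.83 4.5
```

With one review the 4.5 is pulled substantially toward the prior; with 1,000 reviews it barely moves, matching the intuition that the heavily reviewed restaurant is most likely better.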
So I think the formula is not right as I wrote it above, but is pointing to something valuable. I would say it can be corrected as follows:
E(CE) = "weight of method 1"*E(CE for method 1) + "weight of method 2"*E(CE for method 2).
I do not have a clear approach to estimating the weights, but I think they should account not only for uncertainty, but also for scale. Inverse-variance weighting appears to be a good approach if all methods output estimates for the same variable (as in a meta-analysis). For cost-effectiveness analyses, I suppose the relevant variable is total cost-effectiveness. This encompasses near-term effects on people, but also near-term effects on animals, and long-term effects. Since the scope of GW's estimates for lives saved differs from that of my estimates for the impact on terrestrial arthropods, I believe we cannot directly apply inverse-variance weighting.
It is not reasonable to press a button which may well destroy the whole universe for the sake of being happy for certain. In the same way, but to a much smaller extent, I do not think we can conclude GW's top charities are robustly cost-effective just because we are pretty certain about their near-term effects on people. We arguably have to investigate (decreasing uncertainty about, and increasing the resilience of our estimates of) the other effects, such as those on animals, and the consequences of changing population size (which have apparently not been figured out; see comments here).
One issue here is that the same objection could potentially be applied to longtermist-focused charities, but I actually don't think this is true. I think (say) working in government to reduce the risk of biological weapons is actually far more robustly positive than trying to improve insect welfare by reducing deforestation. It also seems like the value of the far future could be far greater than the impact on present-day insects.
I agree efforts around pandemic preparedness are more robustly positive than those targeting insect welfare. 2 strong arguments come to mind:
- It looks like at least some projects (e.g. developing affordable super PPE) are robustly good for decreasing extinction risk, and I think extinction is robustly bad.
- Extinction risks are pretty large in scale, and so they will tend to be a more important driver of the total cost-effectiveness. This is not necessarily the case for efforts to improve insect welfare. These might e.g. unintentionally cause people to think that nature/wildlife is intrinsically good/bad, and this may plausibly shape how people think about spreading (or not) wildlife beyond Earth, which may be the driver of the total cost-effectiveness.