Hi Hamish! I appreciate your critique.

Others have enumerated many reservations about this critique, which I agree with. Here I’ll give several more.
> why isn’t the “1000x” calculation actually spelled out?
As you’ve seen, given Rethink’s moral weights, many plausible choices for the remaining “made-up” numbers give a cost-effectiveness multiple on the order of 1000x. Vasco Grilo conducted a similar analysis which found a multiple of 1.71k. I didn’t commit to a specific analysis for a few reasons:
I agree with your point that uncertainty is really high, and I don’t want to give a precise multiple which may understate the uncertainty.
Reasonable critiques can be made of pretty much any set of assumptions which implies a specific multiple. Though these critiques are important for robust methodology, I wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism. I believe that given Rethink’s moral weights, a cost-effectiveness multiple on the order of 1000x will be found under most plausible choices for the additional assumptions.
> (Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I’m not sure there’s a better approach without more information about the input distributions.)
Sadly, I don’t think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles—in fact, in general, it’s going to be a product of much higher percentiles (20+).
To see this, imagine a bridge held up by 3 spokes which are independently hammered in, and each spoke has a 5% chance of breaking each year. For the bridge to fall, all 3 spokes need to break. That’s not the same as the bridge having a 5% chance of falling each year—the chance is actually far lower (0.05³ ≈ 0.01%). For the bridge to have a 5% chance of falling each year, each spoke would need a 37% chance of breaking each year (since 0.37³ ≈ 0.05).
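To make the arithmetic concrete, here is the bridge calculation spelled out (the 37% figure is just the cube root of 5%):

```python
# All three independent spokes must break for the bridge to fall.
p_spoke = 0.05
p_bridge = p_spoke ** 3
print(p_bridge)  # 0.000125, i.e. about 0.01% per year

# Working backwards: the per-spoke probability needed for a 5% bridge failure rate.
p_spoke_needed = 0.05 ** (1 / 3)
print(p_spoke_needed)  # ~0.368, i.e. about 37% per year
```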
As you stated, knowledge of the distributions is required to rigorously compute percentiles of this product, but it seems likely that the 5th-percentile case would still yield a multiple several times that of GiveWell’s top charities.
> let’s not forget second order effects
This is a good point, but the second order effects of global health interventions on animals are likely much larger in magnitude. I think some second-order effects of many animal welfare interventions (moral circle expansion) are also positive, and I have no idea how it all shakes out.
> Sadly, I don’t think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles—in fact, in general, it’s going to be a product of much higher percentiles (20+).
As something of an aside, I think this general point was demonstrated and visualised well here.

Disclaimer: I work at RP so may be biased.
> wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism
I wasn’t familiar with these other calculations you mention. I thought you were just relying on the RP studies which seemed flimsy. This extra context makes the case much stronger.
> Sadly, I don’t think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles—in fact, in general, it’s going to be a product of much higher percentiles (20+).
I don’t think that’s true either.
If you’re multiplying normally distributed variables, the general rule is that you add the percentage variances in quadrature.
And I don’t think that converges to a specific percentile like 20+. As more and more uncertainties cancel out, the relative contribution of any given uncertainty goes to zero.
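For what it’s worth, the quadrature rule can be sanity-checked with a quick Monte Carlo simulation (assuming, purely for illustration, three independent N(100, 10) inputs, i.e. a 10% relative spread each):

```python
import math
import random

random.seed(0)
MU, SIGMA, N = 100, 10, 200_000
rel_sd = SIGMA / MU  # 10% relative spread per input

# Sample products of three independent normal variables.
products = [
    random.gauss(MU, SIGMA) * random.gauss(MU, SIGMA) * random.gauss(MU, SIGMA)
    for _ in range(N)
]
mean = sum(products) / N
sd = math.sqrt(sum((x - mean) ** 2 for x in products) / N)

print(sd / mean)              # empirical relative spread of the product, ~0.17
print(math.sqrt(3) * rel_sd)  # quadrature prediction: sqrt(0.1² + 0.1² + 0.1²) ≈ 0.173
```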
IDK. I did explicitly say that my calculation wasn’t correct. And with the information on hand I can’t see how I could’ve done better. Maybe I should’ve fudged it down by one OOM.
Thanks for being charitable :)

On the percentile of a product of normal distributions, I wrote this Python script, which shows that the 5th percentile of a product of normally distributed random variables will in general be a product of much higher percentiles (in this case, the 16th percentile):
```python
import random

MU = 100
SIGMA = 10
N_SAMPLES = 10 ** 6
TARGET_QUANTILE = 0.05
INDIVIDUAL_QUANTILE = 83.55146375  # From Google Sheets NORMINV(0.05, 100, 10)

# Sample products of three independent normal random variables.
samples = [
    random.gauss(MU, SIGMA) * random.gauss(MU, SIGMA) * random.gauss(MU, SIGMA)
    for _ in range(N_SAMPLES)
]
samples.sort()

# The sampled 5th-percentile product
product_quantile = samples[int(N_SAMPLES * TARGET_QUANTILE)]

# The individual value whose cube equals the product's 5th percentile
implied_individual_quantile = product_quantile ** (1 / 3)
print(implied_individual_quantile)  # ~90, which is the *16th* percentile by the empirical rule
```
I apologize for overstating the degree to which this reversion occurs in my original reply (which claimed an individual percentile of 20+ to get a product percentile of 5), but I hope this Python snippet shows that my point stands.
> I did explicitly say that my calculation wasn’t correct. And with the information on hand I can’t see how I could’ve done better.
This is completely fair, and I’m sorry if my previous reply seemed accusatory or like it was piling on. If I were you, I’d probably caveat your analysis’s conclusion to something more like “Under RP’s 5th percentile weights, the cost-effectiveness of cage-free campaigns would probably be lower than that of the best global health interventions”.