Hi Hamish! I appreciate your critique.

Others have enumerated many reservations about this critique, which I agree with; here I'll give several more.
why isn't the "1000x" calculation actually spelled out?
As you've seen, given Rethink's moral weights, many plausible choices for the remaining "made-up" numbers give a cost-effectiveness multiple on the order of 1000x. Vasco Grilo conducted a similar analysis which found a multiple of 1.71k. I didn't commit to a specific analysis for a few reasons:
I agree with your point that uncertainty is really high, and I don't want to give a precise multiple which may understate the uncertainty.
Reasonable critiques can be made of pretty much any set of assumptions that implies a specific multiple. Though these critiques are important for robust methodology, I wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism. I believe that given Rethink's moral weights, a cost-effectiveness multiple on the order of 1000x will be found by most plausible choices for the additional assumptions.
(Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I'm not sure there's a better approach without more information about the input distributions.)
Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles; in fact, in general, it's going to be a product of much higher percentiles (20+).
To see this, imagine a bridge held up by 3 spokes which are independently hammered in, and each spoke has a 5% chance of breaking each year. For the bridge to fall, all 3 spokes need to break. That's not the same as the bridge having a 5% chance of falling each year; the chance is actually far lower (about 0.0125%). For the bridge to have a 5% chance of falling each year, each spoke would need a 37% chance of breaking each year.
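The arithmetic behind this hypothetical bridge example can be checked in a few lines (the 5% and the 3 spokes are just the numbers from the example above):

```python
# Three independent spokes, each with a 5% chance of breaking per year.
p_spoke = 0.05
p_bridge_falls = p_spoke ** 3  # the bridge falls only if all three break
print(round(p_bridge_falls, 6))  # 0.000125, i.e. about 0.0125% -- far below 5%

# For the bridge to have a 5% chance of falling, each spoke would need:
p_needed = 0.05 ** (1 / 3)
print(round(p_needed, 2))  # 0.37, i.e. a 37% chance per spoke
```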
As you stated, knowledge of the distributions is required to rigorously compute percentiles of this product, but it seems likely that the 5th-percentile case would still show a multiple several times that of GiveWell top charities.
let's not forget second order effects
This is a good point, but the second-order effects of global health interventions on animals are likely much larger in magnitude. I think some second-order effects of many animal welfare interventions (moral circle expansion) are also positive, and I have no idea how it all shakes out.
Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles; in fact, in general, it's going to be a product of much higher percentiles (20+).
As something of an aside, I think this general point was demonstrated and visualised well here.

Disclaimer: I work at RP so may be biased.
wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism
I wasn't familiar with these other calculations you mention. I thought you were just relying on the RP studies, which seemed flimsy. This extra context makes the case much stronger.
Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles; in fact, in general, it's going to be a product of much higher percentiles (20+).
I don't think that's true either.
If you're multiplying normally distributed random variables, the general rule is that the percentage uncertainties add in quadrature.
Which I don't think converges to a specific percentile like 20+. As more and more uncertainties cancel out, the relative contribution of any given uncertainty goes to zero.
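For what it's worth, the two claims can be reconciled with a rough log-space sketch: if a product has n independent factors and their relative uncertainties add in quadrature, then hitting the product's 5th percentile corresponds to each factor sitting at z = -1.645/√n, which is a different percentile for each n. This assumes iid, roughly normal factors with small relative uncertainty; `statistics.NormalDist` is only used here for the normal CDF:

```python
from statistics import NormalDist

std_normal = NormalDist()
z_5th = std_normal.inv_cdf(0.05)  # ~ -1.645

for n in (1, 2, 3, 10):
    # If n equal relative uncertainties add in quadrature, the product's
    # 5th percentile corresponds to each factor at z_5th / sqrt(n).
    per_factor_z = z_5th / n ** 0.5
    print(n, round(100 * std_normal.cdf(per_factor_z), 1))
```

Under these assumptions, three factors land around the 17th percentile per factor, while the implied per-factor percentile drifts toward the 50th as n grows, so there is indeed no single fixed percentile.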
IDK. I did explicitly say that my calculation wasn't correct. And with the information on hand I can't see how I could've done better. Maybe I should've fudged it down by one OOM.
Thanks for being charitable :)

On the percentile of a product of normal distributions, I wrote this Python script, which shows that the 5th percentile of a product of normally distributed random variables will in general be a product of much higher percentiles (in this case, the 16th percentile):
import random

MU = 100
SIGMA = 10
N_SAMPLES = 10 ** 6
TARGET_QUANTILE = 0.05
INDIVIDUAL_QUANTILE = 83.55146375  # 5th percentile of one input; from Google Sheets NORMINV(0.05,100,10)

samples = []
for _ in range(N_SAMPLES):
    r1 = random.gauss(MU, SIGMA)
    r2 = random.gauss(MU, SIGMA)
    r3 = random.gauss(MU, SIGMA)
    samples.append(r1 * r2 * r3)
samples.sort()

# The sampled 5th-percentile product
product_quantile = samples[int(N_SAMPLES * TARGET_QUANTILE)]
implied_individual_quantile = product_quantile ** (1 / 3)
print(implied_individual_quantile)  # ~90, which is the *16th* percentile by the empirical rule
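As a sanity check on that final comment, the implied percentile can be computed exactly from the normal CDF rather than the empirical rule (the 90 here is the approximate cube root obtained from the simulation):

```python
from statistics import NormalDist

MU, SIGMA = 100, 10
implied_individual_quantile = 90.0  # approximate value from the simulation above

percentile = NormalDist(MU, SIGMA).cdf(implied_individual_quantile)
print(round(100 * percentile, 1))  # 15.9, i.e. roughly the 16th percentile
```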
I apologize for overstating the degree to which this reversion occurs in my original reply (which claimed an individual percentile of 20+ to get a product percentile of 5), but I hope this Python snippet shows that my point stands.
I did explicitly say that my calculation wasn't correct. And with the information on hand I can't see how I could've done better.
This is completely fair, and I'm sorry if my previous reply seemed accusatory or like it was piling on. If I were you, I'd probably caveat your analysis's conclusion to something more like "Under RP's 5th percentile weights, the cost-effectiveness of cage-free campaigns would probably be lower than that of the best global health interventions".