I’m a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
Ariel Simnegar
Thanks so much David! :)
Agreed on avoiding harming insects!
Though it’s commendable to try to help insects, putting a bug in the trash might be negative, because that increases insect populations, and insects might lead negative lives: https://www.simonknutsson.com/how-good-or-bad-is-the-life-of-an-insect
Avoiding silk, shellac, and carmine also helps reduce suffering for many insects: https://www.wikihow.fitness/Avoid-Hurting-Insects
Thanks for the compliment :)
When I write “skepticism of formal philosophy”, I more precisely mean “skepticism that philosophical principles can capture all of what’s intuitively important”. Here’s an example of skepticism of formal philosophy from Scott Alexander’s review of What We Owe The Future:
I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity...I realize this is “anti-intellectual” and “defeating the entire point of philosophy”.
You make a good point regarding the relative niche-ness of animal welfare and AI x-risk. I agree that my post’s analogy is crude and there are many reasons why people’s dispositions might favor AI x-risk reduction over animal welfare.
Thanks Gage!
That’s a good point I hadn’t considered! I don’t think that’s OP’s crux, but it is a coherent explanation of their neartermist cause prioritization.
Absolutely! Most of what’s important in this essay is just a restatement of your inspiring CEA from months ago :)
This extra context makes the case much stronger.
Thanks for being charitable :)
On the percentile of a product of normal distributions, I wrote this Python script which shows that the 5th percentile of a product of normally distributed random variables will in general be a product of much higher percentiles (in this case, the 16th percentile):
```python
import random

MU = 100
SIGMA = 10
N_SAMPLES = 10 ** 6
TARGET_QUANTILE = 0.05
INDIVIDUAL_QUANTILE = 83.55146375  # From Google Sheets NORMINV(0.05,100,10)

samples = []
for _ in range(N_SAMPLES):
    r1 = random.gauss(MU, SIGMA)
    r2 = random.gauss(MU, SIGMA)
    r3 = random.gauss(MU, SIGMA)
    samples.append(r1 * r2 * r3)
samples.sort()

# The sampled 5th percentile product
product_quantile = samples[int(N_SAMPLES * TARGET_QUANTILE)]
implied_individual_quantile = product_quantile ** (1 / 3)
print(implied_individual_quantile)  # ~90, which is the *16th* percentile by the empirical rule
```
I apologize for overstating the degree to which this reversion occurs in my original reply (which claimed an individual percentile of 20+ to get a product percentile of 5), but I hope this Python snippet shows that my point stands.
I did explicitly say that my calculation wasn’t correct. And with the information on hand I can’t see how I could’ve done better.
This is completely fair, and I’m sorry if my previous reply seemed accusatory or like it was piling on. If I were you, I’d probably caveat your analysis’s conclusion to something more like “Under RP’s 5th percentile weights, the cost-effectiveness of cage-free campaigns would probably be lower than that of the best global health interventions”.
Hi Hamish! I appreciate your critique.
Others have enumerated many reservations about this critique, which I share. Here I’ll give several more.
why isn’t the “1000x” calculation actually spelled out?
As you’ve seen, given Rethink’s moral weights, many plausible choices for the remaining “made-up” numbers give a cost-effectiveness multiple on the order of 1000x. Vasco Grilo conducted a similar analysis which found a multiple of 1.71k. I didn’t commit to a specific analysis for a few reasons:
I agree with your point that uncertainty is really high, and I don’t want to give a precise multiple which may understate the uncertainty.
Reasonable critiques can be made of pretty much any assumptions made which imply a specific multiple. Though these critiques are important for robust methodology, I wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism. I believe that given Rethink’s moral weights, a cost-effectiveness multiple on the order of 1000x will be found by most plausible choices for the additional assumptions.
(Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I’m not sure there’s a better approach without more information about the input distributions.)
Sadly, I don’t think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles—in fact, in general, it’s going to be a product of much higher percentiles (20+).
To see this, imagine a bridge held up by 3 spokes which are independently hammered in, and each spoke has a 5% chance of breaking each year. For the bridge to fall, all 3 spokes need to break. That’s not the same as the bridge having a 5% chance of falling each year—the chance is actually far lower (about 0.0125%). For the bridge to have a 5% chance of falling each year, each spoke would need to have a 37% chance of breaking each year.
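A two-line sanity check of the bridge arithmetic (just a sketch; the probabilities are those stated above):

```python
p_spoke = 0.05                # each spoke's yearly chance of breaking
p_bridge = p_spoke ** 3       # all three independent spokes must break
print(f"{p_bridge:.6f}")      # 0.000125, i.e. about 0.0125%

# Per-spoke probability needed for the bridge to have a 5% yearly chance of falling
p_needed = 0.05 ** (1 / 3)
print(f"{p_needed:.2f}")      # ~0.37
```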
As you stated, knowledge of distributions is required to rigorously compute percentiles of this product, but it seems likely that the 5th percentile case would still have the multiple several times that of GiveWell top charities.
let’s not forget second order effects
This is a good point, but the second order effects of global health interventions on animals are likely much larger in magnitude. I think some second-order effects of many animal welfare interventions (moral circle expansion) are also positive, and I have no idea how it all shakes out.
Hi Emily,
Thanks so much for your engagement and consideration. I appreciate your openness about the need for more work in tackling these difficult questions.
our current estimates of the gap between marginal animal and human funding opportunities is very different from the one in your post – within one order of magnitude, not three.
Holden has stated that “It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness.” As OP continues researching moral weights, OP’s marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?
Our current moral weights, based in part on Luke Muehlhauser’s past work, are lower.
Along with OP’s neartermist cause prioritization, your comment seems to imply that OP’s moral weights are 1-2 orders of magnitude lower than Rethink’s. If that’s true, that is a massive difference which (depending upon the details) could have big implications for how EA should allocate resources between FAW charities (e.g. chickens vs shrimp) as well as between FAW and GHW.
Does OP plan to reveal their moral weights and/or their methodology for deriving them? It seems that opening up the conversation would be quite beneficial to OP’s objective of furthering moral weight research until uncertainty is reduced enough to act upon.
I’d like to reiterate how much I appreciate your openness to feedback and your reply’s clarification of OP’s disagreements with my post. That said, this reply doesn’t seem to directly answer this post’s headline questions:
How much weight does OP’s theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
Precisely how much more does OP value one unit of a human’s welfare than one unit of another animal’s welfare, just because the former is a human? How does OP derive this tradeoff?
How would OP’s views have to change for OP to prioritize animal welfare in neartermism?
Though you have no obligation to directly answer these questions, I really wish you would. A transparent discussion could update OP, Rethink, and many others on this deeply important topic.
Thanks again for taking the time to engage, and for everything you and OP have done to help others :)
Open Phil Should Allocate Most Neartermist Funding to Animal Welfare
I didn’t cite a single study—I cited a comment which referenced several studies, and quoted one of them.
I agree with your caveat about neuron counts, though I still think people should update upon an order of magnitude difference in neuron count. Do you have a better proposal for comparing the moral worth of a human fetus and an adult chicken?
I think the argument that abortion reduction doesn’t measure up to animal welfare in importance is an isolated demand for rigor. I agree that the best animal welfare interventions are orders of magnitude more cost-effective than the best abortion reduction interventions. However, you could say the same for GiveWell top charities, Charity Entrepreneurship global health charities, or any other charity in global health.
A more precise reference class would be global health charities that reduce child mortality, like AMF.
Denial of fetal personhood typically leads to implausible conclusions regarding how we may treat infants and severely disabled humans, and arguably to a denial of human equality even among non-disabled adults. Even if these conclusions are accepted, most people would accept that these are appreciable bullets to bite – especially for those effective altruists who are invested in preventing infant mortality.
Agreed. In fact, animal-inclusive altruists make very similar arguments for why animals merit moral consideration. As Dale points out in “Blind Spots: Compartmentalizing”, it’s unclear why these arguments from marginal cases would apply for animals and not for fetuses.
Arguments that abortion is permissible even if the child has full moral status typically rely on the claims that abortion is letting die, rather than killing, and that there is no duty to assist the child to rescue it from death.
Not only is it killing—if the fetus is sentient, it’s likely quite painful. Here are some descriptions of surgical abortion methods:
Labor Induction (20+ weeks gestation): The fetus is administered a lethal injection with no anesthesia, often of potassium chloride, which causes cardiac arrest and death within a minute. Potassium chloride is also used (with anesthesia) for the death penalty. If the fetus is sentient, this is “excruciatingly painful” because potassium chloride “inflames the potassium ions in the sensory nerve fibers, literally burning up the veins as it travels to the heart.”
Dilation & Evacuation (13-24 weeks): The fetus’s arms and legs are torn off by forceps before the fetus’s head is crushed.
(As titotal points out, the vast majority of abortions occur before this. Still, hundreds of thousands of these surgical procedures occur each year.)
Thomson’s violinist
A further objection to Thomson’s violinist is that the person wakes up with the violinist attached through no action of their own. In cases of consensual sex, the risk that conception could occur is known. A more precise analogy would be rolling a die while knowing that if the die lands on 1, then the violinist will be attached to you. In that case, unplugging the violinist seems wrong.
According to the sources on wikipedia, Brain synapses in foetuses do not form until week 17, and the first evidence of “minimal consciousness and ability to feel pain” does not occur until week 30.
This comment from a pro-choice author on my post on abortion discusses lines of evidence for the different views on when fetal pain arises. It seems to corroborate Calum’s perspective that Wikipedia editors are biased. From one of its linked studies (Derbyshire et al): “Overall, the evidence, and a balanced reading of that evidence, points towards an immediate and unreflective pain experience mediated by the developing function of the nervous system from as early as 12 weeks.”
Even if we grant some moral weight to a 15 week old foetus (which I’m dubious of), it’s hard to see a logical reason why it would approach the morally significance of an adult chicken.
A 15-week fetus has an order of magnitude more neurons than an adult chicken. (Red junglefowl, the wild relative of chickens, have 221 million neurons, while 13-week fetuses have 3 billion brain cells. Since humans have a near 1:1 neuron-glia ratio, a 13-week fetus’s neuron count should be an order of magnitude greater than a chicken’s.) A chicken also has an underdeveloped cortex relative to mammals, which somewhat corresponds to the fetus’s developing cortex.
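For what it’s worth, here’s the back-of-envelope arithmetic using the figures cited above (the near 1:1 neuron-glia split is the assumption doing the work):

```python
# Rough neuron-count comparison using the figures cited in the text.
fetal_brain_cells = 3_000_000_000  # ~3 billion brain cells at 13 weeks
neuron_fraction = 0.5              # assumes the near 1:1 human neuron-glia ratio
chicken_neurons = 221_000_000      # red junglefowl

fetal_neurons = fetal_brain_cells * neuron_fraction  # ~1.5 billion
ratio = fetal_neurons / chicken_neurons
print(round(ratio, 1))  # ~6.8x, approaching an order of magnitude
```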
If anything, I’d bet in favor of a 15 week fetus having more moral significance than an adult chicken rather than less.
organ transplant is a systemic problem and by donating you are helping kickstart a trend that fixes the system. However, having more kidney donors, while a boost in overall QALY equivalent to donating a few thousand dollars, is more than likely to harm people who need kidney transplants in the long run.
...
By addressing the organ transplant problem now, you are actively diminishing the pool of money and the pool of candidates for teams working to improve organ transplants.
Thanks! Might be good to also edit your post to put this summary at the top so that readers immediately see it.
+1 to the interest in these reading lists.
Because my job is very time-consuming, I haven’t spent much time trying to understand the state of the art in AI risk. If there was a ready-made reading list I could devote 2-3 hours per week to, such that it’d take me a few months to learn the basic context of AI risk, that’d be great.
Yes, I agree with that caveat.
(Disclaimer: I take RP’s moral weights at face value, and am thus inclined to defend what I consider to be their logical implications.)
Specifically with respect to cause prioritization between global health and animal welfare, do you think the evidence we’ve seen so far is enough to conclude that animal welfare interventions should most likely be prioritized over global health?
In “Worldview Diversification” (2016), Holden Karnofsky wrote that “If one values humans 10-100x as much [as chickens], this still implies that corporate campaigns are a far better use of funds (100-1,000x) [than AMF].” In 2023, Vasco Grilo replicated this finding by using RP’s weights to find corporate campaigns 1.7k times as effective.
Let’s say RP’s moral weights are wrong by an order of magnitude, and chickens’ experiences actually only have 3% of the moral weight of human experiences. Let’s say further that some remarkably non-hedonic preference view is true, where hedonic goods/bads only account for 10% of welfare. Still, corporate campaigns would be an order of magnitude more effective than the best global health interventions.
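A sketch of that sensitivity check, taking Grilo’s ~1.7k multiple as the baseline (the two discount factors are the hypotheticals named above):

```python
baseline_multiple = 1700     # corporate campaigns vs. best global health (Grilo's estimate)
moral_weight_discount = 0.1  # RP's weights wrong by an order of magnitude
hedonic_fraction = 0.1       # hedonic goods/bads account for only 10% of welfare

adjusted = baseline_multiple * moral_weight_discount * hedonic_fraction
print(round(adjusted))  # 17, still an order of magnitude above global health
```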
While I agree with you that it would be premature to conclude with high confidence that global welfare is negative, I think the conclusions of RP’s research with respect to cause prioritization still hold up after incorporating the arguments you’ve enumerated in your post.
I appreciate that, and I agree with you!
However, as far as I’m aware, EA-recommended family planning interventions do decrease the number of children people have. If these charities benefit farmed animals (and I believe they do), decreasing the human population is where these charities’ benefits for farmed animals come from.
I’ve estimated that both MHI and FEM prevent on the order of 100 pregnancies for each maternal life they save. Unless my estimates are way too high (please let me know if they’re wrong; I’m happy to update!), even if only a very small percentage of these pregnancies would have resulted in counterfactual births, both of these charities would still on net decrease the number of children people have.
It’s noteworthy that if the procreation asymmetry is rejected, the sign of family planning interventions is the opposite of the sign of lifesaving interventions like AMF. Thus, those who support AMF might not support family planning interventions, and vice versa.
Comparing area was intended :)
If it’s unclear, I can add a note which says the circles should be compared by area.