I’m a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
Ariel Simnegar
Open Phil Should Allocate Most Neartermist Funding to Animal Welfare
The Scale of Fetal Suffering in Late-Term Abortions
Insightful and well-argued post!
I found the hypothetical about NYT and CEA helpful for reasoning from first principles about acceptable journalistic practice. I came out of it empathizing more with Nonlinear’s feelings before and during the publication of Ben Pace’s article than I previously had.
Regarding Ben Pace’s explicit seeking of negative information and unwillingness to delay posting, you updated me from seeing these as simple mistakes to considering them egregiously bad.
Great point that an article author can’t just state their disclaimers at the top and expect readers to rationally recalibrate themselves and ignore the vibes of the evidence’s presentation.
I found it hard to update throughout this story because the presentation of evidence from both parties was (understandably) biased. As you pointed out, “Sharing Information About Nonlinear” presented sometimes-true claims in a way that makes the reader unsympathetic to Nonlinear. Nonlinear’s response presented compelling rebuttals in a way calculated to increase the reader’s sympathy for Nonlinear. Both articles intentionally mix the evidence and the vibes in a way that makes it difficult for readers to separate the two. (I don’t blame Nonlinear’s response for this as much, since it was tit for tat.)
Thanks again for putting so much time and effort into this, and I’m excited to see what you write next.
Eliezer’s perspective on animal consciousness is especially frustrating because of the real harm it’s caused to rationalists’ openness to caring about animal welfare.
Rationalists are much more likely than highly engaged EAs to either dismiss animal welfare outright, or just not think about it since AI x-risk is “obviously” more important. (For a case study, just look at how differently this author’s post on fish farming was received on the EA Forum versus LessWrong.) Eliezer-style arguments about the “implausibility” of animal suffering abound. Discussions of the implications of AI outcomes for farmed or wild animals (i.e. almost all currently existing sentient beings) are few and far between.
Unlike Eliezer’s overconfidence in physicalism and FDT, Eliezer’s overconfidence in animals not mattering has serious real-world effects. Eliezer’s views have huge influence on rationalist culture, which has significant influence on those who could steer future TAI. If the alignment problem is solved, it’ll be really important for those who steer future TAI to care about animals, and to be motivated to use TAI to improve animal welfare.
Well stated. This post’s heart is in the right place, and I think some of its proposals are non-accidentally correct. However, it seems that many of the post’s suggestions boil down to “dilute what it means to be EA to just being part of common left-wing thought”. Here’s a sampling of the post’s recommendations that provoke this impression:
EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews
EAs should not assume that we must attach a number to everything, and should be curious about why most academic and professional communities do not
EA institutions should select for diversity
Previous EA involvement should not be a necessary condition to apply for specific roles, and the job postings should not assume that all applicants will identify with the label “EA”
EA institutions should hire more people who have had little to no involvement with the EA community providing that they care about doing the most good
EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:
Subject-matter experts from outside EA
Researchers, practitioners, and stakeholders from outside of our elite communities
For instance, we need a far greater input from people from Indigenous communities and the Global South
EAs should consider the impact of EA’s cultural, historical, and disciplinary roots on its paradigmatic methods, assumptions, and prioritisations
Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality
Tobacco?
Gambling?
Mass surveillance?
Arms manufacturing?
Cryptocurrency?
Fossil fuels?
Within 5 years, EA funding decisions should be made collectively
EA institutions should be democratised within 3 years, with strategic, funding, and hiring policy decisions being made via democratic processes rather than by the institute director or CEO
EAs should make an effort to become more aware of EA’s cultural links to eugenic, reactionary and right-wing accelerationist politics, and take steps to identify areas of overlap or inheritance in order to avoid indirectly supporting such views or inadvertently accepting their framings
Thanks for this analysis, Vasco!
A recurring motif in your posts is your willingness to be explicit about your uncertainty regarding the sign of the net impact of certain cause areas’ interventions.
When playing the “game” of estimating an intervention’s net impact, EAs typically apply the set of “house rules” within an intervention’s cause area, and ignore “game extensions” which incorporate rules from other cause areas. Sadly, we often do this even when the “game extensions” involve crucial considerations which can and do flip the sign of interventions’ net impact. Examples:
Global health & development charity analyses often ignore population ethics. Under many reasonable views in population ethics, much of these interventions’ impact comes from their effect on human population size. This leads to situations where people fund lifesaving charities, which increase the human population, while also funding family planning charities, which reduce it.
These analyses often neglect the effect of changing human population size on farmed animals.
When analyses of charities in global health or farmed animal welfare do incorporate that effect, they often neglect the intervention’s effect on wild animals or on the long-term future, either of which can utterly dominate.
You seem to believe that one can’t just play “house rules”—if you want to play this game properly, you have to include all of the game extensions, from the effect of malaria charities on the malaria-carrying mosquitos themselves to the effect of reducing x-risk on long-term future animal welfare. Otherwise, you risk making illegal moves and losing when you think you’re winning.
I think you should further defend this view, perhaps by writing a post which is explicit about it. If this is your view, then a sizeable portion of EA money is currently going to the incinerator each year. (Also, EAs are working against each other by donating to lifesaving charities and family planning charities, among many places.) If some EAs are convinced by your post to switch to your preferred charities, then taking the time to write your post will have been highly cost-effective.
Thanks for putting this post together. It takes fortitude to commit so much to an altruistic project, and it takes integrity to make this decision and write up this explanation.
A Case for Voluntary Abortion Reduction
Hi Hamish! I appreciate your critique.
Others have enumerated many reservations about this critique, which I agree with. Here I’ll give several more.
why isn’t the “1000x” calculation actually spelled out?
As you’ve seen, given Rethink’s moral weights, many plausible choices for the remaining “made-up” numbers give a cost-effectiveness multiple on the order of 1000x. Vasco Grilo conducted a similar analysis which found a multiple of 1.71k. I didn’t commit to a specific analysis for a few reasons:
I agree with your point that uncertainty is really high, and I don’t want to give a precise multiple which may understate the uncertainty.
Reasonable critiques can be made of pretty much any assumptions made which imply a specific multiple. Though these critiques are important for robust methodology, I wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism. I believe that given Rethink’s moral weights, a cost-effectiveness multiple on the order of 1000x will be found by most plausible choices for the additional assumptions.
(Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I’m not sure there’s a better approach without more information about the input distributions.)
Sadly, I don’t think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles—in fact, in general, it’s going to be a product of much higher percentiles (20+).
To see this, imagine a bridge held up by 3 spokes which are independently hammered in, and each spoke has a 5% chance of breaking each year. For the bridge to fall, all 3 spokes need to break. That’s not the same as the bridge having a 5% chance of falling each year; the chance is actually far lower (~0.01%). For the bridge to have a 5% chance of falling each year, each spoke would need a 37% chance of breaking each year.
As you stated, knowledge of the distributions is required to rigorously compute percentiles of this product, but it seems likely that the 5th percentile case would still have a multiple several times that of GiveWell top charities.
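The percentile point is easy to check numerically. Here is a quick sketch (my illustration, not from the original comments; the lognormal(0, 1) inputs are an arbitrary stand-in for the unknown input distributions) reproducing both the bridge arithmetic and the gap between "product of 5th percentiles" and "5th percentile of the product":

```python
import math
import random

random.seed(0)
N = 200_000

# The bridge: three independent spokes, each with a 5% chance of breaking
# per year; the bridge falls only if all three break.
bridge_falls = 0.05 ** 3              # 0.000125, i.e. ~0.01%
per_spoke_for_5pct = 0.05 ** (1 / 3)  # ~0.37: each spoke would need ~37%

# The same effect for products of positive random variables: the product of
# the inputs' 5th percentiles sits far below the product's true 5th percentile.
products = sorted(
    random.lognormvariate(0, 1)
    * random.lognormvariate(0, 1)
    * random.lognormvariate(0, 1)
    for _ in range(N)
)
p5_of_product = products[int(0.05 * N)]
p5_each = math.exp(-1.6449)           # 5th percentile of a lognormal(0, 1)
product_of_p5s = p5_each ** 3

print(f"bridge falls: {bridge_falls:.4%}; per-spoke for 5%: {per_spoke_for_5pct:.0%}")
print(f"5th percentile of product:  {p5_of_product:.3f}")
print(f"product of 5th percentiles: {product_of_p5s:.3f}")  # several times lower
```

With these particular inputs, the product of the 5th percentiles lands near the 0.2nd percentile of the product’s distribution, which is the sense in which multiplying 5th percentiles overstates how extreme the tail case really is.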
let’s not forget second order effects
This is a good point, but the second order effects of global health interventions on animals are likely much larger in magnitude. I think some second-order effects of many animal welfare interventions (moral circle expansion) are also positive, and I have no idea how it all shakes out.
I agree with much of your underlying frustration, Geoffrey, but I worry that explicitly anti-woke sentiment could encourage the perception that the woke are “not welcome here”.
So many people find and contribute to EA from woke/woke-adjacent circles like climate change activism, welfare capitalism, and animal welfare. Even if you and I disagree with their ideological views, they’re still trying to improve the world, the same way as you or I.
I hope that, much the same way that EA is influenced by wokism, woke EAs are influenced by EA to refine the ethics and epistemics of their ideology. I’d like to throw in a (perhaps naive) vote for not explicitly alienating woke EAs if at all possible.
I also think it’s quite reasonable for a religious person to give secular arguments for worldviews which also happen to be held in their religion.
For example, if Davis was making a humanistic argument for why people should take Giving What We Can’s 10% pledge, then accusing him of disingenuously trying to sneak in the “Catholic agenda” of giving a tithe to the poor doesn’t seem fair.
Or imagine if a Jain was giving a humanistic argument for why people should be vegetarian, and they were accused of disingenuously trying to sneak in the “Jain agenda” of animal welfare.
I didn’t cite a single study—I cited a comment which referenced several studies, and quoted one of them.
I agree with your caveat about neuron counts, though I still think people should update upon an order of magnitude difference in neuron count. Do you have a better proposal for comparing the moral worth of a human fetus and an adult chicken?
I think the argument that abortion reduction doesn’t measure up to animal welfare in importance is an isolated demand for rigor. I agree that the best animal welfare interventions are orders of magnitude more cost-effective than the best abortion reduction interventions. However, you could say the same for GiveWell top charities, Charity Entrepreneurship global health charities, or any other charity in global health.
A more precise reference class would be global health charities that reduce child mortality, like AMF.
organ transplant is a systemic problem and by donating you are helping kickstart a trend that fixes the system. However, having more kidney donors, while a boost in overall QALY equivalent to donating a few thousand dollars, is more than likely to harm people who need kidney transplants in the long run.
...
By addressing the organ transplant problem now, you are actively diminishing the pool of money and the pool of candidates for teams working to improve organ transplants.
Sure! I think “Most people endorse some form of ‘eugenics’” would fit EA forum norms better.
The sign of the effect of FEM seems to rely crucially on a very high credence in the person-affecting view, where the interests of future people are not considered.
In Kano, Anambra, and Ondo, FEM prevents one maternal death by preventing 281, 268, and 249 unintended pregnancies respectively. Even if only ~40% of these unintended pregnancies would have counterfactually been carried to term (due to abortion, replacement, and other factors), that still means preventing one maternal death prevents the creation of ~100 human beings. In other words, FEM’s intervention prevents ~100x as much human life experience as it creates by averting a maternal death. If one desires to maximize expected choice-worthiness under moral uncertainty, assuming the value of human experience is independent of the person-affecting view, one must be ~99% confident that the person-affecting view is true for FEM to be net positive.
However, many EAs, especially longtermists, argue that the person-affecting view is unlikely to be true. For example, Will MacAskill spends most of Chapter 8 of What We Owe The Future arguing that “all proposed defences of the intuition of neutrality [i.e. person-affecting view] suffer from devastating objections”. Toby Ord writes in The Precipice p. 263 that “Any plausible account of population ethics will involve…making sacrifices on behalf of merely possible people.”
If there’s a significant probability that the person-affecting view may be false, then FEM’s effect could in reality be up to 100x as negative as its effect on mothers is positive.
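As a rough check of the ~99% figure, here is the arithmetic sketched in code (my sketch, using the comment’s simplifying assumption that a prevented future life is weighted equally to an averted death when the person-affecting view is false):

```python
# Average unintended pregnancies prevented per maternal death averted,
# using the Kano, Anambra, and Ondo figures above.
pregnancies_per_death = sum([281, 268, 249]) / 3  # ~266
carried_to_term = 0.40                            # ~40% counterfactually carried to term
lives_prevented = pregnancies_per_death * carried_to_term  # ~106, i.e. "~100"

# Expected choice-worthiness with credence p in the person-affecting view:
#   p * 1 (maternal death averted) - (1 - p) * lives_prevented
# Setting this to zero gives the break-even credence.
break_even = lives_prevented / (lives_prevented + 1)
print(f"~{lives_prevented:.0f} future people prevented per death averted")
print(f"break-even credence in the person-affecting view: {break_even:.1%}")  # ~99%
```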
Even if one rejects the person-affecting view but supports FEM for its (definitely positive) effects on farmed animals, they should then be sure not to support lifesaving charities like AMF, which have the opposite effect on farmed animals. They should also regard FEM saving mothers’ lives as an unfortunate side-effect of its intervention, because saving mothers’ lives is bad for the farmed animals the mothers eat.
Also, more of the farmed animals helped by reducing the human population don’t exist yet and will be created in the future. So it’s curious that one would account for the interests of farmed animals that don’t exist yet, but ignore the interests of human beings that don’t exist yet. (To be fair, there are views like the procreation asymmetry which could justify this.)
On the whole, whether or not there’s a significant probability that the person-affecting view may be false seems to be a crucial consideration for the sign of the effect of family planning charities such as FEM and MHI. I’d be interested in how Rethink Priorities would approach incorporating moral uncertainty regarding the person-affecting view into its report on FEM.
Edit: Added “assuming the value of human experience is independent of the person-affecting view” for precision, as MichaelStJules pointed out.
Thank you Ben and Sarah for your post. Your commitment to saving and improving the lives of mothers in extreme poverty is very admirable.
we acknowledge that certain philosophical frameworks that prioritise utility maximisation may disagree with the impactfulness of family planning work.
These uncertainties may matter more than most people realize.
Many EAs believe that creating happy lives is good. Will MacAskill writes that “if your children have lives that are sufficiently good, then your decision to have them is good for them.”[1] Toby Ord writes that “Any plausible account of population ethics will involve…making sacrifices on behalf of merely possible people.”[2] Both Will and Toby place moral weight on the non-person-affecting view, where preventing the creation of a happy person is as bad as killing them!
Using moral uncertainty, let’s say there’s only a 1% chance that the non-person-affecting view is true. In sub-Saharan Africa, 37% of unintended pregnancies end in abortion, leading to 8.0 million abortions per year.[3] This implies there are 21.6 million unintended pregnancies per year, leading to 200k maternal deaths.[4] To prevent one maternal death, one would have to prevent 108 unintended pregnancies on average. Even with only a 1% chance that the non-person-affecting view is true, this intervention is still net negative, because it causes 1.08 deaths (in expectation) to avert one maternal death.
The non-person-affecting view is mainstream among longtermists, and even non-longtermists may be willing to grant a small probability that we should care about future people.
Possible Objections
Some prevented unintended pregnancies will be replaced by others, so preventing one unintended pregnancy doesn’t necessarily mean preventing one future person
This is a legitimate point, not factored into the above analysis, but all it does is change the maximum credence one can have in the non-person-affecting view for the intervention to not be net negative. 50% replacement means a <2% credence; 75% replacement means a <4% credence.
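The figures above, including the replacement-adjusted thresholds, can be sketched as follows (my arithmetic check, not part of the original comment):

```python
# Sub-Saharan Africa figures from the comment above.
abortions = 8.0e6          # unintended pregnancies ending in abortion, per year
abortion_share = 0.37      # share of unintended pregnancies ending in abortion
maternal_deaths = 200_000  # maternal deaths per year from unintended pregnancies

unintended = abortions / abortion_share         # ~21.6 million
pregs_per_death = unintended / maternal_deaths  # ~108

# With credence c in the non-person-affecting view, the expected number of
# "deaths" per maternal death averted is c * pregs_per_death.
print(f"at c = 1%: {0.01 * pregs_per_death:.2f} expected deaths")  # ~1.08 > 1

# Replacement: if a fraction r of prevented pregnancies would be replaced,
# only (1 - r) * pregs_per_death future people are actually prevented, so the
# break-even credence rises to 1 / ((1 - r) * pregs_per_death).
for r in (0.0, 0.5, 0.75):
    c_max = 1 / ((1 - r) * pregs_per_death)
    print(f"replacement {r:.0%}: break-even credence ~{c_max:.2%}")
```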
People in extreme poverty may live net negative lives
This is debatable. Either way, if one believes this, they’re arguing that many EA charities in global health and development are net harmful, because they save the lives of people in extreme poverty. This would also mean MHI’s purpose of saving the lives of mothers in extreme poverty is a bad thing.
Saving human lives is net harmful, because of the meat-eater problem
This is also debatable, because humans seem to reduce wild invertebrate populations, and there are many ethical arguments pointing to these invertebrates living net negative lives. There’s some reason to believe that this dominates our (horrific) treatment of farmed animals. As before, if you believe this, you should protest most EA charities in global health and development, and disagree with MHI’s purpose of saving the lives of mothers in extreme poverty.
How does MHI incorporate moral uncertainty into its analyses of the net impact of its interventions?
I’d like to give some context for why I disagree.
Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he’s admitted that “I truly sucked back then”. However, I think EA causes are more important than political differences. It’s valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we’re being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.
I also think Hanania has excellent takes on most issues, and that’s because he’s the most intellectually honest blogger I’ve encountered. I think Hanania likes EA because he’s willing to admit that he’s imperfect, unlike EA’s critics who would rather feel good about themselves than actually help others.
More broadly, I think we could be doing more to attract people who don’t hold typical Bay Area beliefs. Just 3% of EAs identify as right wing. I think there are several reasons why, all else equal, it would be better to have more political diversity:
In this era of political polarization, it would be a travesty for EA issues to become partisan.
All else equal, political diversity is good for community epistemics. In that regard, it should be encouraged for much the same reason that cultural and racial diversity are encouraged.
If we want EA to be a global social movement, we need to show that one can be EA even if they hold beliefs on other issues we find repugnant. I live in Panama for my job. When I arrived here, I had a culture shock from how backwards many people’s views are on racism and sexism. If we can’t be friends with the person next door with bad views, how are we going to make allies globally?
I would hope that in a community committed to impartiality, one need not have to make the case for why it’s worth caring about the welfare of beings that happen not to be members of our species
I think EA’s cause prioritization would look very different if it genuinely were a “community committed to impartiality” regarding species. Under impartiality, both of these interventions are on the order of 1000x as cost-effective as GiveWell top charities.[1] (One could avoid this conclusion by believing pleasure/pain only account for on the order of 0.1% of welfare, but this is a deeply unusual view and is empirically dubious.[2]) Open Phil (OP) has recognized this since 2016.[3]
However, to this day, OP has only allocated 17% of its annual neartermist funding to animal welfare.[4] If OP really believes animal welfare is ~1000x as cost-effective as GiveWell top charities, it’s difficult to understand how this allocation of funding could possibly be morally justified. Yes, many caveats could be made:
OP could find moral weight estimation methodologically dubious. (This would be strange given that OP funded RP’s moral weights project.[5])
OP could oppose outsize allocations to interventions which depend upon controversial views, such as being impartial about species membership. (This would be strange given that in 2017, 2019, and 2021, OP allocated a majority of longtermist funding to AI x-risk reduction, even though the view that AI is an x-risk is similarly controversial.)
As remarked above, OP could hold a deeply unusual view where almost none of welfare is accounted for by pleasure/pain, which would be quite inconsistent with the existing evidence.
OP could believe animal welfare has faster diminishing marginal returns than global health. This is probably true, but if OP believes animal welfare has ~1000x cost-effectiveness, it seems that OP would be trying to hyperaggressively grow the capacity of animal welfare charities more than it’s currently doing.
Why doesn’t OP allocate a majority of neartermist funding to animal welfare? I don’t know. My guess is that key decisionmakers aren’t “committed to impartiality” regarding species. Holden Karnofsky has said as much: “My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern.”[6]
(Meme by me)
So, what to do? For one, it would be extremely helpful for OP to clarify their views on the questions relevant to animal welfare (how much of welfare is explained by hedonism, should one be impartial regarding species), what the cruxes are that would change their minds regarding cause prioritization, and the counterpoints which explain why they haven’t changed their minds. (I’ll be publishing a post within the next few months with the above arguments.)
I wish you were right that EA is a “community committed to impartiality” regarding species. However, empirically, it seems that’s not the case.
[1] Vasco Grilo (2023). “Prioritising animal welfare over global health and development?” https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and
[2] Severe pain, such as cluster headaches, is associated with greatly increased suicidality. Lee et al. (2019). “Increased suicidality in patients with cluster headache”. https://pubmed.ncbi.nlm.nih.gov/31018651/
[3] “If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x).” Holden Karnofsky (2017). “Worldview Diversification”. https://www.openphilanthropy.org/research/worldview-diversification/
[4] Ariel Simnegar (2023). “Open Phil Grants Analysis”. https://github.com/ariel-simnegar/open-phil-grants-analysis/blob/main/open_phil_grants_analysis.ipynb
[5] Open Philanthropy. “Rethink Priorities — Moral Patienthood and Moral Weight Research”. https://www.openphilanthropy.org/grants/rethink-priorities-moral-patienthood-and-moral-weight-research/
[6] Holden Karnofsky (2017). “Radical Empathy”. https://www.openphilanthropy.org/research/radical-empathy/
“The human health gains are small relative to the harms to animals”
I’d argue further that even if the human health benefits are large in the space of human health outcomes, they are so tiny in comparison to the harm an omnivorous diet causes to animals that they are scarcely worth discussing.
It takes a few seconds to taste a bite of a dish. Let’s assume that a portion of chicken is eaten in 24 bites. If, in order for us to eat such a portion, there is a chicken that has had to suffer 9 days, and has been deprived of 2 years of life, how much harm is inflicted on the chicken for each of those 24 bites? Doing the math, the result is as follows. In exchange for each brief moment of tasting its flesh, the chicken has had to suffer on average for about 9 hours on a farm. And it has been deprived of a month’s life. That’s just for every single bite. Every second of our taste pleasure is very expensive for the animal that is eaten.
Oscar Horta, Making a Stand for Animals (2022)
This isn’t hyperbole. Here’s a description of the experiences of the chickens most people eat:
Broiler chickens are the chickens raised for meat, rather than the ones raised to lay eggs. From the moment of birth, they endure cruel transport in small crates. They’re sent to overcrowded sheds where they have no space to turn around, nothing interesting to do, and no ability to express their natural behaviors. They’re subject to constant disease, injury, and violence, and to artificial lighting that causes torturous sleep deprivation. They spend their days barely able to move, living in feces and ammonia. At slaughter, they’re crammed into small crates, in which many chickens die and many more endure bone-breaking injuries and weather extremes. Those that survive are then slaughtered, some stunned, some with their throats slit while conscious, and some boiled alive while fully conscious.
The idea that the potential human health benefits of meat consumption could possibly be decisive on the question of whether it’s ethical to eat meat is a fantasy.
If we found out that torturing a baby for 9 hours created a cup’s worth of baby tears which, when drank regularly, extended the human lifespan by 20 years, we would obviously not do it.
Thankfully, we can eat chicken and cause the same harm to a being of similar intelligence, with far less positive impact on our health.
Hi Emily,
Thanks so much for your engagement and consideration. I appreciate your openness about the need for more work in tackling these difficult questions.
Holden has stated that “It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness.” As OP continues researching moral weights, OP’s marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?
Along with OP’s neartermist cause prioritization, your comment seems to imply that OP’s moral weights are 1-2 orders of magnitude lower than Rethink’s. If that’s true, that is a massive difference which (depending upon the details) could have big implications for how EA should allocate resources between FAW charities (e.g. chickens vs shrimp) as well as between FAW and GHW.
Does OP plan to reveal their moral weights and/or their methodology for deriving them? It seems that opening up the conversation would be quite beneficial to OP’s objective of furthering moral weight research until uncertainty is reduced enough to act upon.
I’d like to reiterate how much I appreciate your openness to feedback and your reply’s clarification of OP’s disagreements with my post. That said, this reply doesn’t seem to directly answer this post’s headline questions:
How much weight does OP’s theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
Precisely how much more does OP value one unit of a human’s welfare than one unit of another animal’s welfare, just because the former is a human? How does OP derive this tradeoff?
How would OP’s views have to change for OP to prioritize animal welfare in neartermism?
Though you have no obligation to directly answer these questions, I really wish you would. A transparent discussion could update OP, Rethink, and many others on this deeply important topic.
Thanks again for taking the time to engage, and for everything you and OP have done to help others :)