The Marginal $100m Would Be Far Better Spent on Animal Welfare Than Global Health

Why I Still Think AW >> GH At The Margin

Last year, I argued that Open Phil (OP) should allocate a majority of its neartermist resources to animal welfare (AW) rather than global health (GH).

Most of the critical comments still agreed that AW > GH at the margin:

  • Though Carl Shulman was unmoved by Rethink Priorities’ Moral Weights Project, he’s still “a fan of animal welfare work relative to GHW’s other grants at the margin because animal welfare work is so highly neglected”.

  • Though Hamish McDoodles thinks neuron count ratios are a better proxy for moral weight than Rethink’s method, he agrees that even if neuron counts are used, “animal charities still come out an order of magnitude ahead of human charities”.
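To make the neuron-count arithmetic concrete, here’s a toy back-of-the-envelope version of this comparison. Every figure below is a rough illustrative value I’ve chosen (the neuron counts are commonly cited estimates; the cost-effectiveness inputs are order-of-magnitude placeholders), not McDoodles’s or OP’s actual numbers:

```python
# Toy neuron-count-weighted comparison. All figures are rough
# illustrative values, not McDoodles's or OP's actual numbers.

HUMAN_NEURONS = 86e9      # commonly cited estimate
CHICKEN_NEURONS = 220e6   # commonly cited estimate

# Neuron-count proxy: weight one chicken-year at the neuron ratio.
chicken_weight = CHICKEN_NEURONS / HUMAN_NEURONS  # ~0.0026

# Illustrative marginal cost-effectiveness (placeholder magnitudes):
chicken_years_per_dollar = 40    # e.g. corporate welfare campaigns
human_qalys_per_dollar = 0.01    # e.g. GiveWell-style top charities

aw = chicken_years_per_dollar * chicken_weight  # human-equivalent years per $
gh = human_qalys_per_dollar

print(f"AW: {aw:.3f} human-equivalent years per dollar")
print(f"GH: {gh:.3f} human QALYs per dollar")
print(f"AW/GH: ~{aw / gh:.0f}x")  # ~10x with these placeholder inputs
```

Even under this deliberately animal-unfriendly weighting, and ignoring subtleties like how much of a chicken-year’s welfare range a campaign actually moves, the placeholder numbers land roughly an order of magnitude in favor of AW, matching McDoodles’s conclusion.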

I really appreciate OP’s engagement, which provided some helpful transparency about where they disagree. Like James Özden, I think it’s plausible that even OP’s non-animal-friendly internal estimates still imply AW > GH at the margin. (One reason to think this: OP wrote that “our current estimates of the gap between marginal animal and human funding opportunities is…within one order of magnitude, not three”, when they could simply have written that GH looks better by up to an order of magnitude.)

Even if that reading is incorrect, given that OP agrees that “one order of magnitude is well within the ‘margin of error’”, I still struggle to understand the rationale for funding GH at 6x the level of AW. Though I appreciate OP explaining how their internal estimates differ, the details of why they differ remain unknown. If GH is truly better than AW at the margin, I would like nothing more than to be persuaded of that. While I endeavor to keep an open mind, it’s difficult for me and many community members to update without knowing OP’s answers to these headline questions:

  • How much weight does OP’s theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?

  • Precisely how much more does OP value one unit of a human’s welfare than one unit of another animal’s welfare, just because the former is a human? How does OP derive this tradeoff?

  • How would OP’s views have to change for OP to prioritize animal welfare in neartermism?

OP has no obligation to answer these (or any) questions, but I continue to think that a transparent discussion of this issue between OP and community leaders/members would be deeply valuable. This Debate Week, the EA Leaders Forum, 80k’s updated rankings, and the Community Survey have made it clear that there’s a large gap between the community’s consensus on GH/AW allocation and OP’s actual split. This is a question of enormous importance for millions of people and trillions of animals. Anything we can do to get this right would be incredibly valuable.
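As a side note on the margin-of-error point, a toy Monte Carlo illustrates why the 6x split puzzles me. The point estimate and error distribution below are pure assumptions on my part, chosen only to show the shape of the problem (OP has not published its numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetically, suppose OP's internal point estimate favors GH by 3x.
# (Pure assumption: OP has only said the gap is "within one order of
# magnitude, not three".)
aw_over_gh_point_estimate = 1 / 3

# OP agrees one order of magnitude is well within the margin of error,
# so model the true ratio as lognormal with a 1-sigma width of one OOM.
sigma = np.log(10)

true_ratio = rng.lognormal(
    mean=np.log(aw_over_gh_point_estimate), sigma=sigma, size=1_000_000
)

print(f"P(AW > GH at the margin): {(true_ratio > 1).mean():.0%}")  # ~32%
```

Under these made-up assumptions, AW beats GH at the margin about a third of the time even when the point estimate favors GH threefold, which makes a 6:1 GH:AW allocation look like a surprisingly confident bet against the stated error bars.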

Responses to Objections Not Discussed In Last Year’s Post

Could GH > AW When Optimizing For Reliable Ripple Effects?

Richard Chappell has argued that while “animal welfare clearly wins by the lights of pure suffering reduction”, GH could be competitive with AW when optimizing for reliable ripple effects like long-term human population growth or economic growth.

AW Is Plausibly More Robustly Good Than GH’s Ripple Effects

I don’t think it’s obvious that human population growth or economic growth are robustly good. Historically, these ripple effects have had even larger effects on farmed and wild animal populations than on humans:

  • Human-caused climate change and land use have contributed to an average 69% decline in monitored wildlife populations since 1970.

  • The number of farmed fish has increased by nearly 10x since 1990.

  • Brian Tomasik has estimated that each dollar donated to AMF prevents 10,000 invertebrate life-years by reducing invertebrate populations.

Trying to account for all of these AW effects leaves me feeling rather clueless about the long-term ripple effects of GH interventions. In contrast, AW interventions such as humane slaughter seem more likely to be robustly good: while humane slaughter may slightly reduce meat consumption by raising prices, it is unlikely to affect farmed or wild animal populations nearly as much as economic growth or human population growth would.

Implications of Optimizing for Reliable Ripple Effects in GH

Vasco Grilo points out that longtermist interventions like global priorities research and improving institutional decision-making seem better for reliable long-term ripple effects than GiveWell Top Charities. It would be surprising if the outputs of GiveWell’s process, which optimizes for the cheapest immediate QALYs/lives saved/income doublings, also had the best long-term ripple effects.

Rohin Shah suggests further implications of optimizing for reliable ripple effects:

  1. Given an inability to help everyone, you’d want to target interventions based on people’s future ability to contribute. (E.g. you should probably stop any interventions that target people in extreme poverty.)

  2. You’d either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)

  3. You’d want to invest more in education than would be suggested by typical metrics like QALYs or income doublings.

I think it’s plausible that some AW causes, such as moral circle expansion, could also rank high on the rubric of reliable ripple effects.

In summary, it seems that people sympathetic to Richard’s argument should still be advocating for a radical rethinking of almost all large funders’ GH portfolios.

What if I’m a Longtermist?

Some of my fellow longtermists have been framing this discussion by debating which of GH or AW is best for the long-term future. Framed this way, the debate collapses into a comparison between longtermist interventions that could be characterized as GH and those that could be characterized as AW.

This doesn’t seem like a useful discussion if the debate participants would all privately prefer that the $100m simply be allocated to unrestricted longtermism.

Instead, I think we would all learn more if the debate were framed within the context of neartermism. Like OP and the Navigation Fund, I think there are lots of reasons to allocate some of our resources to neartermism, including worldview diversification, cluelessness, moral parliament, risk aversion, and more. If you agree, then I think it makes more sense to frame this debate within neartermism, because that framing is likely what determines how each of us splits our personal donations between GH and AW.