Most debate week responses so far seem to strongly favor animal welfare, on the grounds that it is (likely) vastly more cost-effective in terms of pure suffering-reduction. I see two main ways to resist prioritizing suffering-reduction:
(1) Nietzschean Perfectionism: maybe the best things in life—objective goods that only psychologically complex “persons” get to experience—are just more important than creature comforts (even to the point of discounting the significance of agony?). The agony-discounting implication seems implausibly extreme, but I’d give the view a minority seat at the table in my “moral parliament”.[1] Not enough to carry the day.
(2) Strong longtermism: since almost all expected value lies in the far future, a reasonable heuristic for maximizing EV (note: not the same thing as an account of one’s fundamental moral concerns) is to not count near-term benefits at all, and instead prioritize those actions that appear the most promising for creating sustained “flow-through” or “ripple” effects that will continue to “pay it forward”, so to speak.
Assessing potential ripple effects
Global health seems much more promising than animal welfare in this respect.[2] If you help an animal (especially if the help in question is preventing their existence), they aren’t going to pay it forward. A person might. Probably not in any especially intentional way, but I assume that an additional healthy person in a minimally functional society will have positive externalities, some of which may—like economic growth—continue to compound over time. (If there are dysfunctional societies in which this is not true, the ripple effect heuristic would no longer prioritize trying to save lives there.)
So if we ask, which cause area is such that marginal funding is most likely to positively affect the future beyond our immediate lifetimes?, the answer is surely global health over animal welfare.
But that may not be the right question to ask. We might instead ask which has the highest expected value, which is not necessarily the same as the highest likelihood of value, once high-impact long-shots are taken into account.
Animal welfare efforts (esp. potentially transformative ones like lab-grown meat) may turn out to have beneficial second-order effects via their effect on human conscience—reducing the risk of future dystopias in which humanity continues to cause suffering at industrial scale. I view this as unlikely: I assume that marginal animal welfare funding mostly just serves to accelerate developments that will otherwise happen a bit later. But the timing could conceivably matter to the long-term if transformative AI is both (i) near, and (ii) results in value lock-in. Given the stakes, even a low credence in this conjunction shouldn’t be dismissed.
For global health & development funding to have a similar chance of transformative effect, I suspect it would need to be combined with talent scouting to boost the chances that a “missing genius” can reach their full potential.
That said, there’s a reasonable viewpoint on which we should not bother aiming at super-speculative transformative effects because we’re just too clueless to assess such matters with any (even wildly approximate) accuracy. On that view, longtermists should stick to more robustly reliable “ripple effects”, as we plausibly get from broadly helping people in ordinary (capacity-building) ways.
Three “worldview” perspectives worth considering
I’ve previously suggested that there are three EA “worldviews” (or broad strategic visions) worth taking into account:
(1) Pure suffering reduction.
(2) Reliable global capacity growth (i.e., long-term ripple effects).
(3) High-impact long-shots.
Animal welfare clearly wins by the lights of pure suffering reduction. Global Health clearly wins by the lights of reliable global capacity growth. The most promising high-impact long-shots are probably going to be explicitly longtermist or x-risk projects, but between animal welfare and ordinary global health charities, I think there’s actually a reasonable case for judging animal welfare to have the greater potential for transformative impact on current margins (as explained above).
I’m personally very open to high-impact long-shots, especially when they align well with more immediate values (like suffering reduction), so I think there’s a strong case for prioritizing transformative animal welfare causes here—just on the off chance that it indirectly improves our values during an especially transformative period of history.[3] But there’s huge uncertainty in such judgments, so I think someone could also make a reasonable case for prioritizing reliable global capacity growth, and hence global health over animal welfare.[4]
- ^
To help pump the perfectionist intuition: suppose that zillions of insects experience mild discomfort, on net, over their lifetimes. We’re given the option to blow up the world. It would seem incredible to allow any amount of mild discomfort to trump all complex goods and vindicate choosing the apocalypse here. (I’m not suggesting that we wholeheartedly endorse this intuition; but maybe we should give at least some non-trivial weight to a striving/life-affirming view that powerfully resists the void across a wider range of empirical contingencies than Benthamite utilitarianism allows.)
- ^
I owe the basic idea to Nick Beckstead’s dissertation.
- ^
But again, one could likely find even better candidate long-shots outside of both global health and animal welfare.
- ^
It might even seem a bit perverse to prioritize animal welfare due to valuing the “high-impact long-shots” funding bucket, if the most promising causes in that bucket lie outside of both animal welfare and global health. If we imagine the question bracketing the “long-shots” bucket, and just inviting us to trade off between the first two, then I would really want to direct more funds into “reliable global capacity growth” over “pure suffering reduction”. So interpreting the question that way could also lead one to prioritize global health.
I think there is something to this. Besides economic growth, additional humans today could, through their descendants, mean more humans (or beings descended from us, directly or artificially) across the far future.
I would be interested in further exploration of possible ripple effects of animal welfare work, too. For the most part, I expect far future indirect effects of animal welfare work to go through events that shape the distribution of values and attitudes of humans, our descendants and AIs. Some ideas:
Animal welfare work affects people’s values, attitudes and institutions, and engages people. There’s moral circle expansion and capacity building. The capacity here is the labour, knowledge and resources of a community of people sensitive to the welfare of nonhuman animals, and often nonhuman moral patients more generally. Animal advocacy work grows the capacity of the animal advocacy community. Effective animal advocacy (EAA) work grows the capacity of the EAA and EA communities. Perhaps the case for far future effects is weaker here than for economic growth, though.
More speculatively, the values and practices of future space colonies may disproportionately reflect the values of early space colonizers from whom they inherit their institutions, attitudes and/or genetic dispositions (which in turn influence their attitudes). Ensuring early space colonizers are more animal-friendly, by changing their attitudes or by ensuring hard or soft selection in a way related to their attitudes, could be very important. For example, requiring the food of early space colonizers to be plant-based would cause those with dispositions that lead them to use animals for food to self-select out of space colonization. Those dispositions would then be less common across the far future, if space colonizers have more children on average. The Earth-bound will have a maximum population size, but colonizers may not, or could have a far larger one, and may grow their populations above replacement long-term for successful space exploration and colonization.
And, of course, potential AI value lock-in.
Also, note that, for however much psychological studies are worth, I think one of the more common theories in psychology right now explaining concern (or lack thereof) about animals holds that human supremacy is an expression of social dominance directed at animals: that is, it is just an application of a general desire for hierarchy (social dominance orientation), not something specifically targeted at animals. Creating norms and attitudes against domination of animals will reduce this general desire to dominate, reducing one of the psychological bases for prejudice in general. Depending on how much influence you think institutions vs. inherent psychological traits have on human behavior, and on the potential of both to be changed, this could be either pretty low flow-through impact or very high impact.
Upvoted for sharing an interesting framing!
Although once you start accounting for ripple effects, it becomes very suspicious if someone claims that the best way to improve the future is to work on global poverty or donate to animal welfare but isn’t proposing a specific intervention that is especially likely to ripple in a positive way.
I’d guess that basically any GHD charity that helps young people (whether saving lives from malaria or improving health and life prospects during developmentally important years) has positive ripple effects. I’d love to see more evaluation of which are especially good prospects here, but I’m not aware of any such research upon which to base such a judgment.
For animal welfare, I highlighted lab-grown meat as having the greatest potential for transformative impact IMO—but note that I’m no expert here!
I’m not entirely convinced that either is “the best way to improve the future”, but the debate week limits us to picking between those two cause areas. Given unrestricted options, I’d probably pick different long-shots; but I still think GHD is well worth supporting from the perspective of (what I call) reliable global capacity growth, alongside things like basic research and lobbying for “progress” (pro-innovation policies and institutions).
This reminded me of this older post: https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence
I feel like, while ripple effects from health/animal welfare interventions are certainly something to consider, I wouldn’t base too much of my decision on them, because there are likely other, more effective methods to achieve those impacts. For example, if the case for health is reducing suffering plus ripple effects in economic/technological growth, I would suspect that doing animal interventions (for suffering) and tech/growth interventions (for tech/growth) would do a better job at achieving both outcomes than making a single intervention which you hope will solve both.
I think that’s right insofar as you’re comfortable with high expected value via long-shots (that could easily fail). My sense is that the positive ripple effects from broad-based interventions like GHD are more reliable/robust than those from targeted interventions. So I think that could form the basis for principled support for GHD (and similarly broad-based interventions) as part of worldview diversification.
Are there ripple effects from GHD outside of economic growth that you are thinking about? I think my initial reaction was that there seem to be very durable, reliable ways to increase economic growth which likely are much more effective than GHD. Some of my thoughts came from this here, but also direct cash transfers or even investing in the stock market would (I think) be a more reliable way to increase economic growth than GHD.
This may be out of scope of the debate week question, but if the case for GHD is (suffering reduction + flow-through effects which seem to mostly be downstream of economic growth), then I think the fact that there are other reliable, durable, (probably) more cost-effective interventions to achieve economic growth means that the existence of ripple effects shouldn’t alter my decision-making, unless there is a unique ripple effect from GHD that other interventions would not capture.
I think a worldview diversification argument makes sense here—if having more humans is intrinsically valuable for non-hedonic reasons, or if we might be wrong and non-human animals aren’t sentient, or if there is a lot of uncertainty around either the value of economic growth or the effectiveness of other interventions on economic growth, then a case for GHD totally makes sense. Curious if you had anything in mind for a ripple effect unique to GHD that couldn’t be achieved by another intervention, or if you had other thoughts!
Yeah, it’s a good question. I’d like to see an in-depth investigation of possible ripple effects from GHD, since I don’t think I’m in an especially good position to evaluate that. I’m basically just working from a very broad and vague intuition that humans are the ultimate resource, and GHD preserves and improves that resource in an especially clear and direct way.
Besides economic growth, I would guess that helping to sustain the population is a distinctive all-purpose instrumental value here, that’s hard to achieve by other means.
Hi Richard,
It is unclear to me whether the ripple effects (indirect longterm effects) of lifesaving interventions are beneficial or harmful. From Wilde et al. (2020), whose abstract is below (emphasis mine), bednets increase fertility 1 to 3 years after their distribution, but decrease it afterwards, so population initially increases[1], but may decrease soon after the distribution.
So lifesaving interventions may decrease longterm population. Moreover, Eden and Kuruc (2024) suggest decreasing population decreases longterm income per capita, so lifesaving interventions may end up decreasing the longterm size of the economy too. If so, the ripple effects would tend to be harmful.
Because bednets also decrease nearterm mortality, which is why Against Malaria Foundation is one of GiveWell’s top charities.
For life-saving to reduce population, it would have to reduce total fertility by more than 1 per child saved, which is extremely implausible on its face. Your authors’ interpretation is that there is no overall effect on fertility rates: “In this case, women simply shifted the same number of births forward, leading to more births today and less in the future.” (Indeed, if you look at their data in Figure 6, there is no evidence of any reduction in total fertility, let alone a reduction as huge as would be required for your claims.) This implies increased population long term as the saved children later go on to reproduce.
Why? Each bednet costs $5, and Against Malaria Foundation (AMF) saves one life per $5.5k, so 1.1k bednets (= 5.5*10^3/5) are distributed per life saved. I think each bednet covers 2 people (for a few years), and I assume half are girls/women, so 1.1k girls/women (= 1.1*10^3*2*0.5) are affected per life saved. As a result, population will decrease if the number of births per girl/woman covered decreases by 9.09*10^-4 (= 1/(1.1*10^3)). The number of births per woman in low-income countries in 2022 was 4.5, so that is a decrease of 0.0202 % (= 9.09*10^-4/4.5). Does this still seem implausible? Am I missing something?
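For ease of checking, here is the same arithmetic as a quick script. It uses only the rough figures quoted above ($5 per bednet, ~$5.5k per life saved, 2 people covered per bednet, 4.5 births per woman), which are not authoritative estimates:

```python
# Rough reproduction of the arithmetic above; all inputs are the figures
# quoted in this comment, not authoritative estimates.
cost_per_bednet = 5            # USD
cost_per_life_saved = 5_500    # USD (AMF, roughly)
people_per_bednet = 2
share_girls_women = 0.5
births_per_woman = 4.5         # low-income countries, 2022

bednets_per_life = cost_per_life_saved / cost_per_bednet  # ~1,100
women_covered_per_life = bednets_per_life * people_per_bednet * share_girls_women  # ~1,100

# Population only falls if total fertility drops by more than 1 birth per
# life saved, i.e. by at least this much per covered girl/woman:
required_drop = 1 / women_covered_per_life                 # ~9.09e-4 births per woman
required_relative_drop = required_drop / births_per_woman  # ~0.02 % of total fertility

print(f"{bednets_per_life:.0f} bednets and {women_covered_per_life:.0f} girls/women per life saved")
print(f"required drop: {required_drop:.2e} births per woman ({required_relative_drop:.4%})")
```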
That is one hypothesis advanced by the author, but not the only interpretation of the evidence? I think you omitted crucial context around what you quoted (emphasis mine):
My interpretation is that the author thinks the effect on total fertility is unclear. I was not clear in my past comment. However, by “lifesaving interventions may decrease longterm population”, I meant this is one possibility, not the only possibility. I agree lifesaving interventions may increase population too. One would need to track fertility for longer to figure out which is correct.
From Figure 6 below, there is a statistically significant increase in fertility in year 0 (relative to year −1), and a statistically significant decrease in year 3. Eyeballing the area under the black line, I agree it is unclear whether total fertility increased. However, it is also possible fertility would remain lower after year 3 such that total fertility decreases. Moreover, the magnitude of the decrease in fertility in year 3 is like 3 % or 4 %, which is much larger than the minimum decrease of 0.0202 % I estimated above for population decreasing. Am I missing something? Maybe the effect size is being expressed as a fraction of the standard deviation of fertility in year −1 (instead of the fertility in year −1), but I would expect the standard deviation to be at least 10 % of the mean, such that my point would hold.
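To make that comparison explicit, here is a minimal sketch using the eyeballed numbers above (the ~3 % year-3 decrease, my 0.0202 % threshold, and my guess that the standard deviation is at least 10 % of the mean); none of these are taken directly from the paper:

```python
# Rough comparison of the eyeballed year-3 effect with the threshold above.
threshold = 2.02e-4   # minimum relative fertility drop for population to fall (~0.02 %)
year3_drop = 0.03     # eyeballed decrease in year 3, ~3-4 % (lower end)

print(year3_drop / threshold)  # ~150: far above the threshold

# Even if the plotted effect is in units of the year -1 standard deviation,
# and the SD is only ~10 % of mean fertility:
sd_over_mean = 0.10
print(year3_drop * sd_over_mean / threshold)  # ~15: still well above the threshold
```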
Why would being under a bednet reduce fertility? Two things that could make sense:
(1) The authors’ hypothesis of a mere timing shift, as fertility temporarily increases as a result of better health, followed by a (presumably similarly temporary) compensatory reduction in the immediately subsequent years, perhaps from the new parents stabilizing on their preferred family size. As noted, this hypothesis does not imply reduced total fertility.
(2) If some families stabilize on their preferred family size by (eventually) having an extra baby in the event that a previous one dies tragically early, then fertility (total births) could be expected to drop slightly as a result of life-saving interventions, but not to the point of exceeding the number of lives saved (or reducing total population).
In the absence of a plausible explanation that should lead us to view the outcome in question as especially likely, randomly positing a systematic negative population effect seems unreasonable to me. Anything is possible, of course. But selectively raising unsupported possibilities to salience just to challenge others to rule them out is a bad way to approach longtermist analysis, in my view. (Basically, the slight risk of negative fertility effects is outweighed by the expected gain in population, but common habits of thought overweight salient “risks” in a way that makes this dialectical method especially distorting.) See also: It’s Not Wise to be Clueless.
From the abstract of David Roodman’s paper on The Impact of Life-Saving Interventions on Fertility:
So it looks like saving lives in low income countries decreases fertility, but still increases population size. Because of the decrease in fertility, it may be good to downgrade the cost-effectiveness. The above would suggest multiplying it by around 0.5 (= 1 − 0.5) to 0.7 (= 1 − 0.33).
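In other words (a minimal sketch, assuming, as I read the abstract, that each life saved averts roughly 0.33 to 0.5 births in the long term):

```python
# Adjusting cost-effectiveness for the fertility offset:
# multiplier = 1 - births averted per life saved (assumed ~0.33 to 0.5).
for births_averted in (0.5, 0.33):
    print(f"offset {births_averted}: multiply cost-effectiveness by ~{1 - births_averted:.2f}")
```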
Yeah, that’s more in line with what I would expect. (Except the first sentence may be a bit hasty. Many first-world couples delay parenting until their 30s. If a child dies, they may not be able to have another—esp. since a significant period of grieving may be necessary before they would even be willing to.)
Thanks for the post, Richard.
As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by, and can be modelled as increasing the value of the world for a few years or decades. If strong longtermism were true, wouldn’t we expect some interventions to have a constant or increasing effect over time?
Economic growth and population size both seem to have persisting effects. If you limit attention to just what can be “accurately measured” (by some narrow conception that rules out the above), your final judgment will be badly distorted by measurability bias.
Could you elaborate? Are you saying that increasing the economy or population size today will make the economy and population size larger for at least centuries? Would you consider a decreasing effect size of fertility- and income-boosting interventions evidence against that?
Right, I take this to be an implication of our best economic and demographic models (respectively).
I don’t know what you mean by “a decreasing effect size of fertility- and income-boosting interventions”. Whether an intervention has a noticeable short-term effect on these targets? That would seem to address a different question.
I wouldn’t expect to be able to identify the particular ripple that occurred in any given case, if that’s what you mean. So I wouldn’t take the failure to identify a particular ripple as evidence that there are no ripple effects. If there are good reasons to reject the standard models, I’d expect that to emerge in the debates over those macro models, not through micro evidence from RCTs or the like.
Do you think we can trust the predictions of such models over more than a few decades? It looks like increasing population increases longterm income per capita, but not even this is clear (the conclusion relies on extrapolating historical trends, and it is unclear whether these will hold over long timeframes).
Yes. I see it is a different question. No difference between the treatment and control group after a few decades could be explained by the benefits spilling over to people outside those groups. However, an increasing population or income gap between the treatment and control group would still be evidence for increasing effects, so a decreasing population or income gap is also evidence against increasing effects.