Love the post; I think it is super valuable to have these sorts of important conversations, directly thinking about cross-cause comparison. It's worth noting that CE does consider cross-cause effects in all the interventions we consider/recommend, including possible animal effects and WAS effects. Despite this, CE does not come to the same conclusion as this post; here are a couple of notes on why:
Strength of evidence discounting: CEAs are not all equal when they are based on very different strengths of evidence, and I think we weight this factor a lot more heavily. It's quite common for the impact of any given intervention to regress fairly heavily as more research/work is put into it. We have found this in CE's, GW's, and other EAs' research. This can be seen in even more depth in the GiveWell and EA Forum writings on deworming and how to deal with speculative effects that possibly have very high upsides. For example, I would expect a five-hour CEA to be consistently off (almost always in a positive direction) compared to a 50-hour CEA. A calculation made at two different levels of rigor should not be directly compared. (This does not mean shorter-form CEAs are not worth doing, but I think we have to take their cons and likely regressions a lot more seriously than this post currently does.) This discounting should be even more heavily applied to flow-through effects, as the evidence for them is way lighter than the direct effects. We tend to use something akin to the weighted quantitative modeling used here.
Marginal funding and reliability in effects: Here's a good example of how a CEA can regress really quickly; GiveWell typically does CEAs on marginal donations made, whereas many other CEAs (including the one you use from Saulius) do not consider marginal funding. I currently think that the marginal dollar to corporate campaigns is way less impactful when compared to the average dollar of spending pre-2018. This can affect a CEA quite drastically. Another example is the funding of numerous animal interventions through corporate campaigns, which have become the "hit" of the animal movement. However, these campaigns are often seen as cost-effective without clear beforehand knowledge of the impact an additional dollar of funding would have had. It is a bit like measuring CE's cost-effectiveness by looking at the top charity we incubated and assuming future charities will be equal to that. Variance is a real pain, and it's not even clear if other corporate campaigns will be as cost-effective as cage-free. On the other hand, top GW charities have this built in; they are not estimating the average EV of AMF's top three historical campaigns, they are estimating the impact of marginal average future funding.
Variable animal effects dependent on intervention: You touch on this, but I think there is an important point missed. The effects on animals vary quite a lot depending on the intervention. Interventions that primarily affect mortality in Africa, for instance, end up looking like how you describe. But morbidity-focused interventions, mental-health-focused interventions, and family planning interventions are all significantly less affected by this consideration. The same goes for any intervention that operates in contexts where there is lower meat consumption (such as in India). I think if you remodeled this for an organization like Fortify Health (iron fortification in India), it would result in rather different outcomes.
If you combine these factors and look at a marginal dollar to FH vs a marginal dollar to THL (both of them with similarly rigorous CEAs and flow-through effects that are discounted based on certainty), I think the outcomes would be different enough to change your endline conclusion.

The non-epistemic difference I have is to do with ecosystem limitations, and is more specific to CE itself vs. general EA organizations. When we launch a charity, we need 1) founders, 2) ideas, and 3) funding. Each of these is fairly cause-area-limited (and I think limiting factors are often more important than total scale). For example, if we aimed to found 10 animal charities a year (vs 10 charities across all the cause areas we currently focus on), I do not think the weakest two would be anywhere near as impactful as the top two, and only a small minority of them would get long-term funding. In fact, with animal charities making up around a third of those we have launched, I think we already run close to those limitations. This means that even if we thought that animal charities were more impactful than human ones on average, the difference would have to be pretty large for us to think that adding a 9th or 10th animal charity into the animal ecosystem would be more impactful than adding the first or second human-focused charity. I expect a version of this consideration can apply to other actors too. In general, I believe that given the current ecosystem, more than ~3-5 charities founded per year within a given area would start to result in cannibalization between charities.
Thanks again for the consideration of this; I do think people should do a lot more cross-cause thinking, and I expect there are some really neglected areas that have significant intercausal impact.
Thank you so much for taking the time to explain your reasons in great detail! I broadly agree with all the points you make.
It's worth noting that CE does consider cross-cause effects in all the interventions we consider/recommend, including possible animal effects and WAS effects.
Could you elaborate on how CE does this? Among CE's 9 health reports of 2023, I only found 3 instances of the word "animal". Here (emphasis mine):
A lower birth rate is also associated with fewer CO2 emissions and a gain of welfare points due to averted consumption of animal products.
There are reasons to believe that this situation may change in the near future, as poorer countries undergo the so-called "nutrition transition" toward diets high in sugars, fat, and animal foods (Reardon et al., 2021).
Animal studies also suggest that improving oxygen access may reduce mortality rates.
Only the first of these refers to animal welfare, and has very little detail.
Marginal funding and reliability in effects: Here's a good example of how a CEA can regress really quickly; GiveWell typically does CEAs on marginal donations made, whereas many other CEAs (including the one you use from Saulius) do not consider marginal funding. I currently think that the marginal dollar to corporate campaigns is way less impactful when compared to the average dollar of spending pre-2018. This can affect a CEA quite drastically.
Saulius commented that (emphasis mine):

Hey, I am the author of the corporate campaigns cost-effectiveness estimate you mention. In case it's relevant, I recently spent 3 months doing another (much more detailed) cost-effectiveness estimate of chicken welfare reforms (corporate and legislative) that I unfortunately cannot make public. According to this new estimate, in 2019-2020 chicken welfare reforms affected 65 years of chicken life per dollar spent. According to the same new estimate, the cost-effectiveness in 2016-2018 was about 2.5 times higher. So while it's true that lately campaigns were not as cost-effective as they were some years ago, I think that they are still very cost-effective. In fact, even more cost-effective than my linked report [which I used in my post] suggests, because in that report I think I underestimated the cost-effectiveness. Also, because of the research of the Welfare Footprint Project, I now think that these reforms are more important to chickens than I thought previously (although I haven't yet examined the broiler book in detail).
So cost-effectiveness used to be higher, but Saulius' updated estimate of 65 years of chicken life per dollar is 4.33 (= 65/15) times as high as the one I used in my BOTEC. If the 2019-2020 average cost-effectiveness is also about 4.33 times as high as the current marginal cost-effectiveness, my BOTEC will not be too far off. I did not easily find estimates for the marginal cost-effectiveness. Kieran Greig (from RP) surveyed groups working on corporate campaigns globally, and told me roughly 1 year ago that:
These campaigns have some pretty significant room for more funding. Easily in the millions of dollars per year.[1]
Are there any quantitative analyses of the marginal cost-effectiveness?
The effects on animals vary quite a lot, depending on the intervention. Interventions that primarily affect mortality in Africa, for instance, end up looking like how you describe. But morbidity-focused interventions, mental health focused interventions, and family planning interventions are all significantly less affected by this consideration.
Great point! It crossed my mind, but I ended up not including it.
Strength of evidence discounting: CEAs are not all equal when they are based on very different strengths of evidence, and I think we weight this factor a lot more heavily. It's quite common for the impact of any given intervention to regress fairly heavily as more research/work is put into it.
I agree this tends to be the case, but I am not sure how much. For example, I have the impression RP's median welfare ranges are higher than what most people expected a priori. In general, it seems hard to know how much to adjust estimates, and I guess it would be better to invest more resources (at the margin) into decreasing our uncertainty.
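One way to make such adjustments explicit is to shrink a speculative estimate toward a skeptical prior, giving the estimate more weight the more rigorous the underlying CEA is. A minimal sketch, with all numbers purely illustrative assumptions (this is not CE's or anyone's actual model):

```python
import math

def discounted_estimate(cea_estimate, prior, evidence_weight):
    """Shrink a cost-effectiveness estimate toward a prior in log space,
    so 10x over- and underestimates are treated symmetrically.
    evidence_weight in [0, 1]: 0 = pure prior, 1 = take the CEA at face value."""
    log_mix = (evidence_weight * math.log(cea_estimate)
               + (1 - evidence_weight) * math.log(prior))
    return math.exp(log_mix)

# A shallow 5-hour CEA (low weight) vs a deeper 50-hour CEA (higher weight),
# both claiming 100x a well-evidenced baseline of 1. The weights are made up.
print(round(discounted_estimate(100.0, 1.0, 0.2), 2))  # 2.51
print(round(discounted_estimate(100.0, 1.0, 0.6), 2))  # 15.85
```

The log-space mixing is one design choice among many; the broader point is just that the same headline number can imply very different posteriors depending on how much evidence sits behind it.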
It's worth noting that CE does consider cross-cause effects in all the interventions we consider/recommend, including possible animal effects and WAS effects.
I have searched for "animal" in all the 16 reports of CE's global health and development recommendations, and I did not find any discussion that extending human lives would increase the consumption of animals. In contrast, decreasing the birth rate is highlighted as being a positive externality in terms of animal welfare in 3 of the 16 reports:
- "A lower birth rate is also associated with fewer CO2 emissions and a gain of welfare points due to averted consumption of animal products" (here).
- "Finally we believe this intervention could have important positive externalities on animal welfare. Increasing uptake for contraception and preventing unintended births would reduce family sizes and their overall consumption in animal products. A lifetime of consumption of these products leads to a considerable amount of suffering for animals raised in factory farms. Preventing unintended births therefore indirectly decreases demand for these products, thereby decreasing the number of animals raised for food. We have modeled these effects using CE's welfare points system in our CEA" (here).
- "We found that this intervention has two kinds of externalities. It positively affects climate change, with three tonnes of CO2 emissions per dollar spent. It also positively affects animal welfare, with 377 welfare points gained per dollar spent" (here).
I think CE's reports should mention the negative externalities on farmed animals due to extending human lives, considering CE's reports on family planning discuss the positive externalities on farmed animals due to decreasing fertility.
The last bullet above also illustrates your global health and development recommendations could be net harmful based on your own numbers. I think 100 welfare points (WPs) are roughly as good as averting 1 DALY (because 100 WPs is the maximum total welfare possible), so 377 WP/$ of positive externalities corresponds to around 3.77 DALY/$. This is 379 (= 3.77/0.00994) times the cost-effectiveness of GiveWell's top charities of 0.00994 DALY/$, i.e. the effects on animals are way larger than those on humans according to CE's report. The report claims the effects on animals are positive due to decreasing population size, so it directly follows that saving lives (increasing population size) has negative effects on animals, and the negative effects on animals would be much larger than those on humans (trusting the numbers of the report).
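Spelling the conversion out (using the report's 377 WP/$, my assumption of 100 WP per DALY, and GiveWell's 0.00994 DALY/$):

```python
wp_per_dollar = 377                  # welfare points per $ (CE family planning report)
wp_per_daly = 100                    # my assumption: 100 WP ~ averting 1 DALY
givewell_daly_per_dollar = 0.00994   # GiveWell top charities

# 3.77 DALY-equivalents per $ of animal-welfare externality
animal_daly_equivalent = wp_per_dollar / wp_per_daly
multiple = animal_daly_equivalent / givewell_daly_per_dollar
print(round(multiple))  # 379
```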
Quick response below as I am limiting my time on the EA Forum nowadays. I am far less convinced that life-saving interventions are net population-creating than I am that family planning decreases it. Written about 10 years ago, but still one of the better pieces on this IMO is David Roodman's report commissioned by GiveWell. In addition, our welfare points are far less certain estimates when compared to our global health estimates. This matters a lot, e.g., I would regress weaker CEAs by over 1 order of magnitude even from the same organization using similar methods, and it could be 3+ orders of magnitude across different orgs and methods. AIM in general is pretty confident, e.g., that our best animal charities are not 379x better than a top GiveWell charity even if a first-pass CEA might suggest that.
I think for externalities you can get yourself pretty lost down a rabbit hole based on pretty speculative assumptions if you are not careful. We try to think of it a bit like the weighted quantitative modeling described here and only include effects that we think are major (e.g. a 10%+ effect on the total impact after uncertainty adjustments). We also try to take into account what effects we expect founders considering these ideas would most likely consider to be decision-relevant for them.
In general I think we aim to be more modest about moral estimates (particularly when they are uncertain or hotly debated) and try to recommend the peak intervention across different cause areas without making a final verdict. I also think this call in our case does not reduce our impact, as there are pretty natural caps to every cause area; e.g., I do not think the animal movement could effectively absorb 10 new charities a year anyway.
I am far less convinced that life-saving interventions are net population-creating than I am that family planning decreases it. Written about 10 years ago, but still one of the better pieces on this IMO is David Roodman's report commissioned by GiveWell.
From the abstract of David Roodman's paper on The Impact of Life-Saving Interventions on Fertility (written in 2014):

In places where lifetime births/woman has been converging to 2 or lower, saving one child's life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change. In the increasingly exceptional locales where couples appear not to limit fertility much, such as Niger and Mali, the impact of saving a life on total births will be smaller, and may come about mainly through the biological channel of lactational amenorrhea. Here, mortality-drop-fertility-drop ratios of 1:0.5 and 1:0.33 appear more plausible.
So it looks like saving lives in low income countries decreases fertility, but still increases population size.
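Under Roodman's ratios, the net population effect per life saved can be illustrated as follows (a sketch using the abstract's numbers, not Roodman's own model):

```python
# If saving one child's life leads parents to avert r births, the net
# addition to population per life saved is (1 - r).
roodman_ratios = {
    "near-replacement fertility": 1.0,     # ~1:1 mortality-to-fertility drop
    "high fertility (e.g. Niger, Mali)": 0.5,
    "high fertility, weaker response": 1 / 3,
}
for context, r in roodman_ratios.items():
    print(f"{context}: ~{1 - r:.2f} net people added per life saved")
```

So population growth barely changes in the first case, while each life saved adds roughly half a person or more in the high-fertility cases.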
I am far less convinced that life-saving interventions are net population-creating than I am that family planning decreases it. Written about 10 years ago, but still one of the better pieces on this IMO is David Roodman's report commissioned by GiveWell.
Fair! From Wilde et al. (2020), whose abstract is below (emphasis mine), bednets increase fertility 1 to 3 years after their distribution, but decrease it afterwards, so population initially increases (because bednets also decrease near-term mortality), but may decrease soon after the distribution.
We examine the extent to which recent declines in child mortality and fertility in Sub-Saharan Africa can be attributed to insecticide-treated bed nets (ITNs). Exploiting the rapid increase in ITNs since the mid-2000s, we employ a difference-in-differences estimation strategy to identify the causal effect of ITNs on mortality and fertility. We show that the ITN distribution campaigns reduced all-cause child mortality, but surprisingly increased total fertility rates in the short run in spite of reduced desire for children and increased contraceptive use. We explain this paradox in two ways. First, we show evidence for an unexpected increase in fecundity and sexual activity due to the better health environment after the ITN distribution. Second, we show evidence that the effect on fertility is positive only temporarily (lasting only 1-3 years after the beginning of the ITN distribution programs) and then becomes negative. Taken together, these results suggest the ITN distribution campaigns may have caused fertility to increase unexpectedly and temporarily, or that these increases may just be a tempo effect: changes in fertility timing which do not lead to increased completed fertility.
I guess the above partly generalises to other interventions. If saving lives decreases population, it may well decrease welfare (if the increase in welfare per capita is not sufficiently large), thus being harmful under many moral views. Likewise for family planning interventions. CE's theories of change for the family planning interventions of the 3 reports I mentioned above have as their outcome decreasing unwanted pregnancies. Are you assuming this is intrinsically valuable, or are you super confident that it leads to higher human welfare (because the increase in human welfare per capita exceeds the decrease in population)? I think the outcome should at least be increasing human welfare (and, ideally, increasing welfare accounting for both humans and animals).
I think for externalities you can get yourself pretty lost down a rabbit hole based on pretty speculative assumptions if you are not careful. We try to think of it a bit like the weighted quantitative modeling described here and only include effects that we think are major (e.g. a 10%+ effect on the total impact after uncertainty adjustments).
I agree, but I think it is at least worth mentioning the potential negative externalities on animals (without getting lost down rabbit holes). I also think it would be good to justify that the regression of the potential negative externalities is so large that they become negligible, especially if a direct interpretation leads one to conclude they overwhelm the direct effects (as in the report I discussed in the previous comment).
[1] Further details are confidential: "I apologize that I can't share too much specifically as I promised organizations that those results would be confidential".