Love the post; I think it is super valuable to have these sorts of important conversations, directly thinking about cross-cause comparison. It’s worth noting that CE does consider cross-cause effects in all the interventions we consider/recommend, including possible animal effects and WAS effects. Despite this, CE does not come to the same conclusion as this post; here are a few notes on why:
Strength of evidence discounting: CEAs are not all equal when they are based on very different strengths of evidence, and I think we weight this factor a lot more heavily. It’s quite common for the impact of any given intervention to regress fairly heavily as more research/work is put into it. We have found this in CE’s, GW’s, and other EAs’ research. This can be seen in even more depth in the GiveWell and EA Forum writings on deworming and on how to deal with speculative effects that possibly have very high upsides. For example, I would expect a five-hour CEA to be consistently off (almost always in a positive direction) compared to a 50-hour CEA. Calculations made at two different levels of rigor should not be directly compared. (This does not mean shorter-form CEAs are not worth doing, but I think we have to take their drawbacks and likely regressions a lot more seriously than this post currently does.) This discounting should be applied even more heavily to flow-through effects, as the evidence for them is far weaker than for the direct effects. We tend to use something akin to the weighted quantitative modeling used here.
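As an illustration, the kind of certainty-weighted discounting described above could be sketched as follows. The function name and all discount factors here are purely illustrative assumptions, not CE’s actual weights or model:

```python
# Hypothetical sketch of strength-of-evidence discounting of a CEA.
# All numbers are made up for illustration; they are not CE's real weights.

def discounted_cea(raw_estimate, evidence_discount,
                   flow_through=0.0, flow_through_discount=0.25):
    """Discount a raw cost-effectiveness estimate by strength of evidence.

    Flow-through effects get a heavier discount than direct effects,
    reflecting their weaker evidence base.
    """
    direct = raw_estimate * evidence_discount
    indirect = flow_through * flow_through_discount
    return direct + indirect

# A speculative five-hour CEA gets a heavier discount than a 50-hour one,
# so identical raw numbers yield very different adjusted estimates.
quick = discounted_cea(100, evidence_discount=0.25)  # 25.0
deep = discounted_cea(100, evidence_discount=0.5)    # 50.0
```

The point of the sketch is only that two CEAs with the same headline number should not be compared directly once their evidence bases differ.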
Marginal funding and reliability in effects: Here’s a good example of how a CEA can regress really quickly: GiveWell typically does CEAs on marginal donations made, whereas many other CEAs (including the one you use from Saulius) do not consider marginal funding. I currently think the marginal dollar to corporate campaigns is far less impactful than the average dollar of spending pre-2018. This can affect a CEA quite drastically. Another example is the funding of numerous animal interventions through corporate campaigns, which have become the “hit” of the animal movement. However, these campaigns are often judged cost-effective without clear beforehand knowledge of what an additional dollar of funding would have accomplished. It is a bit like measuring CE’s cost-effectiveness by looking at the top charity we incubated and assuming future charities will be equal to that. Variance is a real pain, and it’s not even clear whether other corporate campaigns will be as cost-effective as cage-free. On the other hand, top GW charities have this built in; GiveWell is not estimating the average EV of AMF’s top three historical campaigns, it is estimating the impact of marginal average future funding.
Variable animal effects dependent on intervention: You touch on this, but I think an important point is missed. The effects on animals vary quite a lot depending on the intervention. Interventions that primarily affect mortality in Africa, for instance, end up looking like what you describe. But morbidity-focused interventions, mental health interventions, and family planning interventions are all significantly less affected by this consideration. The same goes for any intervention operating in contexts with lower meat consumption (such as India). I think if you remodeled this for an organization like Fortify Health (iron fortification in India), it would result in rather different outcomes.
If you combine these factors and look at a marginal dollar to FH vs. a marginal dollar to THL (both with similarly rigorous CEAs and flow-through effects discounted based on certainty), I think the outcomes would be different enough to change your endline conclusion. The non-epistemic difference I have is to do with ecosystem limitations, and is more specific to CE itself than to EA organizations in general. When we launch a charity, we need 1) founders, 2) ideas, and 3) funding. Each of these is fairly cause-area-limited (and I think limiting factors are often more important than total scale). For example, if we aimed to found 10 animal charities a year (vs. 10 charities across all the cause areas we currently focus on), I do not think the weakest two would be anywhere near as impactful as the top two, and only a small minority of them would get long-term funding. In fact, with animal charities making up around a third of those we have launched, I think we already run close to those limitations. This means that even if we thought animal charities were more impactful than human-focused ones on average, the difference would have to be pretty large for us to think that adding a 9th or 10th animal charity into the animal ecosystem would be more impactful than adding the first or second human-focused charity. I expect a version of this consideration applies to other actors too. In general, I believe that given the current ecosystem, founding more than ~three to five charities per year within a given area would start to result in cannibalization between charities.
Thanks again for the consideration of this; I do think people should do a lot more cross-cause thinking, and I expect there are some really neglected areas that have significant intercausal impact.
Hi Joey,
Thank you so much for taking the time to explain your reasons in such detail! I broadly agree with all the points you make.
It’s worth noting that CE does consider cross-cause effects in all the interventions we consider/recommend, including possible animal effects and WAS effects.
Could you elaborate on how CE does this? Among CE’s 9 health reports of 2023, I only found 3 instances of the word “animal”. Here (emphasis mine):
A lower birth rate is also associated with fewer CO2 emissions and a gain of welfare points due to averted consumption of animal products.
There are reasons to believe that this situation may change in the near future, as poorer countries undergo the so-called “nutrition transition” toward diets high in sugars, fat, and animal foods (Reardon et al., 2021).
Animal studies also suggest that improving oxygen access may reduce mortality rates.
Only the first of these refers to animal welfare, and it has very little detail.
Marginal funding and reliability in effects: Here’s a good example of how a CEA can regress really quickly: GiveWell typically does CEAs on marginal donations made, whereas many other CEAs (including the one you use from Saulius) do not consider marginal funding. I currently think the marginal dollar to corporate campaigns is far less impactful than the average dollar of spending pre-2018. This can affect a CEA quite drastically.
Saulius commented that (emphasis mine):
Hey, I am the author of the corporate campaigns cost-effectiveness estimate you mention. In case it’s relevant, I recently spent 3 months doing another (much more detailed) cost-effectiveness estimate of chicken welfare reforms (corporate and legislative) that I unfortunately cannot make public. According to this new estimate, in 2019-2020 chicken welfare reforms affected 65 years of chicken life per dollar spent. According to the same new estimate, the cost-effectiveness in 2016-2018 was about 2.5 times higher. So while it’s true that lately campaigns were not as cost-effective as they were some years ago, I think that they are still very cost-effective. In fact, even more cost-effective than my linked report [which I used in my post] suggests, because in that report I think I underestimated the cost-effectiveness. Also, because of the research of the Welfare Footprint Project, I now think that these reforms are more important to chickens than I thought previously (although I haven’t yet examined the broiler book in detail).
So cost-effectiveness used to be higher, but Saulius’ updated estimate of 65 years of chicken life per dollar is 4.33 (= 65/15) times as high as the one I used in my BOTEC. If the 2019-2020 average cost-effectiveness is also about 4.33 times as high as the current marginal cost-effectiveness, my BOTEC will not be too far off. I did not easily find estimates for the marginal cost-effectiveness. Kieran Greig (from RP) surveyed groups working on corporate campaigns globally, and told me roughly 1 year ago that:
These campaigns have some pretty significant room for more funding. Easily in the millions of dollars per year.[1]
Are there any quantitative analyses of the marginal cost-effectiveness?
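The adjustment above is just a ratio check; a minimal sketch using the figures quoted in this thread (65 and 15 chicken-years per dollar):

```python
# Fermi check of the BOTEC adjustment discussed above.
# Figures are the ones quoted in this thread, not new estimates.
botec_estimate = 15   # chicken-years per dollar used in my BOTEC
updated_average = 65  # Saulius' updated 2019-2020 average estimate

ratio = updated_average / botec_estimate
print(round(ratio, 2))  # 4.33

# If the 2019-2020 average cost-effectiveness exceeds the current marginal
# cost-effectiveness by this same ratio, the implied marginal figure matches
# the number originally used in the BOTEC:
implied_marginal = updated_average / ratio
print(round(implied_marginal, 2))  # 15.0
```

So the BOTEC's conclusion survives only under the assumption that the average-to-marginal ratio is itself around 4.33, which is exactly the quantity lacking an estimate.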
The effects on animals vary quite a lot depending on the intervention. Interventions that primarily affect mortality in Africa, for instance, end up looking like what you describe. But morbidity-focused interventions, mental health interventions, and family planning interventions are all significantly less affected by this consideration.
Great point! It crossed my mind, but I ended up not including it.
Strength of evidence discounting: CEAs are not all equal when they are based on very different strengths of evidence, and I think we weight this factor a lot more heavily. It’s quite common for the impact of any given intervention to regress fairly heavily as more research/work is put into it.
I agree this tends to be the case, but I am not sure how much. For example, I have the impression RP’s median welfare ranges are higher than what most people expected a priori. In general, it seems hard to know how much to adjust estimates, and I guess it would be better to invest more resources (at the margin) into decreasing our uncertainty.
[1] Further details are confidential: “I apologize that I can’t share too much specifically as I promised organizations that those results would be confidential.”