In the long run it seems like the meat eater problem will drop off quite a bit: first, because of improving welfare standards; second, because of pressure to switch to more efficient plant-based calories; and third, because meat consumption plateaus or even declines beyond a certain income. So making the US wealthier, for instance, is most likely good for farm animals in the long run.
For global development in the short run, we can see that $1,000 in Africa cuts animal welfare by 800 (best estimate) to 4,000 (high estimate) points. And I conservatively estimated that $1 to an ACE charity improves animal welfare by 10,000 points. So $1,100 donated to GiveDirectly (≈$1,000 received) should require between $0.08 and $0.40 in offsetting donations to an effective animal charity. But it’s rather arbitrary, depending on just how conservative you want to be: I roughly assumed that the real effectiveness of ACE charities is 5x lower than their own estimate.
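The offset arithmetic above can be sketched in a few lines (a back-of-envelope check using only the figures quoted in this thread; the "welfare points" are of course a modeling unit, not a measured quantity):

```python
# Figures from the comment above (not independently verified)
harm_best = 800                  # welfare points lost per $1,000 received (best estimate)
harm_high = 4000                 # high estimate
ace_points_per_dollar = 10_000   # conservative points gained per $1 to an ACE charity

# Dollars of ACE donations needed to offset $1,000 of cash transfers
offset_best = harm_best / ace_points_per_dollar
offset_high = harm_high / ace_points_per_dollar

print(f"Offset per $1,000 received: ${offset_best:.2f} to ${offset_high:.2f}")
# → Offset per $1,000 received: $0.08 to $0.40
```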
Note that I don’t think offsetting actually makes sense as a practice; it doesn’t make sense under utilitarianism. I treat it more as a methodological tool to put the impacts of different things in perspective with one another.
Thanks for your response, kbog!
Animal welfare issues are plausibly getting worse, not better, so I’d be less confident in assuming they won’t be an issue in the future. As the world develops and eats more meat, Compassion in World Farming estimates that the number of factory-farmed land animals killed annually could increase by 50% over the next 30 years. Assuming people’s expanding moral circle will reverse this trend is dangerous when the animal welfare movement has progressed little over the past few decades (the number of vegetarians in the US has been flat; there are some animal welfare legislative victories but also setbacks like ag-gag rules). Innovations like clean meat could help, but it is still early, and there are also ways technology could make things even worse. Assuming animal welfare issues remain as they currently are (neither deteriorating nor improving) seems to me a plausible and more responsible projection.
If so, for the Long Term Future EA Fund, let’s assume the Animal Welfare EA Fund “offset ratio” (to account for the meat eater problem) is the same for future generations as it is for the current generation. Based on your blog’s estimate of a nickel a day, it costs ~$1,000 to offset a person’s lifetime of meat consumption ($0.05/day x 365 days/year x 50 years). Your estimate seems to be for people living in rich countries, though, so maybe 30% of that, or ~$300, is more applicable to the average human. This can be compared to the Long Term Future Fund’s expected cost-effectiveness of saving a human life (for just the current generation). I’ve seen one estimate that assumes a 1% reduction in x-risk for $70 billion spent (again for the current generation only). This leads to ~$1,000 per human life saved ($70 billion / 7 billion humans / 1%). If so, the meat eater problem offset ratio for the Long Term Future Fund is very roughly ~30% (~$300 offset per life saved / ~$1,000 to save a life).
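A quick sketch of the arithmetic above (all inputs are the rough figures quoted in this comment, not settled estimates):

```python
# Lifetime meat-consumption offset, from the nickel-a-day figure above
lifetime_offset_rich = 0.05 * 365 * 50   # ≈ $912.50, rounded to ~$1,000 in the text
lifetime_offset_avg = 0.30 * 1000        # ~30% of that for the average human, ~$300

# One estimate quoted above: a 1% x-risk reduction for $70 billion,
# spread over ~7 billion current humans
cost_per_life = 70e9 / 7e9 / 0.01        # ≈ $1,000 per (current-generation) life saved

offset_ratio = lifetime_offset_avg / cost_per_life   # ≈ 0.30, i.e. ~30%
print(f"Long Term Future Fund offset ratio: {offset_ratio:.0%}")
```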
Let’s apply similar logic to the Global Health EA Fund. Instead of ~$1,000 to offset a lifetime of meat consumption, let’s assume 10% of that for someone living in extreme poverty, or ~$100. GiveWell estimates that AMF can save a life for ~$3,000, leading to an offset ratio of ~3% (~$100 offset per life saved / ~$3,000 to save a life). This is two orders of magnitude larger than the figure in your comment response (0.008% to 0.04%, from $0.08 to $0.40 per $1,000). One reason might be because you’re only accounting for one year of the meat eater problem when I’ve accounted for a lifetime’s worth of impact (which I believe is the more complete counterfactual comparison). However, I’ve not had a chance to dive into your spreadsheet, so I could be misusing your results. Any corrections or reactions are much appreciated!
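The same arithmetic for the Global Health fund, alongside the per-year figures from the earlier comment (again, every number here is a rough assumption stated above):

```python
poverty_offset = 0.10 * 1000       # ~$100 lifetime offset for someone in extreme poverty
amf_cost_per_life = 3000           # GiveWell's rough AMF figure quoted above

ratio_lifetime = poverty_offset / amf_cost_per_life   # ≈ 0.033, i.e. ~3%

# The earlier comment's per-year figures: $0.08 to $0.40 of offset per $1,000
ratio_per_year_low = 0.08 / 1000    # 0.008%
ratio_per_year_high = 0.40 / 1000   # 0.04%

print(f"Lifetime-based ratio: {ratio_lifetime:.1%}")
print(f"Per-year ratio: {ratio_per_year_low:.3%} to {ratio_per_year_high:.3%}")
```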
Finally, I’m curious why you think offsetting makes little sense under utilitarianism. I’m thinking it would actually be required if one were uncertain about the conversion ratio between human and animal welfare. If we were certain about the conversion, we should just do the one intervention that’s most cost effective, in whatever domain it happens to be (human or animal). But if we are uncertain about the conversion, we need to ensure that one domain’s actions don’t inadvertently produce overall negative utility when the other domain’s consequences are summed in. In the case of saving a human life, we wouldn’t want to lower overall utility because we underestimated the meat eater problem. On the other hand, we wouldn’t want to focus only on animal welfare if it turns out human welfare is especially significant. Offsetting cross-domain spillover effects avoids this dilemma (I teach finance, where analogies include hedging different FX risks or asset-liability matching). For the meat eater problem, it ensures saving a human life does not lead to negative utility even if we find out that animal welfare is unexpectedly important. The offset trades one animal life for another, ensuring a neutral utility impact within the animal domain.
Sorry for the long reply but I’ve been worrying about the meat eater problem so found your post to be especially interesting and informative. Any response you might have would be very appreciated!
One reason might be because you’re only accounting for one year of the meat eater problem when I’ve accounted for a lifetime’s worth of impact (which I believe is the more complete counterfactual comparison).
I did that because I was only looking at one year of welfare improvement. One year for one year is simpler and more robust than comparing lifetimes. If you want to look at lifetimes, you have to scale up the welfare impacts as well.
Sorry I have little time and I’m just going to respond to the logic of offsetting right now. In utilitarianism ordinarily we maximize expected utility, so there’s no need to hedge. If two actions have the same expected utility but one has a higher % chance of having a negative outcome, they’re still equally good. Companies and investors need to protect certain interests so $2 million is less than twice as good as $1 million, but in utility terms 2 million utils is exactly twice as good as 1 million utils.
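The point can be made concrete with a toy example (purely illustrative numbers): under expected-utility maximization, a sure thing and a gamble with the same expected utility are ranked equally, with no premium for avoiding the chance of a bad outcome.

```python
# Each action is a list of (probability, utility) outcomes
action_safe = [(1.0, 100)]                 # certain 100 utils
action_risky = [(0.5, 300), (0.5, -100)]   # 50/50 gamble; can go badly

def expected_utility(action):
    return sum(p * u for p, u in action)

# Both actions have EU = 100, so an expected-utility maximizer is
# indifferent, even though the risky one has a 50% chance of a
# negative outcome -- there is nothing to hedge against.
assert expected_utility(action_safe) == expected_utility(action_risky) == 100
```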
Of course, you could deny expected utility maximization and be morally loss-averse/risk-averse, and then this would be a conversation to have. There are good arguments against doing that, though; in any case, it’s a minority view.