Animal welfare issues are plausibly getting worse, not better, so I’d be wary of assuming they won’t be an issue in the future. As the world develops and eats more meat, Compassion in World Farming estimates that the number of factory-farmed land animals killed annually could increase by 50% over the next 30 years. Assuming people’s expanding moral circle will reverse this trend is risky when the animal welfare movement has progressed little over the past few decades (the number of vegetarians in the US has been flat; there have been some animal welfare legislative victories but also setbacks like ag-gag laws). Innovations like clean meat could help, but they are still early, and there are also ways technology could make things even worse. Assuming animal welfare issues remain roughly as they are now (neither deteriorating nor improving) seems to me a plausible and more responsible projection.
If so, for the Long Term Future EA Fund, let’s assume the Animal Welfare EA Fund “offset ratio” (to account for the meat eater problem) is the same for future generations as for the current one. Based on your blog’s estimate of a nickel a day, it costs ~$1000 to offset a person’s lifetime of meat consumption ($0.05/day x 365 days/year x 50 years). It seems your estimate is for people living in rich countries, though, so maybe 30% of that, or ~$300, is more applicable to the average human. This can be compared to the Long Term Future Fund’s expected cost-effectiveness of saving a human life (for just the current generation). I’ve seen one estimate that assumes a 1% reduction in x-risk for $70 billion spent (again for the current generation only). That works out to ~$1000 per human life saved ($70 billion / 7 billion humans / 1%). If so, the meat eater problem offset ratio for the Long Term Future Fund is very roughly ~30% (~$300 offset per life saved / ~$1000 to save a life).
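A quick sketch of that arithmetic (all inputs are the rough figures quoted above, so treat the output as order-of-magnitude at best):

```python
# Back-of-the-envelope offset ratio for the Long Term Future Fund.
# All inputs are rough estimates from the discussion above.

lifetime_offset_rich = 0.05 * 365 * 50   # a nickel/day for ~50 years: ~$912, call it ~$1000
lifetime_offset_avg = 0.30 * 1000        # ~30% of that for the average human: ~$300

# Assumption from above: $70 billion buys a 1% reduction in x-risk,
# spread over ~7 billion people (current generation only).
cost_per_life_saved = 70e9 / 7e9 / 0.01  # ~$1000 per life saved

offset_ratio = lifetime_offset_avg / cost_per_life_saved
print(f"offset ratio: {offset_ratio:.0%}")  # roughly 30%
```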
Let’s apply similar logic to the Global Health EA Fund. Instead of ~$1000 to offset a lifetime of meat consumption, let’s assume 10% of that for someone living in extreme poverty, or ~$100. GiveWell estimates that AMF can save a life for ~$3000, leading to an offset ratio of ~3% (~$100 offset per life saved / ~$3000 to save a life). This is two orders of magnitude larger than the figure in your comment response (0.008% to 0.04%, from $0.08 to $0.40 / $1000). One reason might be that you’re only accounting for one year of the meat eater problem, whereas I’ve accounted for a lifetime’s worth of impact (which I believe is the more complete counterfactual comparison). However, I’ve not had a chance to dive into your spreadsheet, so I could be misusing your results. Any corrections or reactions are much appreciated!
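The same sketch for the Global Health Fund, including the comparison to your comment’s range (again, my rough numbers, so only the size of the gap is meaningful):

```python
# Back-of-the-envelope offset ratio for the Global Health Fund (AMF),
# using the rough figures above.

lifetime_offset_poverty = 0.10 * 1000  # ~10% of the rich-country offset: ~$100
amf_cost_per_life = 3000               # GiveWell's rough AMF estimate

offset_ratio = lifetime_offset_poverty / amf_cost_per_life  # ~3.3%
gap_vs_high_end = offset_ratio / 0.0004                     # vs. the 0.04% high end

print(f"offset ratio: {offset_ratio:.1%}, gap: ~{gap_vs_high_end:.0f}x")  # ~83x
```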
Finally, I’m curious why you think offsetting makes little sense under utilitarianism. I’d argue it would actually be required if one were uncertain about the conversion ratio between human and animal welfare. If we were certain about the conversion, we should just do the single most cost-effective intervention, in whatever domain it happens to be (human or animal). But if we are uncertain about the conversion, we need to ensure that one domain’s actions don’t inadvertently produce overall negative utility once the other domain’s consequences are summed in. In the case of saving a human life, we wouldn’t want to lower overall utility because we underestimated the meat eater problem. On the other hand, we wouldn’t want to focus solely on animal welfare if it turns out human welfare is especially significant. Offsetting cross-domain spillover effects avoids this dilemma (I teach finance, where analogies include hedging different FX exposures or asset-liability matching). For the meat eater problem, offsetting ensures that saving a human life does not lead to negative utility even if animal welfare turns out to be unexpectedly important: the offset trades one animal life for another, ensuring a neutral utility impact within the animal domain.
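To make the hedging intuition concrete, here’s a toy model (my own construction, with made-up numbers): saving a human life yields one unit of human welfare but imposes some animal-welfare cost D via the meat eater problem, and an offset donation prevents D units of animal suffering elsewhere. The uncertain quantity is the conversion ratio w between animal-welfare and human-welfare units.

```python
def net_utility(w, offset=False):
    """Net utility of saving one human life, given conversion ratio w
    (human-welfare units per animal-welfare unit). Toy numbers."""
    human_gain = 1.0
    animal_harm = -2.0                      # D: assumed meat-eater-problem cost
    animal_offset = 2.0 if offset else 0.0  # the offset prevents the same D elsewhere
    return human_gain + w * (animal_harm + animal_offset)

for w in (0.01, 0.1, 1.0):  # wide uncertainty about the conversion ratio
    print(w, net_utility(w), net_utility(w, offset=True))
# Without the offset, net utility turns negative at high w;
# with the offset, it stays at +1.0 for every w.
```

The point is that the offset makes the net result insensitive to w, just as an FX hedge makes a portfolio insensitive to the exchange rate.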
Sorry for the long reply, but I’ve been worrying about the meat eater problem, so I found your post especially interesting and informative. Any response you might have would be very appreciated!
> One reason might be because you’re only accounting for one year of the meat eater problem when I’ve accounted for a lifetime’s worth of impact (which I believe is the more complete counterfactual comparison).
I did that because I was only looking at one year of welfare improvement. One year for one year is simpler and more robust than comparing lifetimes. If you want to look at lifetimes, you have to scale up the welfare impacts as well.
Sorry, I have little time, so I’m just going to respond to the logic of offsetting right now. Under utilitarianism we ordinarily maximize expected utility, so there’s no need to hedge. If two actions have the same expected utility but one has a higher chance of a negative outcome, they’re still equally good. Companies and investors need to protect certain interests, so $2 million is less than twice as good as $1 million; but in utility terms, 2 million utils is exactly twice as good as 1 million utils.
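A minimal illustration of that point (my own numbers): a sure thing and a risky gamble with the same expected utility are ranked exactly equally under expected-utility maximization, despite the gamble’s downside.

```python
# Action A: a guaranteed +10 utils.
# Action B: 50% chance of +110 utils, 50% chance of -90 utils.
eu_a = 10.0
eu_b = 0.5 * 110 + 0.5 * (-90)  # = 55 - 45 = 10.0

# Expected-utility maximization ranks A and B as exactly equal,
# even though B has a 50% chance of a badly negative outcome.
print(eu_a, eu_b, eu_a == eu_b)
```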
Of course, you could deny expected utility maximization and be morally loss-averse or risk-averse, and then this would be a conversation worth having. There are good arguments against doing that, though, and in any case it’s a minority view.
Thanks for your response, kbog!