Just wanted to copy MacAskill’s comment here so people don’t have to click through:
Though I was deeply troubled by the poor meat eater problem for some time, I’ve come to the conclusion that it isn’t that bad (for utilitarians; I think it’s much worse for non-consequentialists, though I’m not sure).
The basic idea is as follows. If I save the life of someone in the developing world, almost all the benefit I produce comes through compounding effects: I speed up technological progress by a tiny margin, giving us a little more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I’ve saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I’ve saved consumes meat, and because I speed up the development of the country, which means it starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn’t compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond factory farming).
So let’s say the benefit to the person from having their life saved is N. The harm from increasing factory farming might be somewhat larger in magnitude: maybe −10N. But the benefit from speeding up technological progress is vastly greater than that: 1000N, or something. So it’s still a good thing to save someone’s life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.)
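(If it helps to see that back-of-envelope comparison laid out explicitly, here is a minimal toy model of it in code. The per-century figures and time horizons are illustrative placeholders I’ve chosen so that they reproduce the −10N and 1000N numbers from the comment; they are not estimates of anything.)

```python
# Toy model of MacAskill's back-of-envelope comparison.
# All numbers are illustrative placeholders, not real estimates.

N = 1.0  # direct benefit to the person whose life is saved

# Harm: extra factory farming lasts only until meat substitutes take over,
# assumed here to be a couple of centuries.
harm_per_century = -5 * N        # hypothetical per-century harm
centuries_of_farming = 2
harm = harm_per_century * centuries_of_farming            # -10N

# Benefit: a tiny speed-up of progress compounds until civilisation ends,
# so it accrues over a vastly longer horizon.
benefit_per_century = 0.1 * N    # hypothetical, tiny per-century effect
centuries_remaining = 10_000     # hypothetical long future
speedup_benefit = benefit_per_century * centuries_remaining  # 1000N

net = N + harm + speedup_benefit
print(f"net effect ≈ {net:.0f}N")  # ≈ 991N, still positive
```

The point is structural rather than about the exact numbers: the harm is bounded by a few centuries of factory farming, while the benefit compounds over the whole remaining future, so under these assumptions the net effect stays positive.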
This is informative; I strongly upvoted. A few comments, though:
I find it fine to ask what the expected value of doing X or Y is as a function of their consequences, whether the frame is longtermism or animal welfare.
I would find it very morally unappealing to refuse to save lives on the grounds of convicting people of actions they have not yet committed. E.g., if a child is drowning in front of you, I think it would be wrong to let her drown because she might go on to cause animal suffering. A person can make her own decisions, and I would find it wrong to let her die because of what her statistical group does.
You may be interested to read some of MacAskill’s older writing on the subject: https://www.lesswrong.com/posts/FCiMtrsM8mcmBtfTR/?commentId=9abk4EJXMtj72pcQu
Thanks MHR!