I’m tempted to point out that increasing the population may not increase meat consumption much via effects on prices, that meat consumption among the extreme poor is much lower than on average, that factory farm consumption is likely much smaller for beneficiaries in remote areas not reached by large scale animal agriculture. But that would be intellectually dishonest because none of those are things I strongly believe nor are they my actual reasons to disagree.
My actual reason to disagree is that I place much lower weight on animals than you, and I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative. I cannot give a tight philosophical defence of that view, but I am more committed to it than I am to giving tight philosophical defences of views. I suspect that if GiveWell were to publish a transparent argument as to why they ignore those effects, it would look similar to my argument—short and unsatisfactory to you. (Note: I work at GiveWell but this is my own view.)
AIM is a more interesting case to consider because they are clearly more cause-agnostic than GiveWell and so can’t (or wouldn’t want to) make the same claim. However, that makes for a very simple hedging/offsetting defense. Given uncertainty about moral weights and risk aversion over the amount of value created, AIM should optimally fund both GHD and AW work.
My actual reason to disagree is that I place much lower weight on animals than you, and I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative. I cannot give a tight philosophical defence of that view, but I am more committed to it than I am to giving tight philosophical defences of views. I suspect that if GiveWell were to publish a transparent argument as to why they ignore those effects, it would look similar to my argument—short and unsatisfactory to you. (Note: I work at GiveWell but this is my own view.)
I upvoted this comment for honesty, but this passage reads to me like committing to a conclusion (“saving kids from dying cannot be net negative”) and then working its way backward to reject the premise (“animals matter morally”, “saving kids from dying causes more (animal) suffering than it creates (human) welfare”) that leads to a contradictory conclusion. That seems like textbook motivated reasoning to me? It doesn’t seem like a good way of doing moral reasoning. I think it would be better to either reject the premise or to argue that the desired conclusion can follow from the premise after all.
Personally I think it’s very much not obvious whether the meat eating problem is genuine. But given that the goodness of a very large part of the EA project so far hinges on it not being real, and given that it’s far from obvious whether it’s real, I think it would be useful to make progress on that question. So I’m glad that @Vasco Grilo🔸 and others are trying to make progress on it and a little discouraged to see some pushback (from several commenters) that doesn’t really engage with Vasco’s arguments/calculations.
(It does seem like, as @Ben Millwood🔸 has commented, any harm caused to animals by donating to global health charities is much smaller than the harm of not giving to animal charities. So maybe a better and more palatable framing for the meat eating problem is not, “Is giving to global health charities net negative/positive?” but “Is giving to global health charities more/less cost-effective than giving to animal charities?”)
I don’t really route my moral reasoning through EA principles (impartiality and welfarism) and I don’t claim it is great. I own up to my moral commitments, which are undeniably based on my life experiences. I am Indian. I’m not going to be convinced that the world would be better if children around me were dead. I’m just not! If that’s motivated reasoning, then so be it.
The purpose of my comment was to engage with Vasco’s argument in the way that is most resonant with me, and I suspect with other people who prioritize GHD. You’re saying it’s discouraging that people aren’t engaging with the argument analytically. I’m saying that analytical engagement is not the only legitimate kind of engagement.
In fact, I think that when analytical disagreement is the only permitted form of disagreement, that encourages much more motivated reasoning and frustrating argumentation. Imagine I had instead made a comment questioning whether GiveWell beneficiaries are really eating factory farmed meat, and Vasco then did a bunch of careful work to estimate how much that was a concern. I would be wasting their time by making an argument that doesn’t correspond to my actual beliefs. Is that a better discursive norm?
Thanks. I take you to say roughly that you have certain core beliefs that you’re unwilling to compromise on, even if you can’t justify those beliefs philosophically. And also that you think it’s better to be upfront about that than invent justifications that aren’t really load-bearing for you. (Let me know if that’s a misrepresentation.)
I think it’s virtuous that you’re honest about why you disagree (“I place much lower weight on animals”) and I think that’s valuable for discourse in that it shows where the disagreement lies. I don’t have any objection to that. But I also think that saying you just believe that and can’t/won’t justify it (“I cannot give a tight philosophical defence of that view, but I am more committed to it than I am to giving tight philosophical defences of views”) is not particularly valuable for discourse. It doesn’t create any opening for productive engagement or movement toward consensus. I don’t think it’s harmful exactly; I just think more openness to examining whether the intuition withstands scrutiny would be more valuable.
(That is a question about discourse. I think there’s also a separate question about the soundness of the decision procedure you described in your original comment. I think it’s unsound, and therefore instrumentally irrational, but I’m not the rationality police so I won’t get into that.)
Thanks for the transparency, Karthik! I wish more people simply admitted they are not aiming to be impartial whenever they deep down think that is the case.

I think this is an alternative way of rejecting the conclusions without dropping impartiality.
I endorse moral reasoning where you start from a conclusion, and then work backwards to discover general principles.

I think this community is much more at risk of being led astray by convincing-sounding but actually incorrect arguments, as opposed to having starting assumptions that vastly limit their ability to do good (I will probably give the opposite advice to most other people).

See e.g., Epistemic learned helplessness, Memetic immune system.
Thanks, Erich.

It does seem like, as @Ben Millwood🔸 has commented, any harm caused to animals by donating to global health charities is much smaller than the harm of not giving to animal charities. So maybe a better and more palatable framing for the meat eating problem is not, “Is giving to global health charities net negative/positive?” but “Is giving to global health charities more/less cost-effective than giving to animal charities?”

Here is Ben’s comment (the link above is broken). I also like the prioritisation framing, and commented in the same post that the meat eating problem is mostly a distraction in that sense. However, it still seems worth analysing it to arrive at more accurate beliefs about the world, and because, in some hard to specify way, many value decreasing the probability of causing harm more than prioritising the most cost-effective interventions.

Thanks, I fixed the link. And the rest of your comment seems right to me.
Hi Karthik,

Your comment inspired me to write my own quick take, which is here. Quoting the first paragraph as a preview:
I feel pretty disappointed by some of the comments (e.g. this one) on Vasco Grilo’s recent post arguing that some of GiveWell’s grants are net harmful because of the meat eating problem. Reflecting on that disappointment, I want to articulate a moral principle I hold, which I’ll call non-dogmatism. Non-dogmatism is essentially a weak form of scope sensitivity.
I decided to spin off a quick take rather than replying here, because I think it would be interesting to have a discussion about non-dogmatism in a context that’s somewhat separated from this particular context, but I wanted to mention the quick take as a reply to your comment, since it’s relevant.
Thanks, Karthik.

I’m tempted to point out that increasing the population may not increase meat consumption much via effects on prices
The prices would have to increase by an unreasonable amount for this to change my conclusions. For a random person globally, and in China, India and Nigeria in 2022, to cause as much suffering to poultry birds and farmed aquatic animals as the person’s happiness, the animal suffering would have to be 6.45 % (= 1⁄15.5), 2.89 % (= 1⁄34.6), 19.3 % (= 1⁄5.17) and 43.3 % (= 1⁄2.31) of what I calculated. In addition, I have assumed no growth in the consumption of animals per capita, whereas I expect this to increase as real GDP per capita increases.
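For readers who want to check those percentages, here is a minimal sketch of the arithmetic. The only inputs are the suffering-to-happiness ratios implied by the reciprocals in the comment above (15.5 globally, 34.6 for China, 5.17 for India and 2.31 for Nigeria); nothing else is taken from the underlying model.

```python
# Minimal arithmetic check of the break-even fractions quoted above.
# Each ratio is the estimated animal suffering caused per person divided by
# that person's happiness, read off the reciprocals in the comment.
suffering_to_happiness = {
    "Global":  15.5,
    "China":   34.6,
    "India":   5.17,
    "Nigeria": 2.31,
}

for place, ratio in suffering_to_happiness.items():
    # Fraction of the estimated suffering at which it would merely offset
    # the person's happiness (the break-even point).
    break_even = 1 / ratio
    print(f"{place}: suffering would have to be {break_even:.2%} of the estimate to break even")
```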
meat consumption among the extreme poor is much lower than on average
I acknowledged the people helped by GiveWell and AIM would cause less harm to animals than random people, but I do not think this resolves the meat-eater problem.
The harms would be smaller for a random person helped by such GiveWell grants or Ambitious Impact’s organisations. I assume they have an income below that of a random person in the respective country, the supply per capita of meat excluding aquatic animals roughly increases with the logarithm of the real GDP per capita, and I guess so do the numbers of poultry birds, farmed aquatic animals excluding shrimp, and shrimp per capita. Yet, self-reported life satisfaction also roughly increases with the logarithm of the real GDP per capita. So I believe the harms to farmed animals per person increase roughly linearly with self-reported life satisfaction, at least across countries. As a result, it is unclear to me whether the harms to farmed animals as a fraction of the human benefits would be higher or lower for a random person than for a random person helped by such GiveWell grants or Ambitious Impact’s organisations.
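The step from “both quantities grow with the logarithm of real GDP per capita” to “harms grow roughly linearly with life satisfaction” can be made explicit with a small worked equation; the coefficients below are illustrative placeholders, not values estimated in the post.

```latex
% H = harm to farmed animals per person, S = self-reported life satisfaction,
% g = real GDP per capita; a, b, c, d are illustrative fit coefficients.
\[
  H \approx a + b \ln g , \qquad S \approx c + d \ln g
  \quad\Longrightarrow\quad
  H \approx a + \frac{b}{d}\,(S - c) .
\]
% So H is an affine (roughly linear) function of S: harms per person and
% self-reported life satisfaction scale together across income levels.
```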
factory farm consumption is likely much smaller for beneficiaries in remote areas not reached by large scale animal agriculture
Great point. This is the kind of consideration supporters of extending human lives would ideally investigate. Note it is also harder for life-saving interventions to reach remote areas.
My actual reason to disagree is that I place much lower weight on animals than you, and I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative.
Nitpick. I think valuing animal welfare as highly as I do implies saving lives in many countries is harmful nearterm, but the overall effect may well be either beneficial or harmful (not necessarily harmful).
AIM is a more interesting case to consider because they are clearly more cause-agnostic than GiveWell and so can’t (or wouldn’t want to) make the same claim. However, that makes for a very simple hedging/offsetting defense. Given uncertainty about moral weights and risk aversion over the amount of value created, AIM should optimally fund both GHD and AW work.
I do not see how AIM being beneficial overall justifies them starting organisations which may well be causing lots of harm nearterm.
It wasn’t my intention to throw out random objections to make you respond to them. I don’t take seriously any of the claims I offered in the first paragraph.
I think valuing animal welfare as highly as I do implies saving lives in many countries is harmful nearterm, but the overall effect may well be either beneficial or harmful (not necessarily harmful).
I would axiomatically reject the former position in addition to the latter, so this distinction doesn’t matter to me.
I do not see how AIM being beneficial overall justifies them starting organisations which may well be causing lots of harm nearterm.
In the worlds where animals have low moral weight, their GHD work is very positive. In the worlds where animals have high moral weight, their AW work is very positive. The portfolio approach is a way to maximize expected utility under risk aversion. This point is made here and I elaborate more in replies.
In the worlds where animals have low moral weight, their GHD work is very positive. In the worlds where animals have high moral weight, their AW work is very positive. The portfolio approach is a way to maximize expected utility under risk aversion. This point is made here and I elaborate more in replies.
The portfolio approach should be considered across the whole world, not AIM. There are already lots of efforts to help humans, so I have a hard time seeing how the optimal global portfolio involves AIM incubating many organisations which help humans, but may easily be causing lots of harm nearterm.
I assume your argument also depends on the type of risk aversion. I think improving the conditions of farmed animals has a much lower chance of being harmful than saving human lives.
I reject risk aversion with respect to impartial welfare (although it makes perfect sense to be risk averse with respect to money), as I do not see why the value of additional welfare would decrease with welfare.
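To make the crux of this exchange concrete, here is a minimal numeric sketch of the hedging argument: two equally likely “worlds” for animal moral weight, a made-up payoff per unit of funding for GHD and AW work in each, and a concave (risk-averse) utility over realised value. Every number and the utility function are illustrative assumptions, not estimates from this thread.

```python
import math

# Illustrative sketch only: the probability, payoffs and utility function below
# are made-up assumptions, not estimates from this thread.
p_high = 0.5  # probability that animals have high moral weight

# Hypothetical value per unit of funding in each world:
#         (animals low weight, animals high weight)
payoffs = {
    "GHD": (1.0, -0.5),  # helps humans; net negative if animals matter a lot
    "AW":  (0.1,  2.0),  # little impact if animals barely matter; large if they do
}

def utility(value: float) -> float:
    """Concave (risk-averse) utility over realised value."""
    return -math.exp(-value)

def expected_utility(share_ghd: float) -> float:
    share_aw = 1.0 - share_ghd
    value_low = share_ghd * payoffs["GHD"][0] + share_aw * payoffs["AW"][0]
    value_high = share_ghd * payoffs["GHD"][1] + share_aw * payoffs["AW"][1]
    return (1 - p_high) * utility(value_low) + p_high * utility(value_high)

# Grid search over the GHD share of the portfolio.
best_share = max((i / 100 for i in range(101)), key=expected_utility)
print(f"all GHD:  {expected_utility(1.0):.3f}")
print(f"all AW:   {expected_utility(0.0):.3f}")
print(f"best mix: {expected_utility(best_share):.3f} at GHD share {best_share:.2f}")
```

Under these assumptions an interior split beats either corner, which is the portfolio point; replace the concave utility with a linear one and the optimum jumps to whichever cause has the higher expected value, which is the move made by rejecting risk aversion with respect to impartial welfare.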
It is admirable that you acknowledge that you are not using “reason and evidence to do the most good”, and that you presumably accept you have no leg to stand on when trying to persuade nativists who assign zero weight to people in other countries to give more to those who live abroad.

I am using reason and evidence to do the most good within my circumscribed moral framework, of which I don’t aim to persuade anyone at all.
If you don’t aim to persuade anyone else to agree with your moral framework and take action along with you, you’re not doing the most good within your framework.
(Unless your framework says that any good/harm done by anyone other than yourself is morally valueless and therefore you don’t care about SBF, serial killers, the number of people taking the GWWC pledge, etc.)
Karthik could also believe that any attempt to persuade someone to do what Karthik believes is best would backfire, or that it is intrinsically wrong to persuade another person to do what Karthik believes is good, if they do not already believe the thing is good anyway. Though I agree with the general thrust of your comment.
I’m not sure what you’re looking for. I’ve made it clear that I’m not here to persuade you of my position, and I’m not going to be philosophically strongarmed into doing so. I was just trying to elaborate on a view that I suspect (and upvotes suggest) is common to other people who are not persuaded by Vasco’s argument.