Some other potentially useful references for this debate:
Emily Oehlsen’s/Open Phil’s response to Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, the thread that follows, and other comments there.
How good is The Humane League compared to the Against Malaria Foundation? by Stephen Clare and AidanGoth for Founders Pledge (using old cost-effectiveness estimates).
Discussion of the two envelopes problem for moral weights (can get pretty technical):
Tomasik, 2013-2018
Karnofsky, 2018, section 1.1
St. Jules, 2024 (that’s me!)
GiveWell’s marginal cost-effectiveness estimates for their top charities, of course
Some recent-ish (mostly) animal welfare intervention cost-effectiveness estimates:
Track records of Charity Entrepreneurship-incubated charities (animal and global health)
Charity Entrepreneurship prospective animal welfare reports and global health reports
Charity Entrepreneurship Research Training Program (2023) prospective reports on animal welfare with cost-effectiveness estimates: Intervention Report: Ballot initiatives to improve broiler welfare in the US by Aashish K and Exploring Corporate Campaigns Against Silk Retailers by Zuzana Sperlova and Moritz Stumpe
Electric Shrimp Stunning: a Potential High-Impact Donation Opportunity by MHR
Prospective cost-effectiveness of farmed fish stunning corporate commitments in Europe by Sagar K Shah for Rethink Priorities
Estimates for some Healthier Hens intervention ideas (and a comment thread)[1]
Emily Oehlsen’s/Open Phil’s response above
Animal welfare cost-effectiveness estimates based on older intervention work:
Corporate campaigns affect 9 to 120 years of chicken life per dollar spent by saulius for Rethink Priorities
A Cost-Effectiveness Analysis of Historical Farmed Animal Welfare Ballot Initiatives by Laura Duffy for Rethink Priorities
Megaprojects for animals by JamesÖz and Neil_Dullaghan🔹
Meat-eater problem and related posts
Wild animal effects of human population and diet change:
How Does Vegetarianism Impact Wild-Animal Suffering? by Brian Tomasik and his related posts
Does the Against Malaria Foundation Reduce Invertebrate Suffering? by Brian Tomasik
Finding bugs in GiveWell’s top charities by Vasco Grilo🔸
My recent posts on fishing: Sustainable fishing policy increases fishing, and demand reductions might, too; and The moral ambiguity of fishing on wild aquatic animal populations.
[1] Healthier Hens is shutting down or has already shut down, according to the Charity Entrepreneurship Newsletter. Their website is also down.
Thanks, Michael! Here are a few more posts:
Founders Pledge’s Climate Change Fund might be more cost-effective than GiveWell’s top charities, but it is much less cost-effective than corporate campaigns for chicken welfare?, where I Fermi estimate corporate campaigns for chicken welfare are 1.51 k times as cost-effective as GiveWell’s top charities.
Cost-effectiveness of buying organic instead of barn eggs, where I Fermi estimate that buying organic instead of barn eggs in the European Union is 2.11 times as cost-effective as GiveWell’s top charities.
Cost-effectiveness of School Plates, where I Fermi estimate that School Plates[1] is 60.2 times as cost-effective as GiveWell’s top charities.
Farmed animals are neglected, where I conclude the annual disability of farmed animals is much larger than that of humans, whereas the annual funding helping farmed animals is much smaller than that helping humans.
[1] A program aiming to increase the consumption of plant-based foods at schools and universities in the United Kingdom (UK).
I’d also add the cluelessness critique as relevant reading. I think it’s a problem for global health interventions, although I realize that one could also argue that it is a problem for animal welfare interventions. In any case it seems highly relevant for this debate.
This critique seems to me to be applicable to the entire EA project.
Probably far beyond as well, right? There’s nothing distinctive about EA projects that makes them more subject to potential far-future bad consequences we don’t know about. And even (sane) non-consequentialists should care about consequences amongst other things, even if they don’t care only about consequences.
I dispute this. I’m admittedly not entirely sure, but here is my best explanation.
A lot of EA interventions involve saving lives, which influences the number of people who will live in the future. This in turn, we know, will influence the following (to give just two examples):
The number of animals who will be killed for food (i.e. impacting animal welfare).
CO2 emissions and climate change (i.e. impacting the wellbeing of humans and wild animals in the future).
Importantly, we don’t know the sign and magnitude of these “unintended” effects (partly because we don’t in fact know if saving lives now causes more or fewer people in the future). But we do know that these unintended effects will predictably happen and that they will swamp the size of the “intended” effect of saving lives. This is where the complex cluelessness comes in. Considering predictable effects (both intended and unintended), we can’t really weigh them. If you think you can weigh them, then please tell me more.
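To make the structure of that worry concrete, here is a toy sketch in Python. Every number is a made-up assumption for illustration, not an estimate from this thread; the point is only that a plausible flip in the sign of the population effect flips the sign of the total impact.

```python
# A toy model of complex cluelessness (all numbers are illustrative
# assumptions): the sign of the total impact of saving a life flips
# with the unknown direction of the population effect.

direct_benefit = 1.0  # one life saved, in arbitrary welfare units

# Two live hypotheses about how saving a life changes future population;
# the comment above notes we don't know which is right.
population_scenarios = {
    "more future people": +1.0,
    "fewer future people": -0.5,
}

# Assumed knock-on effects per additional future person (same units).
farmed_animal_effect = -2.0  # more people -> more animals farmed for food
climate_effect = -0.5        # more people -> more CO2 emissions

for name, delta_pop in population_scenarios.items():
    total = direct_benefit + delta_pop * (farmed_animal_effect + climate_effect)
    print(f"{name}: total impact = {total:+.2f}")

# Prints -1.50 in one scenario and +2.25 in the other: the unintended
# effects swamp the intended one, and with no way to weigh the scenarios,
# the overall sign is unresolved.
```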
So I think it’s the saving lives that really gets us into a pickle here—it leads to so much complexity in terms of predictable effects.
There are some EA interventions that don’t involve saving lives and don’t seem to me to run into a cluelessness issue, e.g. expanding our moral circle through advocacy, building AI governance structures to (for instance) promote global cooperation, and global priorities research. I don’t think these interventions run into the complex cluelessness issue because, in my opinion, it seems easy to say that the expected positives outweigh the expected negatives. I explain this a little more in this comment chain.
Also, note that under Greaves’ model there are types of cluelessness that are not problematic, which she calls “simple cluelessness”. An example is if we are deciding whether to conceive a child on a Tuesday or a Wednesday. Any chance that one of the options might have some long-run positive or negative consequence will be counterbalanced by an equal chance that the other will have that consequence. In other words there is evidential symmetry across the available choices.
I think we will have simple cluelessness (rather than complex cluelessness) about a lot of “non-EA” altruistic actions, in large part because they don’t involve saving lives and are often on quite a small scale, so they aren’t going to predictably influence things like economic growth. For example, with giving food to a soup kitchen, other than helping people who need food, it isn’t at all predictable what the other unintended effects will be, so we have evidential symmetry and can ignore them. In short, because a lot of “non-EA” altruistic actions have no predictable unintended effects, I don’t think they run us into the cluelessness issue.
I need to think about this more but would welcome thoughts.
You don’t think a lot of non-EA altruistic actions involve saving lives??
Yes but if I were to ask my non-EA friends what they give to (if they give to anything at all) they will say things like local educational charities, soup kitchens, animal shelters etc. I do think EA generally has more of a focus on saving lives.
Yes, I agree with this.
I’m not so sure about that. The link above argues longtermism may evade cluelessness (which I also discuss here) and I provide some additional thoughts on cause areas that may evade the cluelessness critique here.
I am pretty unmoved by this distinction, and based on the link above, it seems that Greaves is really just making the point that a longtermist mindset incentivizes us to find robustly good interventions, not that it actually succeeds. I think it’s pretty easy to make the cluelessness case about AI alignment as a cause area, for example. It seems quite plausible to me that a lot of so-called alignment work is actually just serving to speed up capabilities. It also seems to me that you could align an AI to human values and find that human values are quite bad. Or you could successfully align AI enough to avoid extinction and find that the future is astronomically bad and extinction would have been preferable.
Just to make sure I understand your position—are you saying that the cluelessness critique is valid and that it affects all altruistic actions? So Effective Altruism and altruism generally are doomed enterprises?
I don’t buy that we are clueless about all actions. For example, I would say that something like expanding our moral circle to all sentient beings is robustly good in expectation. You can of course come up with stories about why it might be bad, but these stories won’t be as forceful as the overall argument that a world that considers the welfare of all beings (that have the capacity for welfare) important is likely better than one that doesn’t.
The point that I was initially trying to make was only that I don’t think the generalized cluelessness critique particularly favors one cause (for example, animal welfare) over another (for example, human health), or vice versa. I think you might make specific arguments about uncertainty regarding particular causes or interventions, but pointing to a general sense of uncertainty does not really move the needle towards any particular cause area.
Separate from that point, I do sort of believe in cluelessness (moral and otherwise) more generally, but honestly just try to ignore that belief for the most part.
I see where Greaves is coming from with the longtermist argument. One way to avoid the complex cluelessness she describes is to ensure the direct/intended expected impact of your intervention is sufficiently large so as to swamp the (foreseeable) indirect expected impacts. Longtermist interventions target astronomical/very large value, so they can in theory meet this standard.
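Here is a minimal sketch of that swamping condition, with magnitudes that are entirely assumed for illustration:

```python
# If the direct expected impact dwarfs any bound on the foreseeable
# indirect impacts, the sign of the overall effect no longer depends
# on the sign of the indirect effects.

direct_ev = 1e15      # assumed: tiny probability shift x astronomical stakes
indirect_bound = 1e6  # assumed: largest plausible indirect effect, sign unknown

# Even with the indirect effects maximally negative, the total stays positive.
print(direct_ev - indirect_bound > 0)  # True
```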
I’m not claiming all longtermist interventions avoid the cluelessness critique. I do think you need to consider interventions on a case-by-case basis. But I think there are some fairly general things we can say. For example, the issue with global health interventions is that they pretty much all involve increasing the consumption of animals by saving human lives, so you have a negative impact there which is hard to weigh against the benefits of saving a life. You don’t have this same issue with animal welfare interventions.
I think about the meat eater problem as pretty distinct from general moral cluelessness. You can estimate how much additional meat people will eat as their incomes increase or as they continue to live. You might be highly uncertain about weighing animals vs. humans as moral patients, but that is also something you can pretty directly debate, and see the implications of different weights. I think of cluelessness as applying only when there are many, many possible consequences that could be highly positive or negative, and it’s nearly impossible to discuss or attempt to quantify them because the dimensions of uncertainty are so numerous.
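As a toy illustration of the “you can estimate” point, here is a minimal Fermi sketch in which every input is a placeholder assumption rather than a sourced figure:

```python
# A minimal Fermi sketch of the estimable part of the meat eater problem:
# additional animals consumed when an intervention extends one human life.
# All inputs below are placeholder assumptions for illustration.

extra_life_years = 30           # assumed years of life added
meat_kg_per_year = 20           # assumed current consumption per person-year
income_growth_multiplier = 1.5  # assumed rise in consumption with income
kg_per_animal = 2.0             # assumed, e.g. if consumption is mostly chicken

extra_animals = (extra_life_years * meat_kg_per_year
                 * income_growth_multiplier / kg_per_animal)
print(f"~{extra_animals:.0f} additional animals consumed")  # ~450

# Each input is uncertain but estimable and can be debated directly,
# which is the contrast with complex cluelessness drawn above.
```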
You’re right that the issue at its core isn’t the meat eater problem. The bigger issue is that we don’t even know if saving lives now will increase or decrease future populations (there are difficult arguments on both sides). If we don’t even know that, then we are going to be at a complete loss to try to conduct assessments on animal welfare and climate change, even though we know there are going to be important impacts here.
Great list (and thanks for the shoutout)!
I would add @Laura Duffy’s How Can Risk Aversion Affect Your Cause Prioritization? post
Thank you! These are great. I’ll link the comment in the text.