I read “A Christian Critique of the Effective Altruism Approach to Animal Philanthropy” as a sampling. I picked it simply because it piqued my interest. I don’t know whether it’s representative of the book as a whole. Some thoughts …
This essay is clearly not aimed at me, since it critiques EA from the point of view of Christian ethics, and while there are definitely Christian EAs, I personally find Christianity (and by extension, Christian ethics) highly implausible. I also find deontology and consequentialism sounder than virtue ethics. So it’s no surprise that I find the author’s worldview in the essay unconvincing, but the essay also presents arguments that don’t really rely on Christianity being true, which I’ll get to in a bit.
The essay proceeds roughly along these lines:
EA is founded on utilitarianism, and utilitarianism has issues.
In particular, there are problems when applying EA to animal advocacy.
The author’s Christian ethical framework is superior to the EA framework when deciding where to donate money to help animals.
Now, as for what I think about it …
First, on utilitarianism.
The essay states: “Effective Altruism is founded on utilitarianism, and utilitarianism achieves simplicity in its consideration of only one morally relevant aspect of a situation. [...] The heart of what’s wrong with Effective Altruism is a fundamental defect of utilitarianism: there are important morally relevant features of any situation that are not reducible to evaluating the results of actions and are not measurable or susceptible to calculation. This makes it inevitable that features of a situation to which numbers can be assigned are exaggerated in significance, while others are neglected.”
This is the key argument that the essay presents against EA: utilitarianism is wrong since it dismisses non-welfarist goods, and therefore EA is wrong since it’s a subset of utilitarianism.
I take this to be an argument about the philosophy of EA, not about the way it’s practiced. But IMO it’s false to say that EA is founded on utilitarianism (assuming we take “founded” to mean “philosophically grounded in” rather than “established alongside”). I think the premises EA relies on are weaker than that; they’re something more like beneficentrism: “The view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.”
This ends up mattering, because it means that EA can be practiced perfectly well while accepting deontic constraints, or non-welfarist values. I reckon you just need to think it’s good to promote the good (this works for many different, though not all, definitions of the good), and to actually put that into practice and do it effectively.
There’s no point in re-litigating the soundness of utilitarianism here; though I lean deontological, as mentioned, I find consequentialism (and utilitarianism) more plausible than Christian and/or virtue ethics. Anyway, I think even if utilitarianism were wrong or bad, EA would still be good and right, on grounds similar to beneficentrism.
Second, on measuring and comparing.
The essay argues that, though EAs love quantifying and measuring things, and then comparing things in light of that, this is a false promise: “All [EA] is doing is taking one measurable feature of a situation and representing it as maximal effectiveness. A Christian ethical analysis of making decisions about spending money, or anything else, would always be concerned to bring due attention to all the ethical moving parts.”
With animals in particular, it’s extremely hard to compare different kinds of good, and we should take a pluralistic approach to doing so: “How do you decide between supporting an animal sanctuary offering the opportunity for previously farmed animals to live out the remainder of their lives in comfort, or a campaign to require additional environmental enrichment in broiler chicken sheds, or the promotion of plant-based diets? Each is likely to have beneficial impacts on animals, but they are of very different kinds. The animal sanctuary is offering current benefits to the particular group of animals it’s looking after. If successful, the broiler chicken campaign is likely to affect many more animals, but with a smaller impact on each. If the promotion of plant-based diets is successful on a large scale, it could reduce the demand for broiler chickens together with other animal products, but it might be hard to demonstrate the long-term effects of a particular campaign.”
For example, giving to the farm sanctuary provides a lot of good that isn’t easily measurable: “People have the experience of coming to a farmed animal sanctuary and encountering animals that are not being used in production systems. They have an opportunity to recognize the particularities of the animals’ lives, such as what it means for this kind of animal to flourish. This encounter might well be transformative in the person’s understanding of their relationship with farmed animals.” And a farm sanctuary may better allow humans to develop their virtue: “It would be hard to measure the effectiveness of that kind of education and character development in Effective Altruism terms.”
As an aside, here’s an issue I have with virtue ethics. I think it’s perverse to think that doing something good for an animal (or human) is good because it allows one to develop one’s virtue. Surely it’s good to save animals from the horrific suffering they’re subjected to in factory farms for the sake of the animals themselves, and the important thing here is what happens to them, what’s good and bad for those whose suffering cries out that we do something?
So when I read: “If [...] you take the shortcut of just getting people to buy plant-based meat because it tastes good or costs less, as soon as either of those things change in a particular context and it becomes advantageous for people to behave in ways that result in bad treatment of animals, they have no reason to do otherwise.” I can’t help but think, Well, if I’m a pig in a factory farm, I probably don’t give a fuck whether people stop eating meat because they prefer the taste of Impossible Pork or because they Saw The Light, I just want to get out of my shit-filled seven-by-two-feet gestation crate!
(Of course, if getting people to See The Light is the best way of getting fewer sows in gestation crates, I think EAs would happily endorse that strategy! That’s just an empirical question. But it’s quite a different thing to say that getting people to See The Light is better even though it leads to more pigs in gestation crates.)
Next, the author presents the systemic change argument against EA. In particular, the essay argues that EA’s focus on measurements and data (1) causes EAs to be short-sighted, focusing on small, measurable wins at the expense of large, hard-to-measure wins, and (2) causes EAs to ignore or miss harder-to-measure second-order effects.
(The author does write that EAs could just do the better thing if there’s a better thing to do. But this won’t help, because EA’s definition of “better” is lacking: it still dismisses (writes the author) all non-welfarist goods.)
I don’t want to rehash that debate here as it’s already been discussed at length elsewhere.
Third, the author presents an alternative to EA.
Don’t get your hopes up, though. “The bad news is that there is no simple alternative Christian procedure for identifying the best options for giving.”
Nonetheless, the author ventures three thoughts …
First, you should trust your judgment: “Do not be tempted by claims of Effective Altruism or any other scheme to offer an objective rational basis for your decision. This is complicated stuff. It is much more complicated than any decision-making system can deal with. Your own commitments are likely to be a better initial basis for decision-making than any claimed objective system.”
This seems basically like “trust your intuition / don’t listen to others” to me, but I think people’s intuition is often wrong and inconsistent, that listening to others allows you to form better views, and that if you care about achieving some goal (e.g. helping animals), you really should look at the evidence and use reason (though your intuitions are also evidence).
Second, remember that the most salient cause isn’t necessarily the best: “It is easy to get the public to be concerned about big fluffy animals like pandas that they’ve seen in nature documentaries and who live far away. It is harder to get people interested in the farmed animals who live in warehouses not far away but hidden from view.”
I, and I’d imagine all EAs, agree with this one! I also think it’s in tension with the first suggestion: often people’s commitments and personal judgments are closely connected with what they’ve been exposed to, because why wouldn’t they be?
Third, don’t ask for too much: “It is unhelpful to think that you are searching for the single most effective way your money can be used. Instead, you are looking for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals.”
I guess this may be true (though depressing) if it’s true that we’re clueless and can’t compare causes. For reasons mentioned above, I think we can (and must) compare, but I get why the author ends up here given their other beliefs.
Going back to relying just on intuition and not listening to others would also seem pretty unvirtuous (unwise/imprudent) to me, but (without having read the chapter), I doubt the author would go that far, given his advice to look “for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals”. I would also guess he doesn’t mean you should never question your priorities (or moral intuitions) or investigate where specific lines of moral reasoning lead.
I think he’s mostly skeptical about relying primarily on one particular system, especially any simple one, because it would be likely to miss so much of what matters and so cause harm or miss out on doing better. But I think this is something that has been expressed before by EAs, including people at Open Phil, typically with respect to worldview diversification:
(E.g. the train to crazy town) https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/
https://forum.effectivealtruism.org/posts/8wWYmHsnqPvQEnapu/getting-on-a-different-train-can-effective-altruism-avoid
https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous
“Alexander Berger: And I think part of the perspective is to say look, I just trust philosophy a little bit less. So the fact that something might not be philosophically rigorous…I’m just not ready to accept that as a devastating argument against it.” https://80000hours.org/podcast/episodes/alexander-berger-improving-global-health-wellbeing-clear-direct-ways/
However, it seems EAs are willing to give much greater weight to philosophical arguments and the recommendations of specific systems.
On virtue ethics (although to be clear, I’ve read very little about virtue ethics, so may be way off), another way we might think about this is that the virtue of charity, say, is one of the ways you capture others mattering. You express and develop the virtue of charity to help others, precisely because other people and their struggles matter. It’s good for you, too, but it’s good for you because it’s good for others, like how satisfying your other-regarding preferences is good for you. Getting others to develop the virtue of charity is also good for them, but it’s good for them because it’s good for those that stand to be helped.
The argument you make against virtue ethics is also similar to an argument I’d make against non-instrumental deontological constraints (and I’ve also read very little about deontology): such constraints seem like a preoccupation with keeping your own hands clean instead of doing what’s better for moral patients. And helping others abide by these constraints, similar to developing others’ virtues, seems bad if it leads to worse outcomes for others. But all of this is supposed to capture ways others matter.
And more generally, why would it be better (or even sometimes obligatory) to do something that’s worse for others overall than an alternative?
Going back to relying just on intuition and not listening to others would also seem pretty unvirtuous (unwise/imprudent) to me, but (without having read the chapter), I doubt the author would go that far, given his advice to look “for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals”. I would also guess he doesn’t mean you should never question your priorities (or moral intuitions) or investigate where specific lines of moral reasoning lead.
I think he’s mostly skeptical about relying primarily on one particular system, especially any simple one, because it would be likely to miss so much of what matters and so cause harm or miss out on doing better.
Yeah that makes sense to me. My original reading was probably too uncharitable. Though when I read zchuang’s observation further up …
I think the book is targeted at an imagined left-wing young person who the authors think would be “tricked” into EA because they misread certain claims that EA puts forward. It’s a form of memeplex competition.
I now feel like maybe the author isn’t warning readers about the perils of focusing on any particular worldview, but specifically about worldviews like EA, which often take one measure and optimise it in practice (even if the philosophy permits a pluralistic view of value).
It does seem like their approach would have the effect of making people defer less, or of biasing them towards their original views and beliefs, though? Here’s the full paragraph:
First, you have more reason to trust your judgments than you assume. What motivates you to give to make things better for animals? What kinds of mistreatment of animals are you most concerned about? Of the many kinds of activities benefitting animals, which are you most drawn to? Reflect on your priorities as a starting point. Do not be tempted by claims of Effective Altruism or any other scheme to offer an objective rational basis for your decision. This is complicated stuff. It is much more complicated than any decision-making system can deal with. Your own commitments are likely to be a better initial basis for decision-making than any claimed objective system.
And on this …
On virtue ethics (although to be clear, I’ve read very little about virtue ethics, so may be way off), another way we might think about this is that the virtue of charity, say, is one of the ways you capture others mattering. You express and develop the virtue of charity to help others, precisely because other people and their struggles matter. It’s good for you, too, but it’s good for you because it’s good for others, like how satisfying your other-regarding preferences is good for you. Getting others to develop the virtue of charity is also good for them, but it’s good for them because it’s good for those they’ll help.
Yeah sure, though I don’t think this really gets around the objection (at least not for me—it’s based on intuition, after all). Even if you build character in this way in order to help people/animals in the future, it’s still the case that you’re not helping the animals you’re helping for their own sake, you’re doing it for some other reason. Even if that other reason is to help other animals in the future, that still feels off to me.
The argument you make against virtue ethics is also similar to an argument I’d make against non-instrumental deontological constraints (and I’ve also read very little about deontology): such constraints seem like a preoccupation with keeping your own hands clean instead of doing what’s better for moral patients. And helping others abide by these constraints, similar to developing others’ virtues, seems bad if it leads to worse outcomes for others. But all of this is supposed to capture ways others matter.
I think this is a pretty solid objection, but I see two major differences between deontology and virtue ethics (disclaimer: I haven’t read much about virtue ethics either so I could be strawmanning it) here:
Deontological duties are actually rooted in what’s good/bad for the targets of actions, whereas (in theory at least) the best way of building virtue could be totally disconnected from what’s good for people/animals? (The nature of the virtue itself couldn’t be disconnected, just the way you come by it.) E.g. maybe the best way of building moral character is to step into a character-building simulator rather than going to an animal sanctuary? It feels like (and again I stress my lack of familiarity) a virtue ethicist comes up with what’s virtuous by looking at the virtue-haver (and of course what happens to others can affect that, but what goes on inside the virtue-haver seems primary), whereas a deontologist comes up with duties by looking at what’s good/bad for those affected (and what goes on inside them seems primary).
Kantianism in particular has an injunction against using others as mere means, making it impossible to make moral decisions without considering those affected by the decision. (Though, yeah, I know there are trolley-like situations where you kind of privilege the first-order affected over the second-order affected.)
Edit: Also, with Kant, in particular, my impression is that he doesn’t go, “I’ve done this abstract, general reasoning and came to the conclusion that lying is categorically wrong, so therefore you should never lie in any particular instance”, but rather “in any particular instance, we should follow this general reasoning process (roughly, of identifying the maxim we’re acting according to, and seeing if that maxim is acceptable), and as it happens, I note that the set of maxims that involve lying all seem unacceptable”. Not sure if I’m communicating this clearly …
I would expect that living your life in a character-building simulator would itself be unvirtuous. You can’t actually express most virtues in such a setting, because the stakes aren’t real. Consistently avoiding situations where there are real stakes seems cowardly, imprudent, uncharitable, etc. Spending some time in such simulators could be good, though.
On Kantianism, would trying to persuade people to not harm animals or to help animals mean using those people as mere means? Or, as long as they aren’t harmed, it’s fine? Or, as long as you’re not misleading them, you’re helping them make more informed decisions, which respects and even promotes their agency (even if your goal is actually not this, but just helping animals, and you just avoid misleading in your advocacy). Could showing people factory farm or slaughterhouse footage be too emotionally manipulative, whether or not that footage is representative? Should we add the disclaimer to our advocacy that any individual abstaining from animal products almost certainly has no “direct” impact on animals through this? Should we be more upfront about the health risks of veganism (if done poorly, which seems easy to do)? And add various other disclaimers and objections to give a less biased/misleading picture of things?
Could it be required that we include these issues with all advocacy, to ensure no one is misled into going vegan or becoming an advocate in the first place?
I would expect that living your life in a character-building simulator would itself be unvirtuous. You can’t actually express most virtues in such a setting, because the stakes aren’t real. Consistently avoiding situations where there are real stakes seems cowardly, imprudent, uncharitable, etc. Spending some time in such simulators could be good, though.
Yes, I imagined spending some time in a simulator. I guess I’m making the claim that, in some cases at least, virtue ethics may identify a right action but seemingly without giving a good (IMO) account of what’s right or praiseworthy about it.
On Kantianism, …
There are degrees of coercion, and I’m not sure whether to think of that as “there are two distinct categories of action, the coercive and the non-coercive, but we don’t know exactly where to draw the line between them” or “coerciveness is a continuous property of actions; there can be more or less of it”. (I mean by “coerciveness” here something like “taking someone’s decision out of their own hands”, and IMO taking it as important means prioritising, to some degree, respect for people’s (and animals’) right to make their own decisions over their well-being.)
So my answer to these questions is: It depends on the details, but I expect that I’d judge some things to be clearly coercive, others to be clearly fine, and to be unsure about some borderline cases. More specifically (just giving my quick impressions here):
On Kantianism, would trying to persuade people to not harm animals or to help animals mean using those people as mere means? Or, as long as they aren’t harmed, it’s fine? Or, as long as you’re not misleading them, you’re helping them make more informed decisions, which respects and even promotes their agency (even if your goal is actually not this, but just helping animals, and you just avoid misleading in your advocacy).
I think it depends on whether you also have the person’s interests in mind. If you do it e.g. intending to help them make a more informed or reasoned decision, in accordance with their will, then that’s fine. If you do it trying to make them act against their will (for example, by threatening or blackmailing them, or by lying or withholding information, such that they make a different decision than had they known the full picture), then that’s using as a mere means. (A maxim always contains its ends, i.e. the agent’s intention.)
Could showing people factory farm or slaughterhouse footage be too emotionally manipulative, whether or not that footage is representative?
Yeah, I think it could, but I also think it could importantly inform people of the realities of factory farms. Hard to say whether this is too coercive, it probably depends on the details again (what you show, in which context, how you frame it, etc.).
Should we add the disclaimer to our advocacy that any individual abstaining from animal products almost certainly has no “direct” impact on animals through this?
Time for a caveat: I’d never have the audacity to tell people (such as yourself) in the effective animal advocacy space what’s best to do there, and anyway I give some substantial weight to utilitarianism. So what precedes and follows this paragraph aren’t recommendations or anything, nor my all-things-considered view, just what I think one Kantian view might entail.
By “direct impact”, you mean you won’t save any specific animal by e.g. going vegan, you’re just likely preventing some future suffering—something like that? Interesting, I’d guess not disclosing this is fine, due to a combination of (1) people probably don’t really care that much about this distinction, and think preventing future suffering is ~just as good, (2) people are usually already aware of something like this (at least upon reflection), and (3) people might have lots of other motivations to do the thing anyway (e.g. not wanting to contribute to a system that causes intense suffering), which make this difference irrelevant. But I’m definitely open to changing my mind here.
Should we be more upfront about the health risks of veganism (if done poorly, which seems easy to do)?
I hadn’t thought about it, but it seems reasonable to me to guide people to health resources for vegans when presenting arguments in favour of veganism, given the potentially substantial negative effects of doing veganism without knowing how to do it well.
Btw, I’d be really curious to hear your take on all these questions.
What I have in mind for direct impact is causal inefficacy. Markets are very unlikely to respond to your purchase decisions, but we have this threshold argument that the expected value is good (maybe in line with elasticities), because in the unlikely event that they do respond, the impact is very large. But most people probably wouldn’t find the EV argument compelling, given how unlikely the impact is in large markets.
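Tangent: here’s the threshold argument in concrete terms, as a minimal sketch. The batch size and the elasticity figure are illustrative assumptions I’m making up for the example, not estimates from this thread or the literature.

```python
# Toy model of the threshold expected-value argument described above.
# All numbers here are illustrative assumptions, not real estimates.

def expected_units_averted(batch_size: int, elasticity: float) -> float:
    """Expected drop in production from forgoing one unit of an animal product.

    batch_size: suppose producers only adjust output in batches of this many
                units (the "threshold"), so a single purchase decision is
                pivotal with probability roughly 1 / batch_size.
    elasticity: fraction of a demand drop that translates into a supply drop.
    """
    p_pivotal = 1.0 / batch_size       # tiny chance your unit crosses a threshold
    impact_if_pivotal = batch_size     # a whole batch is averted if it does
    return p_pivotal * impact_if_pivotal * elasticity


# The probability of mattering is minuscule (here 0.01%), but in expectation
# the two factors cancel, leaving the same answer as a smoothly responding
# market, scaled by elasticity:
print(expected_units_averted(batch_size=10_000, elasticity=0.7))  # ≈ 0.7
```

Which is exactly the tension you describe: the expected value works out, but most people anchor on the tiny probability rather than on the product of the two factors.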
I think it’s probably good to promote health resources to new vegans and reach them pretty early with these, but I’d worry that if we pair this information with all the advocacy we do, we could undermine ourselves. We could share links to resources, like Challenge22 (they have nutritionists and dieticians), VeganHealth and studies with our advocacy, and maybe even say being vegan can take some effort to do healthfully and for some people it doesn’t really work or could be somewhat worse than other diets for them (but it’s worth finding out for yourself, given how important this is), and that seems fine. But I wouldn’t want to emphasize reasons not to go vegan or the challenges with being vegan when people are being exposed to reasons to go vegan, especially for the first time. EDIT: people are often looking for reasons not to go vegan, so many will overweight them, or use confirmation bias when assessing the evidence.
I guess the other side is that deception or misleading (even by omission) in this case could be like lying to the axe murderer, and any reasonable Kantian should endorse lying in that case. In general, a Kantian should sometimes endorse instrumental harm to prevent someone from harming another, including the use of force, imprisonment, etc., as long as it’s proportionate and no better alternatives are available to achieve the same goal. What the Health, Cowspiracy, and some other documentaries might be better examples of deception (although the writers themselves may actually believe what they’re pushing), and a lot of people have probably gone vegan because of them.
Misleading/deception could also be counterproductive, though, by giving others the impression that vegans are dishonest, or having lots of people leave because they didn’t get resources to manage their diets well, which could even give the overall impression that veganism is unhealthy.