Going back to relying just on intuition and not listening to others would also seem pretty unvirtuous (unwise/imprudent) to me, but (without having read the chapter), I doubt the author would go that far, given his advice to look “for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals”. I would also guess he doesn’t mean you should never question your priorities (or moral intuitions) or investigate where specific lines of moral reasoning lead.
I think he’s mostly skeptical about relying primarily on one particular system, especially any simple one, because it would be likely to miss so much of what matters and so cause harm or miss out on doing better. But I think this is something that has been expressed before by EAs, including people at Open Phil, typically with respect to worldview diversification:
(E.g. the train to crazy town) https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/
https://forum.effectivealtruism.org/posts/8wWYmHsnqPvQEnapu/getting-on-a-different-train-can-effective-altruism-avoid
https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous
“Alexander Berger: And I think part of the perspective is to say look, I just trust philosophy a little bit less. So the fact that something might not be philosophically rigorous…I’m just not ready to accept that as a devastating argument against it.” https://80000hours.org/podcast/episodes/alexander-berger-improving-global-health-wellbeing-clear-direct-ways/
However, it seems EAs are willing to give much greater weight to philosophical arguments and the recommendations of specific systems.
On virtue ethics (although to be clear, I’ve read very little about virtue ethics, so may be way off), another way we might think about this is that the virtue of charity, say, is one of the ways you capture others mattering. You express and develop the virtue of charity to help others, precisely because other people and their struggles matter. It’s good for you, too, but it’s good for you because it’s good for others, like how satisfying your other-regarding preferences is good for you. Getting others to develop the virtue of charity is also good for them, but it’s good for them because it’s good for those that stand to be helped.
The argument you make against virtue ethics is also similar to an argument I’d make against non-instrumental deontological constraints (and I’ve also read very little about deontology): such constraints seem like a preoccupation with keeping your own hands clean instead of doing what’s better for moral patients. And helping others abide by these constraints, similar to developing others’ virtues, seems bad if it leads to worse outcomes for others. But all of this is supposed to capture ways others matter.
And more generally, why would it be better (or even sometimes obligatory) to do something that’s worse for others overall than an alternative?
Going back to relying just on intuition and not listening to others would also seem pretty unvirtuous (unwise/imprudent) to me, but (without having read the chapter), I doubt the author would go that far, given his advice to look “for a good way to support a project that aligns with your priorities, is well-run, and looks like it has a good chance of achieving its goals”. I would also guess he doesn’t mean you should never question your priorities (or moral intuitions) or investigate where specific lines of moral reasoning lead.
I think he’s mostly skeptical about relying primarily on one particular system, especially any simple one, because it would be likely to miss so much of what matters and so cause harm or miss out on doing better.
Yeah that makes sense to me. My original reading was probably too uncharitable. Though when I read zchuang’s observation further up
I think the book is targeted at an imagined left-wing young person who the authors think would be “tricked” into EA because they misread certain claims that EA puts forward. It’s a form of memeplex competition.
I now feel like maybe the author isn’t warning readers about the perils of focusing on a particular worldview, but specifically on worldviews like EA, that often take one measure and optimise it in practice (even if the philosophy permits a pluralistic view on value).
It does seem like their approach would have the effect of making people defer less, or biasing them towards their original views and beliefs, though? Here’s the full paragraph:
First, you have more reason to trust your judgments than you assume. What motivates you to give to make things better for animals? What kinds of mistreatment of animals are you most concerned about? Of the many kinds of activities benefitting animals, which are you most drawn to? Reflect on your priorities as a starting point. Do not be tempted by claims of Effective Altruism or any other scheme to offer an objective rational basis for your decision. This is complicated stuff. It is much more complicated than any decision-making system can deal with. Your own commitments are likely to be a better initial basis for decision-making than any claimed objective system.
And on this …
On virtue ethics (although to be clear, I’ve read very little about virtue ethics, so may be way off), another way we might think about this is that the virtue of charity, say, is one of the ways you capture others mattering. You express and develop the virtue of charity to help others, precisely because other people and their struggles matter. It’s good for you, too, but it’s good for you because it’s good for others, like how satisfying your other-regarding preferences is good for you. Getting others to develop the virtue of charity is also good for them, but it’s good for them because it’s good for those they’ll help.
Yeah sure, though I don’t think this really gets around the objection (at least not for me—it’s based on intuition, after all). Even if you build character in this way in order to help ppl/animals in the future, it’s still the case that you’re not helping the animals you’re helping for their own sake, you’re doing it for some other reason. Even if that other reason is to help other animals in the future, that still feels off to me.
The argument you make against virtue ethics is also similar to an argument I’d make against non-instrumental deontological constraints (and I’ve also read very little about deontology): such constraints seem like a preoccupation with keeping your own hands clean instead of doing what’s better for moral patients. And helping others abide by these constraints, similar to developing others’ virtues, seems bad if it leads to worse outcomes for others. But all of this is supposed to capture ways others matter.
I think this is a pretty solid objection, but I see two major differences between deontology and virtue ethics (disclaimer: I haven’t read much about virtue ethics either so I could be strawmanning it) here:
Deontological duties are actually rooted in what’s good/bad for the targets of actions, whereas (in theory at least) the best way of building virtue could be totally disconnected from what’s good for people/animals? (The nature of the virtue itself could not be disconnected, just the way you come by it.) E.g. maybe the best way of building moral character is to step into a character building simulator rather than going to an animal sanctuary? It feels like (and again I stress my lack of familiarity) a virtue ethicist comes up with what’s virtuous by looking at the virtue-haver (and of course what happens to others can affect that, but what goes on inside the virtue-haver seems primary), whereas a deontologist comes up with duties by looking at what’s good/bad for those affected (and what goes on inside them seems primary).
Kantianism in particular has an injunction against using others as mere means, making it impossible to make moral decisions without considering those affected by the decision. (Though, yeah, I know there are trolley-like situations where you kind of privilege the first-order affected over the second-order affecteds.)
Edit: Also, with Kant, in particular, my impression is that he doesn’t go, “I’ve done this abstract, general reasoning and came to the conclusion that lying is categorically wrong, so therefore you should never lie in any particular instance”, but rather “in any particular instance, we should follow this general reasoning process (roughly, of identifying the maxim we’re acting according to, and seeing if that maxim is acceptable), and as it happens, I note that the set of maxims that involve lying all seem unacceptable”. Not sure if I’m communicating this clearly …
I would expect that living your life in a character building simulator would itself be unvirtuous. You can’t actually express most virtues in such a setting, because the stakes aren’t real. Consistently avoiding situations where there are real stakes seems cowardly, imprudent, uncharitable, etc. Spending some time in such simulators could be good, though.
On Kantianism, would trying to persuade people to not harm animals or to help animals mean using those people as mere means? Or, as long as they aren’t harmed, it’s fine? Or, as long as you’re not misleading them, you’re helping them make more informed decisions, which respects and even promotes their agency (even if your goal is actually not this, but just helping animals, and you just avoid misleading in your advocacy). Could showing people factory farm or slaughterhouse footage be too emotionally manipulative, whether or not that footage is representative? Should we add the disclaimer to our advocacy that any individual abstaining from animal products almost certainly has no “direct” impact on animals through this? Should we be more upfront about the health risks of veganism (if done poorly, which seems easy to do)? And add various other disclaimers and objections to give a less biased/misleading picture of things?
Could it be required that we include these issues with all advocacy, to ensure no one is misled into going vegan or becoming an advocate in the first place?
I would expect that living your life in a character building simulator would itself be unvirtuous. You can’t actually express most virtues in such a setting, because the stakes aren’t real. Consistently avoiding situations where there are real stakes seems cowardly, imprudent, uncharitable, etc. Spending some time in such simulators could be good, though.
Yes, I imagined spending some time in a simulator. I guess I’m making the claim that, in some cases at least, virtue ethics may identify a right action but seemingly without giving a good (IMO) account of what’s right or praiseworthy about it.
On Kantianism, …
There are degrees of coercion, and I’m not sure whether to think of that as “there are two distinct categories of action, the coercive and the non-coercive, but we don’t know exactly where to draw the line between them” or “coerciveness is a continuous property of actions; there can be more or less of it”. (I mean by “coerciveness” here something like “taking someone’s decision out of their own hands”, and IMO taking it as important means prioritising, to some degree, respect for people’s (and animals’) right to make their own decisions over their well-being.)
So my answer to these questions is: It depends on the details, but I expect that I’d judge some things to be clearly coercive, others to be clearly fine, and to be unsure about some borderline cases. More specifically (just giving my quick impressions here):
On Kantianism, would trying to persuade people to not harm animals or to help animals mean using those people as mere means? Or, as long as they aren’t harmed, it’s fine? Or, as long as you’re not misleading them, you’re helping them make more informed decisions, which respects and even promotes their agency (even if your goal is actually not this, but just helping animals, and you just avoid misleading in your advocacy).
I think it depends on whether you also have the person’s interests in mind. If you do it e.g. intending to help them make a more informed or reasoned decision, in accordance with their will, then that’s fine. If you do it trying to make them act against their will (for example, by threatening or blackmailing them, or by lying or withholding information, such that they make a different decision than had they known the full picture), then that’s using as a mere means. (A maxim always contains its ends, i.e. the agent’s intention.)
Could showing people factory farm or slaughterhouse footage be too emotionally manipulative, whether or not that footage is representative?
Yeah, I think it could, but I also think it could importantly inform people of the realities of factory farms. Hard to say whether this is too coercive, it probably depends on the details again (what you show, in which context, how you frame it, etc.).
Should we add the disclaimer to our advocacy that any individual abstaining from animal products almost certainly has no “direct” impact on animals through this?
Time for a caveat: I’d never have the audacity to tell people (such as yourself) in the effective animal advocacy space what’s best to do there, and anyway give some substantial weight to utilitarianism. So what precedes and follows this paragraph aren’t recommendations or anything, nor is it my all-things-considered view, just what I think one Kantian view might entail.
By “direct impact”, you mean you won’t save any specific animal by e.g. going vegan, you’re just likely preventing some future suffering—something like that? Interesting, I’d guess not disclosing this is fine, due to a combination of (1) people probably don’t really care that much about this distinction, and think preventing future suffering is ~just as good, (2) people are usually already aware of something like this (at least upon reflection), and (3) people might have lots of other motivations to do the thing anyway, e.g. not wanting to contribute to a system that causes intense suffering, which make this difference irrelevant. But I’m definitely open to changing my mind here.
Should we be more upfront about the health risks of veganism (if done poorly, which seems easy to do)?
I hadn’t thought about it, but it seems reasonable to me to guide people to health resources for vegans when presenting arguments in favour of veganism, given the potentially substantial negative effects of doing veganism without knowing how to do it well.
Btw, I’d be really curious to hear your take on all these questions.
What I have in mind for direct impact is causal inefficacy. Markets are very unlikely to respond to your purchase decisions, but we have this threshold argument that the expected value is good (maybe in line with elasticities), because in the unlikely event that they do respond, the impact is very large. But most people probably wouldn’t find the EV argument compelling, given how unlikely the impact is in large markets.
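The threshold argument can be sketched numerically. This is just an illustrative toy model, not real market data: the function name, the block size, and the elasticity figure are all made up for the example.

```python
# Toy sketch of the threshold expected-value argument for consumer impact.
# All numbers are hypothetical and for illustration only.

def expected_units_prevented(units_forgone, threshold, elasticity):
    """Expected change in production from forgoing `units_forgone` units.

    Assumes producers only adjust output in whole blocks of `threshold`
    units, so a single consumer's decision has roughly a
    units_forgone/threshold chance of tipping the market across a
    threshold; when it does, a whole block is cut, scaled by an
    elasticity factor (how much supply actually responds to demand).
    """
    p_trigger = units_forgone / threshold        # tiny in a large market
    impact_if_triggered = threshold * elasticity  # but very large if it happens
    return p_trigger * impact_if_triggered        # = units_forgone * elasticity

# One person forgoing 100 units/year in a market that adjusts in blocks
# of 1,000,000 units, with an elasticity factor of 0.7:
ev = expected_units_prevented(100, 1_000_000, 0.7)
print(ev)  # ~70 units in expectation, despite a tiny trigger probability
```

The point of the sketch is that the expected value is independent of the threshold size: the low probability and the large conditional impact cancel out exactly, which is why the EV argument goes through even though any individual purchase almost certainly changes nothing.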
I think it’s probably good to promote health resources to new vegans and reach them pretty early with these, but I’d worry that if we pair this information with all the advocacy we do, we could undermine ourselves. We could share links to resources, like Challenge22 (they have nutritionists and dieticians), VeganHealth and studies with our advocacy, and maybe even say being vegan can take some effort to do healthfully and for some people it doesn’t really work or could be somewhat worse than other diets for them (but it’s worth finding out for yourself, given how important this is), and that seems fine. But I wouldn’t want to emphasize reasons not to go vegan or the challenges with being vegan when people are being exposed to reasons to go vegan, especially for the first time. EDIT: people are often looking for reasons not to go vegan, so many will overweight them, or use confirmation bias when assessing the evidence.
I guess the other side is that deception or misleading (even by omission) in this case could be like lying to the axe murderer, and any reasonable Kantian should endorse lying in that case, and in general should sometimes endorse instrumental harm to prevent someone from harming another, including the use of force, imprisonment, etc. as long as it’s proportionate and no better alternatives are available to achieve the same goal. What the Health, Cowspiracy and some other documentaries might be better examples of deception (although the writers themselves may actually believe what they’re pushing) and a lot of people have probably gone vegan because of them.
Misleading/deception could also be counterproductive, though, by giving others the impression that vegans are dishonest, or having lots of people leave because they didn’t get resources to manage their diets well, which could even give the overall impression that veganism is unhealthy.