Some thoughts on vegetarianism and veganism

I feel pretty confused about whether I, as an effective altruist, should be vegetarian/vegan (henceforth abbreviated veg*n). I don’t think I’ve seen anyone explicitly talk about the arguments which feel most compelling to me, so I thought I’d do that here, in a low-effort way.

I think that factory farming is one of the worst ongoing moral atrocities. But most of the arguments I’ve heard for veg*nism, which I found compelling a few years ago, hinge on the effects that my personal consumption would have on decreasing factory farming (and sometimes on climate change). I now don’t find this line of thinking persuasive: my personal consumption decisions just have such a tiny effect compared to my career and donation decisions that it feels like I shouldn’t pay much attention to their direct consequences (beyond possibly donating to offset them).

But there are three other arguments which seem more compelling. The first is a deontological argument: if you think something is a moral atrocity, you shouldn’t participate in it, even if you offset the effects of your contribution. In general, my utilitarian intuitions are much stronger than my deontological ones, but I do think that following deontological principles is often a very good heuristic for behaving morally. The underlying reason is that humans by default think in black-and-white categories rather than in shades of grey. As Yudkowsky writes:

Any rule that’s not labeled “absolute, no exceptions” lacks weight in people’s minds. So you have to perform that the “Don’t kill” commandment is absolute and exceptionless (even though it totally isn’t), because that’s what it takes to get people to even hesitate. To stay their hands at least until the weight of duty is crushing them down. A rule that isn’t even absolute? People just disregard that whenever.

Without strong rules in place, it’s easy to reason ourselves into all sorts of behaviour. In particular, it’s easy to underestimate the actual level of harm that certain actions cause: for example, by considering the direct effects of eating meat while ignoring the effects of normalising eating meat, or of normalising “not making personal sacrifices on the basis of moral arguments”, or things like that. And so implementing a rule like “never participate in moral atrocities” sends a much more compelling signal than “only participate in moral atrocities when you think that’s net-positive”. That signal helps set an example for the people around you, which seems particularly important if you spend time with people who are or will become influential. But it also strengthens your own self-identity as someone who prioritises the world going well.

Then there’s a community-level argument about what we want EA to look like. Norms about veg*nism within the community help build a high-trust environment (since veg*nism is a costly signal), and increase internal cohesion, especially between different cause areas. At the very least, these arguments justify not serving animal products at EA conferences.

Lastly, there’s an argument about how I (and the EA community) are seen by wider society. Will MacAskill sometimes uses the phrase “moral entrepreneurs”, which I think gestures in the right direction: we want to be ahead of the curve, identifying and building on important trends in advance. I expect that veg*nism will become much more mainstream than it currently is; insofar as EA is a disproportionately veg*n community, this will likely bolster our moral authority.

There are a few arguments cutting the other way, though. One key concern is that these arguments are kinda post-hoc. It’s not necessarily that they’re wrong; it’s more that I originally privileged the hypothesis that veg*nism is a good idea based on arguments about personal impact which I now don’t buy. And so, now that I’m thinking more about it, I’ve found a bunch of arguments which support it. But I suspect I could construct similarly compelling arguments for the beneficial consequences of a dozen other personal life choices (related to climate change, social justice, capitalism, having children, prison reform, migration reform, drug reform, etc.). In other words: maybe the world is large enough that we have to set a high threshold for deontological arguments, in order to avoid being swamped by moral commitments.

Secondly, on a community level, EA is the group most focused on doing really large amounts of good. And so actually doing cost-benefit analyses, and concluding that most personal consumption decisions aren’t worth worrying about, seems like exactly the type of thing we want to reinforce in our community. Perhaps what’s most important to protect is this laser-focus on doing the most good, without optimising too hard for the approval of the rest of society, because that’s how we can keep our edge and avoid dissolving into mainstream thinking.

Thirdly, the question of whether going veg*n strengthens your altruistic motivations is an empirical one which I feel pretty uncertain about. There may well be a moral licensing effect, where veg*ns disproportionately feel like they’ve already done their fair share of altruistic action; or maybe parts of you will become resentful about these constraints. This probably varies a lot between different people.

Fourthly, I am kinda worried about health effects, especially on short-to-medium-term energy levels. I think it’s the type of thing which could probably be sorted out with a bit of experimentation. But again, from my current perspective, choosing to dedicate that experimentation to maintaining my health instead of, say, becoming more productive feels like a decision I’d only make if I were privileging veg*nism as an intervention over the other things I could spend my time and effort on.

I don’t really have any particular conclusion to this post; I wrote it mainly to cover a range of arguments that people might not have seen before, and also to try to give a demonstration of the type of reasoning I want to encourage in EA. (A quick search also turns up a post by Jess Whittlestone covering similar considerations.) If I had to give a recommendation, I think the dominant factor is probably how your motivational structure works: in particular, whether you’ll interpret the additional moral constraint more as a positive reinforcement of your identity as an altruist, or more as something which drains or stresses you. (Note, though, that since people systematically overestimate how altruistic they are, I expect most people will underrate the value of the former. On the other hand, effective altruists are one of the populations most strongly selected for underrating the importance of avoiding the latter.)