Thanks for writing this! Epistemic note: I am engaging in highly motivated reasoning and arguing for veg*n.
1. As BenStewart mentioned, virtue ethics seems relevant. I would similarly point to Kant’s categorical imperative, in its universalizability formulation: “act only in accordance with that maxim through which you can at the same time will that it become a universal law.” Not engaging in moral atrocities is, in my opinion, a case where we should follow such an ideal. We should at least consider the implications under moral uncertainty and worldview diversification.
2. My journey in EA has in large part been a journey of “aligning my life and my choices to my values,” or trying to lead a more ethical life. To this end, it is fairly clear that being veg*n is the ethical thing to do relative to eating animal products (I would note I’m somewhere between vegan and vegetarian, and I think moving toward veganism is ethically better).
3. The signaling effect of being veg*n seems huge at both an individual and community level. As Luke Freeman mentioned, it would be hard to take EA seriously if we were less veg*n than average. Personally, I would likely not be in EA if being veg*n weren’t relatively normal. It was a signal to me that these people really care and aren’t just in it when it’s convenient for them. This point seems pretty important, and it is one of the things that hopefully sets EA apart from other communities oriented around doing good. I want to call back Ben Kuhn’s idea from 2013 of trying vs. pretending to try in terms of EA:
“A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors.”
3.5. At an individual level, when I tell people about Intro to EA seminars I can say things like “In Week 3 we read about expanding our moral consideration and animal welfare. I realized that I wasn’t giving animals the moral consideration I think they deserve, and now I try to eat fewer animal products to align my values and my actions.” (I’ve never said it this eloquently). While I haven’t empirically tested it, people seem to like anecdotes like this.
4. I think as a community we’re asking for a lot of trust; something like “we want to align AI so it doesn’t kill us all, and nobody else is doing it, so you have to trust us to do it.” Maybe this is an argument for hedging under moral uncertainty, or similarly for trying to be less radical. An EA community that is mostly veg*n seems less radical than one with no veg*ns, given some of the other ethical claims we make (e.g., strong longtermism). Being less radical while still upholding our values sounds like a reasonable spot to be in when (implicitly) asking for the reins to the future.
4.5. In this awesome paper, Evan Williams argues that hedging against individual possible moral catastrophes is quite difficult. Even so, it appears to me that we can still hedge here, and we should, given our position of influence.
5. Intuitively, diversifying across a range of activities that might be valuable seems useful: supporting some things with a 0.01% chance of avoiding x-risk, some with a 10% chance of reducing animal suffering, some with a 95% chance of reducing malaria deaths, and some with a 50% chance of reducing the number of animals suffering on factory farms. I need to write out my thoughts on this in more detail, but I think it’s useful to diversify across {chance of having any impact at all}, and not eating animals is a place where we can be pretty sure we’re having an impact in the long term.
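To make the diversification intuition concrete, here is a toy calculation using the illustrative (hypothetical) probabilities above, treating the activities as independent and asking how likely it is that at least one of them has any impact at all:

```python
# Toy sketch: probability that at least one activity has any impact,
# using the illustrative numbers from point 5 (not real estimates).
probs = {
    "avoiding x-risk": 0.0001,
    "reducing animal suffering": 0.10,
    "reducing malaria deaths": 0.95,
    "reducing factory-farm suffering": 0.50,
}

# P(no activity has impact) = product of the individual failure chances,
# assuming independence between activities.
p_none = 1.0
for p in probs.values():
    p_none *= 1 - p

p_any = 1 - p_none
print(f"P(at least one activity has impact) = {p_any:.4f}")  # ~0.9775
```

The point of the sketch is only that a portfolio mixing long-shot and near-certain activities can make "having any impact at all" very likely, even when some individual bets are tiny.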
On 5, diet change seems very very unlikely to make a difference on an individual level, because of how large the markets are. I think we’re (possibly much) more likely to make a difference through careers and donations. Maybe we have more robust estimates of the expected effects of diet (on farmed animals, at least) than these other things, though. Diversification/hedging seems valuable to me with deep uncertainty or moral uncertainty.