I’m quite baffled by the argument that, because giving to charity or changing career can do more good than dietary change, it is therefore permissible or even advisable to avoid dietary change. Relative values are entirely irrelevant. In my opinion the absolute consequentialist value of being ve*an is still considerable, and it is this absolute value that ultimately matters.
Usually we consider saving one human life, or saving one life from severe suffering, to be an incredibly valuable thing to do, and it is. Why shouldn’t this be the case for farm animals? Going ve*an will impact far more than just one animal’s life anyway: Brian Tomasik estimates that “avoiding eating one chicken or fish roughly translates to one less chicken or fish raised and killed”. It’s also worth noting that over 99% of farm animals in the USA live on factory farms.
There are strong consequentialist reasons for going ve*an other than the direct effects on the animals we eat, which are well covered here. One of the most important, in my opinion, is that you can influence others to change their diet, spread concern for animals more generally, and expand our moral circle. We need a society that stops seeing animals as objects in order to reduce the chances of s-risks, in which vast amounts of suffering are locked in. How can we care about digital sentience when we don’t even care about cows?
They seem relevant because willpower and attention budgets are limited, and our altruism-directed activities (and habits, etc.) draw from those budgets.
> One of the most important, in my opinion, is that you can influence others to change their diet, spread concern for animals more generally, and expand our moral circle. We need a society that stops seeing animals as objects in order to reduce the chances of s-risks, in which vast amounts of suffering are locked in. How can we care about digital sentience when we don’t even care about cows?
I concede that this argument goes through probabilistically, but I feel like people overestimate its effect.
Almost none of the non-vegetarian EAs would want to lock in animal suffering for the long-term future, so the argument that personal veg*ism makes a difference to s-risks is a bit conjunctive. It seems to rely on the hidden premise that humans will attain control over the future but that EA values will die out or have only a negligible effect. That’s possible, but it doesn’t rank among the scenarios I’d consider likely.
I think the trajectory of civilization will gravitate toward one of two attractors:
(1) People’s “values” will become less and less relevant as Moloch dynamics accelerate
(2) People’s “values” will be more in control than ever before
If (1) happens, it doesn’t matter in the long run what people value today.
If (2) happens, any positive concern for the welfare of nonhumans will likely go far. For instance, in a world where it’s technologically easy to give every person what they want without side effects, even just 10% of the population being concerned about nonhuman welfare could achieve the goal of society not causing harm to animals (or digital minds) via compromise.
You may say “but why assume compromise instead of war or value-assimilation where minority values die out?”
Okay, those are possibilities. But like I said, it makes the claim more conjunctive.
Also, there are some reasons to expect altruistic values to outcompete self-oriented ones. (Note that this blog post was written before Open Phil, before FTX, etc.) (Relatedly, we can see that, outside of EA, most people don’t seem to care about, or recognize, how difficult it is for humans to attain control over the long-term future.)
Maybe we live in an unlucky world where some kind of AI-aided stable totalitarianism is easy to bring about (in the sense that it doesn’t require unusual degrees of organizational competence or individual rationality, but people can “stumble” into a series of technological inventions that opens the door to it). Still, in that world, there are again some non-obvious steps from “slightly increasing the degree to which the average Westerner cares about nonhuman animals” to “preventing AI-aided dictatorship with bad values.” Spreading concern for nonhuman suffering likely has a positive effect here, but it looks unlikely to be very important compared to other interventions. Conditional on that totalitarian lock-in scenario, it seems more directly useful to promote norms around personal integrity, to prevent people with dictatorial tendencies from attaining positions of influence/power, or to work on AI governance.
I think there are s-risks we can tractably address, but I see the biggest risks around failure modes of transformative AI (technical problems rather than problems with people’s values).
Among interventions around moral circle expansion, I’m most optimistic about addressing risks of polarization – preventing concern for the whole cause area from becoming “something associated with the out-group,” something that people look down on for various reasons. (For instance, I didn’t like this presentation.) In my ideal scenario, all the non-veg*an EAs would often put in a good word for the intentions behind vegetarianism or veganism and emphasize agreement with the view that sentient minds deserve our care. (I largely see this happening already.)
(Arguably, personal veganism or vegetarianism are great ways to prevent concern for nonhumans from becoming “something associated with the out-group.” [Esp. if the people who go veg don’t promote their diets as an ideology in an off-putting fashion – otherwise it can backfire.])
> (2) People’s “values” will be more in control than ever before
>
> If (2) happens, any positive concern for the welfare of nonhumans will likely go far. For instance, in a world where it’s technologically easy to give every person what they want without side effects, even just 10% of the population being concerned about nonhuman welfare could achieve the goal of society not causing harm to animals (or digital minds) via compromise.
Thanks for this thoughtful response.
Hmm, this just feels a bit hopeful to me. We may well move into this attractor state, but what if we lock in suffering (not necessarily forever, maybe just for a long time) before that point? The following paragraphs from the Center for Reducing Suffering’s page on S-risks concern me:
> Crucially, factory farming is the result of economic incentives and technological feasibility, not of human malice or bad intentions. Most humans don’t approve of animal suffering per se – getting tasty food incidentally happens to involve animal suffering. In other words, technological capacity plus indifference is already enough to cause unimaginable amounts of suffering. This should make us mindful of the possibility that future technologies might lead to a similar moral catastrophe.
>
> ...
>
> Comparable to how large numbers of nonhuman animals were created because it was economically expedient, it is conceivable that large numbers of artificial minds will be created in the future. They will likely enjoy various advantages over biological minds, which will make them economically useful. This combination of large numbers of sentient minds and foreseeable lack of moral consideration presents a severe s-risk. In fact, these conditions look strikingly similar to those of factory farming.
Overall I’m worried our values may not improve as fast as our technology.
In general, I don’t think that relative values are irrelevant. Speaking in entirely abstract terms, if the value of doing one thing A is vastly higher than that of doing another thing B, and both A and B are somewhat costly, then it might be reasonable not to do B and to focus your mental energy on doing A well. It seems to me that EAs use such reasoning in other circumstances.
Of course, what to do in any specific case depends on the empirics of that case. This is just to say that relative values aren’t generally irrelevant.
Fair point. In the case of being ve*an I think relative value is mostly irrelevant, because dietary change shouldn’t preclude you from the other high-impact actions (career change or donating money). In other words, there’s no direct opportunity cost of dietary change: we have to eat anyway, we just choose to eat something else.
If going ve*an is so inconvenient for an individual that it substantially inhibits their work productivity, then your point is valid, but it really shouldn’t be that inconvenient. If anything, my transition to veganism improved my productivity via health benefits. Personally I’m not at all inconvenienced by having to find vegan options, given that I live in London where options are plentiful (although I appreciate this isn’t the case for everyone).
Seems like there are clear time/money costs? As a simple example, if you get coffee from Starbucks every day, switching from regular milk to plant-based milk could cost an extra $0.50 per day. Over a year that adds up to roughly $180, so maybe you’d do better by skipping the plant milk and donating that money instead.
I’m not convinced ve*ism costs more overall. I think it can cost a lot less, as fruit, veg, lentils, beans, nuts etc. are generally very cheap, whereas meat is quite expensive. This research finds vegan meals are generally 40% cheaper than meat and fish counterparts.
As for time costs, these are negligible/zero for me now.
There are two kinds of vegan though, and most of us want to be the fancy kind.
> There are a small number of vegan protein options that are cheaper than the animal-based equivalent, and then there are a wide variety of ones that are more expensive. If you build your diet from the first category it’s cheap and environmentally sustainable, but the limited choices mean most people won’t find it as enjoyable as what they were eating. On the other hand, the second category offers enough options to suit most palates but it costs more.
But the research I linked to indicated that vegans generally spend less. Also, this news story cites research that says the following:
> Financial advice company Cleo found that, after three months on the diet, meat eaters who go vegan end up spending £21 less per month on eating out and groceries.
>
> However, vegetarians who opted to go vegan ended up spending £11 more per month.
So generally vegetarianism seems to be the cheapest diet, followed by veganism, followed by meat eating.