Copying a comment I once wrote:
eating veg sits somewhere between “avoid intercontinental flights” and “donate to effective charities” in terms of expected impact, and I’m not sure where to draw the line between “altruistic actions that seem way too costly and should be discouraged” and “altruistic actions that seem a reasonable early step in one’s EA journey and should be encouraged”
Intuitively and anecdotally (and based on some likely-crappy papers), it seems harder to see animals as sentient beings or think correctly about the badness of factory farming while eating meat; this form of motivated reasoning plausibly distorts most people’s epistemics, and this is about a pretty important part of the world, and recognizing the badness of factory farming has minor implications for s-risks and other AI stuff
I am very confused by this statement. I feel like we’ve generally agreed as a community that we don’t encourage people to take altruistic actions unless we think they compete with the best alternatives that person has. Almost all altruistic interventions lie between “avoid intercontinental flights” and “donate to effective charities”, and indeed, we encourage ~0% of that range for participants in the EA community. So based on that observation alone, our prior should clearly tend towards being unopinionated on this topic.
On this principle, why would the answer here be different from our answer on whether you should locate your company in a place with a higher tax burden because it sets a good example of cooperating on global governance? Or whether to buy products that are the result of exploitative working practices? Or buy from companies with bad security practices? Or volunteer at your local homeless shelter? All of these fall in effectiveness between “avoid intercontinental flights” and “donate to effective charities”, as far as I can tell (unless you think “avoiding intercontinental flights” is somehow a much better value proposition than it appears; it seems like one of the least cost-effective interventions I can think of, given its extremely high cost).
Yeah, “avoid intercontinental flights” was intended as something clearly ineffective that people still do – i.e., as an example of something that seems way too costly and should be discouraged. So I fully agree with you that we should encourage ~0% of that range for EAs.
My point is that avoiding animal products is substantially more cost-effective than those interventions, though I’m still not sure whether it meets the threshold for an EA activity; it might. It’s been a while since I looked into the exact numbers, but I think you can avert substantial time spent by animals on factory farms by avoiding animal products, and that seems a lot better than the other examples you gave, and perhaps better than donating to effective global health charities.
I’m not very convinced of your second point (though I could be—curious to hear why it feels true for you). I don’t currently see why you think the bolded words instead of: “it seems harder to see the importance of future beings or think correctly about the badness of existential risk while wasting time eating non-meat”
It feels like a universally compelling argument, or at least, I don’t see where you think the argument should stop applying on a spectrum between something like “it seems hard to think correctly about x-risk without having a career in it” and “it seems hard to think correctly about the importance of all sentient beings while squashing dust mites every time you sleep”
ETA: I imagine you wrote the bolded words because they feel true to you i.e. that eating meat might cause you to value drift or have worse epistemics in certain ways such that it’s worth staying vegan. I am curious about what explicable arguments that feeling (if you have it) might be tracking (e.g. in case they cause me to stay vegan).
I don’t think your third paragraph describes what I think/feel. It’s more the other way around: I used to eat a lot of meat, and once I stopped doing that, I started seeing animals with different eyes (treating them as morally relevant, and internalizing that a lot more). The reason I don’t eat meat now is not that I think it would cause value drift, but that it would make me deeply sad and upset – eating meat would feel similar to owning slaves that I treat poorly, or watching a gladiator fight for my own amusement. It just feels deeply morally wrong and isn’t enjoyable anymore. The fact that the consequences are only mildly negative in the grand scheme of things doesn’t change that. So I’m actually not sure my argument supports my remaining vegan now, but I think it’s a strong argument for my having gone vegan in the first place.
My guess is that a lot of people don’t actually see animals as sentient beings whose emotions and feelings matter a great deal, but more like cute things to have fun with. And anecdotally, how someone perceives animals seems to be determined by whether they eat them, not the other way around. (Insert plausible explanation – cognitive dissonance, rationalization, etc.) Squashing dust mites, drinking milk, eating eggs, etc. seem to have a much weaker effect than eating meat, presumably because they’re less visceral, more indirect/accidental ways of hurting animals.
I think it’s plausible that being more deliberate in your diet to avoid the lowest welfare options could have a lot of the same impact on your own perceptions of animals.
That being said, eating meat again would feel wrong to me, too. I specifically work on animal welfare. How can I eat those I’m trying to help?
Similarly, I’m a bit suspicious of non-vegetarian veterinarians helping farmed animals. If working with farmed animals doesn’t turn them away from meat, do they actually have their best interests at heart? What kind of doctor eats their patients?
And maybe this logic extends to those weighing the interests of nonhuman animals or similarly minded artificial sentience in the future.
That makes sense, yeah. And I could see this being costly enough that it’s best to continue avoiding meat.
I’m still not very convinced of your original point, though—when I simulate myself becoming non-vegan, I don’t imagine this counterfactually causing me to lose my concern for animals (nor does it seem like it would harm my epistemics, though I’m not sure I trust my inner sim here). If anything, it seems like going non-vegan would help my epistemics: in my case, being vegan takes up enough time that it is harmful for future generations for me to be vegan, and by continuing to be vegan I am choosing to ignore that fact.
Yeah, as I tried to explain above (perhaps it was too implicit), I think it probably matters much more whether you went vegan at some point in your life than whether you’re vegan right now.
I don’t feel confident in this; I mainly wanted to offer it as a hypothesis that could be tested further. I also mentioned the existence of crappy papers that support my perspective (you can probably find them in 5 minutes on Google Scholar). If people thought this was important, they could investigate it more.
I’ll tap out of the conversation now – don’t feel like I have time to discuss further, sorry.
I think underweighting the interests of animals and of future beings with similar cognitive capacities is more likely to cause you to end up working on the wrong interventions or causes than being slightly less productive across the board because you spend more time on veg food, and the risk of working on the wrong things could be more important than the small loss of productivity. Differences between interventions and causes can be pretty large. However, this isn’t obvious, and it could go the other way: maybe going veg*n causes someone to underweight the far future, or the less measurable relative to the near-term or more measurable.
Could you expand on what effects eating meat would have on thinking about s-risks and other AI stuff? What kinds of scenarios are you thinking of?
My initial reaction is somewhat sceptical. I think these effects are hard to assess and could go either way. But it depends a bit on what mechanisms you have in mind.
Quickly written:
Nobody actively wants factory farming to happen, but it’s the cheapest way to get something we want (i.e. meat), and we’ve built a system where it’s really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
In the context of AI, suffering subroutines might be an example of that.
Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad it is – that there’s something like a genocide (in terms of moral badness, not evilness) going on right here, right now, under our eyes, in wealthy Western democracies that are often understood to be the most morally advanced places on earth, and that it’s really hard for us to stop it – that might affect your general outlook on the long-term future. I still think the long-term future will be great in expectation, but it also makes me think that utopian visions which don’t consider these downside risks seem pretty naïve.
Thanks.
Regarding the first point: yeah, we should do something about it, but that seems unrelated to the point about eating meat leading to motivated reasoning about s-risks and AI.
Regarding the second point, it is not obvious to me that eating meat leads to worse reasoning about suffering subroutines. In principle the opposite might be true. Seems very hard to tell. I think there is a risk that arguments about this often beg the question (e.g. by assuming that suffering subroutines are a major risk, which is the issue under discussion).
Regarding the third point—not quite sure I follow, but in any event I think that futures without strong AGI might be dominated in expected value terms by futures with strong AGI. And certainly future downside risks should be considered, but the link between that and current meat-eating is non-obvious.