My post describes a model for thinking about when it makes sense to be vegan, and how I apply it in my case. My specific numbers are much less useful to other people, and I'm not claiming that I've found the one true best estimate. Ways the post can be useful include (a) discussion over whether this is a good model to be using and (b) discussion over how people think about these sorts of relative numbers.
You're right that the post doesn't argue for my specific numbers on comparing animals and humans: they're inputs to the model. On the other hand, I do think that if we surveyed the general population on how they would make tradeoffs between human life and animal suffering these would be within the typical range, and these aren't numbers I've chosen to get a specific outcome.
I also think these moral worth statements need more clarification.
I phrased these as "averting how many animal-years on a factory farm do I see as being about as good as giving a human another year of life?" As in, if you gave me a choice between the two, which do I prefer. This seems pretty carefully specified to me, and clear enough that someone else could give their own numbers and we could figure out where our largest differences are?
eating animal products requires 6.125 beings to be tortured per year per American. I personally don't think that is a worthwhile thing to cause.
This kind of argument has issues with demandingness. Here's a parallel argument: renting a 1br apartment for yourself instead of splitting a 2br with someone kills ~6 people a year, because you could be donating the difference. (Figuring a 1br costs $2k/month and a 2br costs $3k/month, this gives a delta of $11k/year, and GiveWell gives a best guess of ~$1,700 for "Cost per outcome as good as averting the death of an individual under 5 - AMF".) Is that a worthwhile thing to cause?
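The arithmetic behind the parallel argument can be sketched directly. The $11k annual rent delta and GiveWell's ~$1,700 cost-per-life figure are taken from the comment above as stated, not independently verified:

```python
# Sketch of the "apartment" demandingness argument, using the figures
# stated above (taken from the comment, not independently verified).
annual_rent_delta = 11_000  # claimed yearly savings from splitting a 2br
cost_per_life = 1_700       # GiveWell's "cost per outcome as good as averting
                            # the death of an individual under 5 - AMF"

implied_deaths_per_year = annual_rent_delta / cost_per_life
print(round(implied_deaths_per_year, 1))  # ~6.5, i.e. "kills ~6 people a year"
```

The point of the sketch is only that any recurring discretionary expense, divided by a cost-per-life estimate, yields some nonzero body count, which is why "X has harm, so stop doing X" generalizes so aggressively.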
In general, I think the model EAs should be using for thinking about giving things up is to figure out how much sacrifice we're willing to make, and then figure out for that level of sacrifice what options do the most good. Simply saying "X has harm and so we should not do it" turns into "if there's anything that you don't absolutely need, or anything you consume where there's a slightly less harmful version, you must stop".
I appreciate your thoughtful response to my post, and I think I unintentionally came across harshly. I think you and I likely disagree on how much weight to give the moral worth of animals, and what that entails about what we ought to do. But my discomfort with this post is (I hope, though of course I have subconscious biases) specifically with the unclarified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket-accept statements of the sort "I think that these folk X are worth less than these other folk Y" (not a direct quote from you, obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent, and without context such claims ought to be viewed with a lot of skepticism. Likewise, if I made an argument where I assumed but did not defend the claim that people different from me are worth 1/10th of people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.
One small side note: I feel confused about why surveys of how the general public views animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions. Most members of the public, myself included, aren't experts in either moral philosophy or animal sentience. And we also know that most members of the public don't view veganism as worthwhile to do. Using this data as evidence that animals have less moral worth strikes me as analogous to saying "most people who care more about their families than others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally." This kind of survey provides information on what people think about animals, but is in no way evidence of the moral status of animals. But this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, not something assigned to them by others :).
I feel confused about why surveys of how the general public views animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions
Let's say I'm trying to convince someone that they shouldn't donate to animal charities or malaria net distribution, but instead they should be trying to prevent existential risk. I bring up how many people there could potentially be in the future ("astronomical stakes") as a reason for why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don't matter, though, this isn't going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it's likely that existential risk just isn't a high priority by their values. Them saying they think there's only a 0.1% chance or whatever that people 1000 years from now matter is useful for us getting on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.
On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn't try to convince people to go vegan because diet is strongly cultural and trying to change people's diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people's diet. On other questions, though, it's much harder to get evidence, and that's where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.
(I'm still very curious what you think of my demandingness objection to your argument above.)
I included the "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" out of transparency (https://www.facebook.com/jefftk/posts/10100153860544072?comment_id=10100153864306532). The post doesn't depend on it at all, and everything is conditional on animals mattering.
I don't see how that can be true. Surely the weightings you give would be radically different if you thought there was "someone inside to suffer"?
The post doesn't depend on it, because the post is all conditional on animals mattering a nonzero amount ("to be safe I'll assume they do [matter]").
If he thinks there's no one inside to suffer, then it's worth sacrificing an infinite number of chickens for the convenience of one person.
These numbers are presumably based on the idea that chickens are their own, independent, semi-conscious beings.