My post describes a model for thinking about when it makes sense to be vegan, and how I apply it in my case. My specific numbers are much less useful to other people, and I’m not claiming that I’ve found the one true best estimate. Ways the post can be useful include (a) discussion over whether this is a good model to be using and (b) discussion over how people think about these sorts of relative numbers.
You’re right that the post doesn’t argue for my specific numbers on comparing animals and humans: they’re inputs to the model. On the other hand, I do think that if we surveyed the general population on how they would make tradeoffs between human life and animal suffering these would be within the typical range, and these aren’t numbers I’ve chosen to get a specific outcome.
I also think these moral worth statements need more clarification
I phrased these as “averting how many animal-years on a factory farm do I see as being about as good as giving a human another year of life?” As in, if you gave me a choice between the two, which would I prefer? This seems pretty carefully specified to me, and clear enough that someone else could give their own numbers and we could figure out where our largest differences are.
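As a concrete illustration of how two people’s answers to that question could be compared, here is a toy sketch. All the exchange rates below are invented purely for illustration; they are not numbers from the post or this thread.

```python
# Toy sketch of the tradeoff as phrased above: how many factory-farm
# animal-years averted do I see as about as good as giving a human
# another year of life? The rates used here are made-up inputs.
def human_year_equivalents(animal_years_averted, animal_years_per_human_year):
    """Convert averted factory-farm animal-years into human life-year equivalents."""
    return animal_years_averted / animal_years_per_human_year

# Two hypothetical respondents valuing the same 1000 averted animal-years:
mine = human_year_equivalents(1000, animal_years_per_human_year=100)   # 10 human-years
yours = human_year_equivalents(1000, animal_years_per_human_year=20)   # 50 human-years
```

The gap between the two exchange rates (100 vs. 20 here) is exactly the kind of “largest difference” the comparison is meant to surface.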
eating animal products requires 6.125 beings to be tortured per year per American. I personally don’t think that is a worthwhile thing to cause.
This kind of argument has issues with demandingness. Here’s a parallel argument: renting a 1br apartment for yourself instead of splitting a 2br with someone kills ~3.5 people a year, because you could be donating the difference. (Figuring a 1br costs $2k/m and a 2br costs $3k/m, your half of the shared 2br is $1.5k/m, giving a delta of $500/m, or $6k/year. GiveWell gives a best guess of ~$1,700 for “Cost per outcome as good as averting the death of an individual under 5 — AMF”, and $6k/$1,700 ≈ 3.5.) Is that a worthwhile thing to cause?
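Spelled out, the rent arithmetic is just a few lines. This is a toy calculation: the rents are the hypothetical figures from the paragraph above, and the ~$1,700 figure is GiveWell’s quoted best guess.

```python
# Toy version of the parallel argument's arithmetic.
RENT_1BR = 2000            # $/month, renting a 1br alone (hypothetical)
RENT_2BR_SHARE = 3000 / 2  # $/month, your half of a shared 2br (hypothetical)
COST_PER_LIFE = 1700       # GiveWell best guess, "outcome as good as averting
                           # the death of an individual under 5 -- AMF"

annual_savings = (RENT_1BR - RENT_2BR_SHARE) * 12  # $6,000/year
lives_per_year = annual_savings / COST_PER_LIFE    # ~3.5
```

At these inputs the “cost” of the nicer housing arrangement works out to roughly three and a half statistical lives per year, which is the sense in which the parallel argument is demanding.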
In general, I think the model EAs should be using for thinking about giving things up is to figure out how much sacrifice we’re willing to make, and then figure out, for that level of sacrifice, which options do the most good. Simply saying “X causes harm and so we should not do it” turns into “if there’s anything you don’t absolutely need, or anything you consume where there’s a slightly less harmful version, you must stop”.
I appreciate your thoughtful response to my post, and think I may have unintentionally come across harshly. I think you and I likely disagree on how much weight to give the moral worth of animals, and what that entails about what we ought to do. But my discomfort with this post lies (I hope, though of course I have subconscious biases) specifically with the unclarified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket-accept statements of the sort “I think that these folk X are worth less than these other folk Y” (not a direct quote from you, obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent, and without context such statements ought to be viewed with a lot of skepticism. Likewise, if I made an argument where I assumed but did not defend the claim that people different from me are worth 1/10th as much as people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.
One small side note—I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals’ moral worth in these discussions. Most members of the public, myself included, aren’t experts in either moral philosophy or animal sentience. And we also know that most members of the public don’t view veganism as worthwhile. Using this data as evidence that animals have less moral worth strikes me as analogous to saying “most people who care more about their families than others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally”. This kind of survey provides information on what people think about animals, but is in no way evidence of the moral status of animals. But this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, not something assigned to them by others :).
I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals’ moral worth in these discussions
Let’s say I’m trying to convince someone that they shouldn’t donate to animal charities or malaria net distribution, but instead they should be trying to prevent existential risk. I bring up how many people there could potentially be in the future (“astronomical stakes”) as a reason for why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don’t matter, though, this isn’t going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it’s likely that existential risk just isn’t a high priority by their values. Them saying they think there’s only a 0.1% chance or whatever that people 1000 years from now matter is useful for us getting on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.
On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn’t try to convince people to go vegan because diet is strongly cultural and trying to change people’s diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people’s diet. On other questions, though, it’s much harder to get evidence, and that’s where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.
(I’m still very curious what you think of my demandingness objection to your argument above)
I included the “I think there’s a very large chance they don’t matter at all, and that there’s just no one inside to suffer” out of transparency. ( https://www.facebook.com/jefftk/posts/10100153860544072?comment_id=10100153864306532 ) The post doesn’t depend on it at all, and everything is conditional on animals mattering.
I don’t see how that can be true. Surely the weightings you give would be radically different if you thought there was “someone inside to suffer”?
The post doesn’t depend on it, because the post is all conditional on animals mattering a nonzero amount (“to be safe I’ll assume they do [matter]”).
If he thinks there’s no one inside to suffer, then it’s worth sacrificing an infinite number of chickens for the convenience of one person.
These numbers are presumably based on the idea that chickens are their own, independent, semi-conscious beings.