I appreciate your thoughtful response to my post, and I think I unintentionally came across harshly. You and I likely disagree about how much weight to give the moral worth of animals, and about what that implies we ought to do. But my discomfort with this post is (I hope, though of course I have subconscious biases) specifically with the unqualified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket-accept statements of the sort “I think that these folk X are worth less than these other folk Y” (not a direct quote from you, obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent, and without context such claims ought to be viewed with a lot of skepticism. Likewise, if I made an argument in which I assumed but did not defend the claim that people different from me are worth 1/10th of people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about the issue.
One small side note—I feel confused about why surveys of how the general public views animals are being cited as evidence in favor of casual estimations of animals’ moral worth in these discussions. Most members of the public, myself included, aren’t experts in either moral philosophy or animal sentience. And we also know that most members of the public don’t view veganism as worth doing. Using this data as evidence that animals have less moral worth strikes me as analogous to saying “most people who care more about their families than about others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally.” This kind of survey provides information about what people think about animals, but it is in no way evidence of the moral status of animals. But this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, not something assigned to them by others :).
I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals’ moral worth in these discussions
Let’s say I’m trying to convince someone that they shouldn’t donate to animal charities or malaria net distribution, but instead should be trying to prevent existential risk. I bring up how many people there could potentially be in the future (“astronomical stakes”) as a reason why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don’t matter, though, this isn’t going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it’s likely that existential risk just isn’t a high priority by their values. If they say they think there’s only a 0.1% chance or whatever that people 1000 years from now matter, that’s useful for getting us on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.
On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn’t try to convince people to go vegan because diet is strongly cultural and trying to change people’s diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people’s diet. On other questions, though, it’s much harder to get evidence, and that’s where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.
(I’m still very curious what you think of my demandingness objection to your argument above)