Imagine the worst suffering a [dog, bird, fish (for example a salmon), shrimp, fly] can experience. Try to compare this suffering with the worst suffering a human can experience. How intense or severe do you think the worst suffering of a [dog, bird, fish, shrimp, fly] for an hour is, compared to the worst suffering of a human for an hour?
The detailed results are here, including a histogram for birds:
Whether the answers to this question imply moral equivalence between humans and birds, though, depends on the assumption that the respondents are something close to hedonistic utilitarians, and I doubt they are? For example, if the survey had instead given questions specifically about moral weight (“how many birds would you need to be saving from an hour of intense suffering before you’d prioritize that over doing the same for a human”, etc) you’d have seen different answers.
I agree. It strongly depends on the framing of the questions. For example, I asked people how strongly they value animal welfare compared to human welfare. Average: 70%. So on one interpretation, that means 1 chicken = 0.7 humans. But there is a huge difference between saving and not harming, and between ‘animal’ and ‘chicken’. Asking people how many bird or human lives to save gives a very different answer than asking them how many birds or humans to harm. People could say that saving 1 human is the equivalent of saving a million birds, but that harming one human is the equivalent of harming only a few birds. And when they realize the bird is a chicken used for food, people get stuck and their answers go weird. Or ask people about their maximum willingness to pay to avoid an hour of human or chicken suffering, versus their minimum willingness to accept to add an hour of suffering: huge differences. (I conducted some unpublished surveys about this, and one published: https://www.tandfonline.com/doi/abs/10.1080/21606544.2022.2138980.) In short: in this area you can easily show that people give highly inconsistent answers depending on the formulation of the questions.
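To make the inconsistency concrete, here is a small illustrative calculation. All numbers are hypothetical stand-ins of the kind described above (a 70% valuation answer, a million-to-one "saving" ratio, a few-to-one "harming" ratio), not results from any actual survey; the point is only how far apart the implied human-chicken exchange rates land.

```python
# Hypothetical answers under three framings (illustrative only,
# NOT real survey data):
valuation_ratio = 0.70        # "value animal welfare at 70% of human welfare"
save_ratio = 1 / 1_000_000    # "saving 1 human = saving a million birds"
harm_ratio = 1 / 5            # "harming 1 human = harming ~5 birds"

ratios = {
    "valuation framing": valuation_ratio,
    "saving framing": save_ratio,
    "harming framing": harm_ratio,
}

# Each framing implies a different "1 chicken = X humans" exchange rate.
for name, r in ratios.items():
    print(f"{name}: 1 chicken = {r:g} humans")

# How far apart are the implied exchange rates?
spread = max(ratios.values()) / min(ratios.values())
print(f"spread between framings: {spread:,.0f}x")
```

With these made-up inputs the framings disagree by a factor of several hundred thousand, which is the sense in which answers here are "highly inconsistent" rather than noisy around a shared underlying value.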
I would be surprised if most people had stronger views about moral theories than about their upshots for human-animal tradeoffs. I don’t think that most people come to their views about tradeoffs because of what they value; rather, they come to their views about value because of their views about tradeoffs.
(The survey question above was originally asked in Dutch.)