I feel confused about why surveys of how the general public views animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions.
Let's say I'm trying to convince someone that they shouldn't donate to animal charities or malaria net distribution, but instead they should be trying to prevent existential risk. I bring up how many people there could potentially be in the future ("astronomical stakes") as a reason for why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don't matter, though, this isn't going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it's likely that existential risk just isn't a high priority by their values. Them saying they think there's only a 0.1% chance or whatever that people 1000 years from now matter is useful for us getting on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.
On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn't try to convince people to go vegan because diet is strongly cultural and trying to change people's diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people's diet. On other questions, though, it's much harder to get evidence, and that's where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.
(I'm still very curious what you think of my demandingness objection to your argument above.)