What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view.
This criticism seems unfair to me:
It seems applicable to any type of advocacy. Those who promote global health and poverty are likely biased toward foreign people. Those who promote longtermism are likely biased toward future people. Those who advocate for effective philanthropy are likely biased toward effectiveness and/or philanthropy.
There’s no effective counter-argument since, almost by definition, any engagement is possibly biased. If one responds with, “I don’t think I’m biased because I didn’t have these views to begin with,” the response can always be, “Well, you engaged in this topic and had a positive response, so surely, you must be biased somehow because most people don’t engage at all.” It seems then that only criticisms of the field are valid.
This is reminiscent of an ad hominem attack. Instead of engaging with the merits of the argument, the critique tars the person instead.
Even if the criticism is valid, what is to be done? Likely nothing as it’s unclear what the extent of the bias would be anyway. Surely, we wouldn’t want to silence discussion of the topic. So just as we support free speech regardless of people’s intentions and biases, we should support any valid arguments within the EA community. If one is unhappy with the arguments, the response should be to engage with them and make valid counterarguments, not speculate on people’s initial intuitions or motivations.
There’s no effective counter-argument since, almost by definition, any engagement is possibly biased. If one responds with, “I don’t think I’m biased because I didn’t have these views to begin with,” the response can always be, “Well, you engaged in this topic and had a positive response, so surely, you must be biased somehow because most people don’t engage at all.”
I’m going to simplify a bit to make this easier to talk about, but imagine a continuum of how much people start off caring about animals, running from 0% (the person in the world who values animals least) to 100% (the person who values them most). Learning that someone who started at 80% looked into things more and is now at 95% is informative; learning that someone who started at 50% and is now at 95% is more informative still.
This isn’t “some people are biased and some aren’t” but “everyone is biased on lots of topics in lots of ways”. When people come to conclusions that point in the direction of their biases, others should generally find that less convincing than when they come to ones that point in the opposite direction.
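The intuition above can be made concrete with a toy Bayesian sketch (the numbers and function names are mine, purely illustrative): treat a person’s initial and final credence as probabilities, and back out how strong the evidence from their investigation must have been to produce the update.

```python
# Toy sketch of why a bigger update implies stronger evidence.
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio,
# so the evidence strength implied by an update is the ratio of the two.

def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1 - p)

def implied_likelihood_ratio(prior, posterior):
    """Likelihood ratio needed to move someone from prior to posterior."""
    return odds(posterior) / odds(prior)

# Someone already sympathetic (80%) who ends up at 95%:
print(implied_likelihood_ratio(0.80, 0.95))  # ≈ 4.75

# Someone initially neutral (50%) who ends up at 95%:
print(implied_likelihood_ratio(0.50, 0.95))  # ≈ 19
```

On this toy model, the 50%→95% update implies evidence roughly four times as strong as the 80%→95% update, which is one way of cashing out why the second report is “more informative”.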
Even if the criticism is valid, what is to be done?
What I would be most excited to see is people who are currently skeptical that animals matter anywhere near as much as Rethink’s current best-guess moral weights would suggest treating this as an important disagreement, rather than continuing to just ignore animals in their cause prioritization. Then they’d have a reason to dig into these weights that didn’t trace back to already thinking animals mattered a lot. I suspect they’d come to pretty different conclusions, based on making different judgement calls about what matters in assessing moral worth or how to interpret ambiguous evidence about what animals do or are capable of. Then I’d like to see an adversarial collaboration.