I think this post is on the right track; the request for reasoning transparency especially so.
I personally worry about how weird effective altruism will seem to the outside world if we focus exclusively on topics that most people don’t think are very important. A sister comment argues that the average person’s revealed preference about the value of a hen’s life relative to a human’s is infinitesimal. Likewise, however much people say they worry about AI (as a proxy for longtermism, which isn’t really on people’s radar in general), in practice, it tends to be relatively low on their list of concerns, even among potential existential threats.
If our thinking takes us in weird directions, that’s not inherently a reason to shy away. But I think there’s something to be said for considering the implications of having increasingly niche opinions, priorities, and epistemology. A movement that’s a little more humble/agnostic about what the most important cause is might broadly be able to devote more resources, on net, to a wider range of causes, including the ones we think most important.
(For context I am a vegan who believes that animal welfare is broadly neglected—I recently wrote something on the case for veganism for domesticated dogs.)
I also worry about the weirdness. Ariel themselves said:
When I started as an EA, I found other EAs’ obsession with animal welfare rather strange. How could these people advocate for helping chickens over children in extreme poverty? I changed my mind for a few reasons.
This might not be realistic for Ariel, but it would have been ironic if that obsession had seemed even stronger, strong enough to cause Ariel to shy away from EA, so that they never contributed to shifting priorities more towards animal welfare.
But I also agree this isn’t necessarily a reason to shy away. Being disingenuous about our personal priorities in order to seem more mainstream strikes me as wrong, like a bait-and-switch or the cult-like tactic of getting people in the door and introducing the heavier stuff as they get more emotionally invested. I like the framing of being more humble/agnostic, but maybe we (speaking as individuals) need to be careful that this is genuine epistemological humility and not an act.
100% agree. I think it is almost always better to be honest, even if that makes you look weird. If you are worried about optics, “oh yeah, we say this to get people in but we don’t really believe it” looks pretty bad.
I think that revealed preference can be misleading in this context, for reasons I outline here.
It’s not clear that people’s revealed preferences are what we should be concerned with, rather than, for example, the value people would reflectively endorse assigning to animals in the abstract. People’s revealed preference for continuing to eat meat may be influenced by akrasia or other cognitive distortions which aren’t relevant to assessing how much they actually endorse animals being valued.[1] We may care about the latter, not the former, both when assessing how much we should value animals (i.e. by taking into account folk moral weights) and when assessing how much the public are likely to support/oppose us allocating more aid to animals.
But on the specific question of how the public would react to us allocating more resources to animals: this seems like a directly tractable empirical question. It would be relatively straightforward, through surveys/experiments, to assess whether people would be more or less hostile towards us if we spent a greater share on animals, if we spent much more on the long-run future versus supporting a more diverse portfolio, or more/less on climate change, etc.
[1] Though of course we also need to account for potential biases in the opposite direction as well.