The animal welfare side of things feels less truthseeking, more activist, than other parts of EA. Talk of “speciesism” that implies animals’ and humans’ lives are of ~equal value seems far-fetched to me. People frequently do things like taking Rethink’s moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism which I think is useful but not ultimately correct), and just treating the numbers as if they are unvarnished truth.
If I considered only the immediate, direct effects of $100m spent on animal welfare versus global health, I would probably side with animal welfare despite the concerns above. But I’m also worried about the relative lack of ripple / flow-through effects from animal welfare work versus global health interventions—both positive longer-term effects on the future of civilization generally, and more near-term effects on the sustainability of the EA movement and social perceptions of EA. Going all-in on animal welfare at the expense of global development seems bad for the movement.
That’s not what “speciesism” means. Speciesism isn’t the view that an individual human matters more than animals; it’s the view that humans matter more because they are human, and not because of some objectively important capacity. Singer, who popularized the term speciesism (though he didn’t invent it), has never denied that a (typical, non-infant) human should be saved over a single animal.
Good to know! I haven’t actually read “Animal Liberation” or anything like that; I’ve just seen the word a lot and assumed (by the seemingly intentional analogy to racism, sexism, etc) that it meant “thinking humans are superior to animals (which is bad and wrong)”, in the same way that racism is often used to mean “thinking Europeans are superior to other groups (which is bad and wrong)”, and sexism to mean men > women. Thus it always felt to me like a weird, unlikely attempt to shoehorn a niche philosophical position (are nonhuman animals’ lives of equal worth to humans?) into the same kind of socially-enforced consensus whereby things like racism are near-universally condemned.
I guess your definition of speciesism means that it’s fine to think humans matter more than other animals, but only if there’s a reason for it (like that we have special quality X, or we have Y percent greater capacity for something, therefore we’re Y percent more valuable, or because the strong are destined to rule, or whatever). Versus it would be speciesist to say that humans matter more than other animals “because they’re human, and I’m human, and I’m sticking with my tribe”.
Wikipedia’s page on “speciesism” (first result when I googled the word) is kind of confusing and suggests that people use the word in different ways, with some people using it the way I assumed, and others the way you outlined, or perhaps in yet other ways:

The term has several different definitions.[1] Some specifically define speciesism as discrimination or unjustified treatment based on an individual’s species membership,[2][3][4] while others define it as differential treatment without regard to whether the treatment is justified or not.[5][6] Richard D. Ryder, who coined the term, defined it as “a prejudice or attitude of bias in favour of the interests of members of one’s own species and against those of members of other species”.[7] Speciesism results in the belief that humans have the right to use non-human animals in exploitative ways which is pervasive in the modern society.[8][9][10] Studies from 2015 and 2019 suggest that people who support animal exploitation also tend to have intersectional bias that encapsulates and endorses racist, sexist, and other prejudicial views, which furthers the beliefs in human supremacy and group dominance to justify systems of inequality and oppression.
The 2nd result on a google search for the word, this Britannica article, sounds to me like it is supporting “my” definition:

Speciesism, in applied ethics and the philosophy of animal rights, the practice of treating members of one species as morally more important than members of other species; also, the belief that this practice is justified.
That makes it sound like anybody who thinks a human is more morally important than a shrimp is, by definition, speciesist, regardless of their reasons. (Later on the article talks about something called Singer’s “principle of equal consideration of interests”. It’s unclear to me if this principle is supposed to imply humans == shrimps, or if it’s supposed to be saying the IMO much more plausible idea that a given amount of pain-qualia is of equal badness whether it’s in a human or a shrimp. So you could say something like—humans might have much more capacity for pain, making them morally more important overall, but every individual teaspoon of pain is the same badness, regardless of where it is.)
Third google result: this 2019 philosophy paper debating different definitions of the term—I’m not gonna read the whole thing, but its existence certainly suggests that people disagree. Looks like it ends up preferring to use your definition of speciesism, and uses the term “species-egalitarianists” for the hardline humans == shrimp position.
Fourth: Merriam-Webster, which has no time for all this philosophical BS (lol) -- speciesism is simply “prejudice or discrimination based on species”, and that’s that, apparently!
Fifth: this animal-ethics.org website—long page, and maybe it’s written in a sneaky way that actually permits multiple definitions? But at least based on skimming it, it seems to endorse the hardline position that not giving equal consideration to animals is like sexism or racism: “How can we oppose racism and sexism but accept speciesism?”—“A common form of speciesism that often goes unnoticed is the discrimination against very small animals.”—“But if intelligence cannot be a reason to justify treating some humans worse than others, it cannot be a reason to justify treating nonhuman animals worse than humans either.”
Sixth google result is PETA, who says “Speciesism is the human-held belief that all other animal species are inferior… It’s a bias rooted in denying others their own agency, interests, and self-worth, often for personal gain.” I actually expected PETA to be the most zealously hard-line here, but this page definitely seems to be written in a sneaky way that makes it sound like they are endorsing the humans == shrimp position, while actually being compatible with your more philosophically well-grounded definition. Eg, the website quickly backs off from the topic of humans-vs-animals moral worth, moving on to make IMO much more sympathetic points, like that it’s ridiculous to think farmed animals like pigs are less deserving of moral concern than pet animals like dogs. And they talk about how animals aren’t ours to simply do absolutely whatever we please with zero moral consideration of their interests (which is compatible with thinking that animals deserve some-but-not-equal consideration).
Anyways. Overall it seems like philosophers and other careful thinkers (such as the editors of the EA Forum wiki) would like a minimal definition, whereas perhaps the more common real-world usage is the ill-considered maximal definition that I initially assumed it had. It’s unclear to me what the intention behind the original meaning of the term was—were early users of the word speciesism trying to imply that humans == shrimp and you’re a bad person if you disagree? Or were they making a more careful philosophical distinction and then, presumably for activist purposes, deliberately choosing a word that was destined to lead to this confusion?
No offense meant to you, or to any of these (non-EA) animal activist sources that I just googled, but something about this messy situation is not giving me the best “truthseeking” vibes...
I’ve definitely heard speciesism used both ways, but I think it’s usually used without much reference to an exact view, more as a general “vibe” (which IMO makes it a not particularly useful word). That said, I think people on the EA side of the animal advocacy world tend to lean more toward the “it’s discriminatory to devalue animals purely because they aren’t a member of the human species” definition. I’d guess that most times it’s used, especially outside of EA, it’s something more like the “it’s discriminatory to not view all animals including humans as being of equal value” view, but with a lot of fuzziness around it. So I’d guess it is somewhat context dependent on the speaker?
Ok, maybe I was too fast to take the definition I remember from undergrad 20 years ago as the only one in use!
I share your impression that it’s often used differently in broader society and mainstream animal rights groups than it is by technical philosophers and in the EA space. I think the average person would still hear the word as akin to racism or sexism or some other -ism. By criticizing those isms, we DO in fact mean to imply that individual human beings are of equal moral value regardless of their race or sex. And by that standard, I’d be a proud speciesist, because I do think individual beings of some species are innately more valuable than others.
We can split hairs about why that is—capacity for love or pain or knowledge or neuron count or whatever else we find valuable about a life—but it will still require you to come out with a multiplier for how much more valuable a healthy “normal” human is relative to a healthy normal member of other species, which would be absolutely anathema in the racial or sexual context.
A few quick pushbacks/questions:
I don’t think the perceived epistemic strength of the animal welfare folks in EA should have any bearing on this debate unless you think that nearly everyone running prominent organizations like Good Food Institute, Faunalytics, the Humane League, and others is not truth-seeking (i.e., animal welfare organizations are culturally not truth-seeking and consequently have shoddy interventions and goals).
To what extent do you think EA funding should be allocated based on broader social perception? I think we should near-completely discount broader social perceptions in most cases.
The social perception point, which has been brought up by others, is confusing because animal welfare has broad social support. The public is negatively primed towards veganism but overwhelmingly positively so towards the general idea of not being unkind to (euphemism) farm animals.
“Going all-in on animal welfare at the expense of global development seems bad for the movement.” — I don’t think this is being debated here though. Could you elaborate on why you think if an additional $100 million were allocated to Animal Welfare, it would be at the expense of Global Health & Development (GHD)? Isn’t $100 million a mere fraction of the yearly GHD budget?
Yup, agreed that the arguments for animal welfare should be judged by their best proponents, and that probably the top EA animal-welfare organizations have much better views than the median random person I’ve talked to about this stuff. However:
I don’t have a great sense of the space, though (for better or worse, I most enjoy learning about weird stuff like stable totalitarianism, charter cities, prediction markets, etc, which doesn’t overlap much with animal welfare), so to some extent I am forced to just go off the vibes of what I’ve run into personally.
In my complaint about truthseekingness, I was kinda confusedly mashing together two distinct complaints—one is “animal-welfare EA sometimes seems too ‘activist’ in a non-truthseeking way”, and another is more like “I disagree with these folks about philosophical questions”. That sounds really dumb since those are two very different complaints, but from the outside they can kinda shade into each other… who’s tossing around wacky (IMO) welfare-range numbers because they just want an argument-as-soldier to use in favor of veganism, versus who’s doing it because they disagree with me about something akin to “experience size”, or the importance of sapience, or how good of an approximation it is to linearly “add up” positive experiences when the experiences are near-identical[1]. Among those who disagree with me about those philosophical questions, who is really being a True Philosopher and following their reason wherever it leads (and just ended up in a different place than me), versus whose philosophical reasoning is a little biased by their activist commitments? (Of course one could also accuse me of being subconsciously biased in the opposite direction! Philosophy is hard...)
All that is to say that I would probably consider the top EA animal-welfare orgs to be pretty truthseeking (although it’s hard for me to tell for sure from the outside), but I would probably still have important philosophical disagreements with them.
Maybe I am making a slightly different point from most commenters—I wasn’t primarily thinking “man, this animal-welfare stuff is gonna tank EA’s reputation”, but rather “hey, an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility; it would be a shame to lose that if we converted all the global-health money to animal-welfare, or even if the EA movement just became primarily known for nothing but ‘weird’ causes like AI safety and chicken wellbeing.”
I get that the question is only asking about $100m, which seems like it wouldn’t shift the overall balance much. But see section 3 below.
To directly answer your question about social perception: I wish we could completely discount broader social perception when allocating funding (and indeed, I’m glad that the EA movement can pull off as much disregarding-of-broader-social-perception as it already manages to do!), but I think in practice this is an important constraint that we should take seriously. Eg, personally I think that funding research into human intelligence augmentation (via iterated embryo selection or germline engineering) seems like it perhaps should be a very high-priority cause area… if it weren’t for the pesky problem that it’s massively taboo and would risk doing lots of damage to the rest of the EA movement. I also feel like there are a lot of explicitly political topics that might otherwise be worth some EA funding (for example, advocating Georgist land value taxes), but which would pose a similar risk of politicizing the movement.
I’m not sure whether the public would look positively or negatively on the EA farmed-animal-welfare movement. As you said, veganism seems to be perceived negatively and treating animals well seems to be perceived positively. Some political campaigns (eg for cage-free ballot propositions), admittedly designed to optimize positive perception, have passed with big margins. (But other campaigns, like those for improving the lives of broiler chickens, have been less successful?) My impression is that the public would be pretty hostile to anything in the wild-animal-welfare space (which is a shame because I, a lover of weird niche EA stuff, am a big fan of wild animal welfare). Alternative proteins have become politicized enough that Florida was trying to ban cultured meat? It seems like a mixed bag overall; roughly neutral or maybe slightly negative, but definitely not like intelligence augmentation, which is guaranteed-hugely-negative perception. But if you’re trading off against global health, then you’re losing something strongly positive.
“Could you elaborate on why you think if an additional $100 million were allocated to Animal Welfare, it would be at the expense of Global Health & Development (GHD)?”—well, the question was about shifting $100m from GHD to animal welfare, so it does quite literally come at the expense (namely, a $100m expense) of GHD! As for whether this is a big shift or a tiny drop in the bucket, that depends on a couple of things:
- Does this hypothetical $100m get spent all at once, and then we hold another vote next year? Or do we spend like $5m per year over the next 20 years?
- Is this the one-and-only final vote on redistributing the EA portfolio? Or maybe there is an emerging “pro-animal-welfare, anti-GHD” coalition who will return for next year’s question, “Should we shift $500m from GHD to animal welfare?”, and the question the year after that...
I would probably endorse a moderate shift of funding, but not an extreme one that left GHD hollowed out. Based on this chart from 2020 (idk what the situation looks like now in 2024), taking $100m per year from GHD would probably be pretty devastating to GHD, and AW might not even have the capacity to absorb the flood of money. But moving $10m each year over 10 years would be a big boost to AW without changing the overall portfolio hugely, so I’d be more amenable to it.
(ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation? what about a single simulation on a computer with double-thick wires? what about a simulation identical in every respect except one? I haven’t thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you’d be “double-counting”.)
an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility

This seems like a pretty natural thing to believe, but I’m not sure I hear coverage of EA talk about the global health work a lot. Are you sure it happens?
(One interesting aspect of this is that I get the impression EA GH work is often not explicitly tied to EA, or is about supporting existing organisations that aren’t themselves explicitly EA. The charities incubated by Charity Entrepreneurship are perhaps an exception, but I’m not sure how celebrated they are, though I’m sure they deserve it.)
I think philosophically it could be interesting to ask whether, if we were at 90% of neartermist EA funding going to animals, we should move it all the way to 100%; but since this is very far from reality, I think practically we don’t need to think/worry much about ‘going all-in on animal welfare’.
I think the Rethink people were suitably circumspect about their conclusions and the assumptions they made, but yes probably others have taken some claims out of context.
Yeah, I wish they had clarified how many years the $100m is spread out over. See my point 3 in reply to akash above.
Fwiw I think total hedonic utilitarianism is ‘ultimately correct’ (inasmuch as that statement means anything), but nonetheless strongly agree with everything else you say.
Excerpting from and expanding on a bit of point 1 of my reply to akash above. Here are four philosophical areas where I feel like total hedonic utilitarianism (as reflected in common animal-welfare calculations) might be missing the mark:
1. Something akin to “experience size” (very well-described by that recent blog post!)
2. The importance of sapience—if an experience of suffering is happening “all on its own”, floating adrift in the universe with nobody to think “I am suffering”, “I hope this will end soon”, etc, does this make the suffering experience worse-than, or not-as-bad-as, human suffering where the experience is tied together with a rich tapestry of other conscious experiences? Maybe it’s incoherent to ask questions like this, or I am thinking about this in totally the wrong way? But it seems like an important question to me. The similarities between layers of “neurons” in image-classifying AIs and the actual layouts of literal neurons in the human retina + visual cortex (both humans and AIs have a layer for initial inputs, then for edge-detection, then for corners and curves, then simple shapes and textures, then eventually for higher concepts and whole objects) make me think that possibly image-classifiers are having a genuine “experience of vision” (ie qualia), but an experience that is disconnected (of course) from any sense of self or sense of wellbeing-vs-suffering or wider understanding of its situation. I think many animals might have experiences that are intermediate in various ways between humans and this hypothetical isolated-experience-of-vision that might be happening in an AI image classifier.
3. How good of an approximation is it to linearly “add up” positive experiences when the experiences are near-identical? ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation? what about a single simulation on a computer with double-thick wires? what about a simulation identical in every respect except one? I haven’t thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you’d be “double-counting”. (One rough way of formalizing this worry is sketched just after this list.)
4. Something about “higher pleasures”, or Nietzscheanism, or the complexity of value: the idea that maybe there’s more to life than just adding up positive and negative valence?? Personally, if I got to decide right now what happens to the future of human civilization, I would definitely want to try and end suffering (insofar as this is feasible), but I wouldn’t want to try and max out happiness, and certainly not via any kind of rats-on-heroin style approach. I would rather take the opposite tack, and construct a smaller number of god-like superhuman minds, who might not even be very “happy” in any of the usual senses (ie, perhaps they are meditating on the nature of existence with great equanimity), but who in some sense are able to like… maximize the potential of the universe to know itself and explore the possibilities of consciousness. Or something...
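To make point 3 slightly more concrete, here is one purely hypothetical way to formalize it (my own illustrative sketch, not anything taken from Rethink or the welfare-range literature): linear aggregation sums welfare across individuals as usual, while a “similarity-discounted” alternative lets near-identical experiences count for less than their full sum.

```latex
% Linear (total hedonic) aggregation over N near-identical experiences, each of welfare w:
\[ W_{\mathrm{linear}} = \sum_{i=1}^{N} w_i = N\,w \]

% A hypothetical similarity-discounted alternative (\alpha is a made-up parameter;
% \alpha = 1 recovers the linear case, \alpha < 1 "double-counts" duplicates less):
\[ W_{\mathrm{discounted}} = N^{\alpha}\,w, \qquad 0 < \alpha \le 1 \]
```

For example, with N = 10^12 blackfly-like experiences and α = 0.9, the discounted total comes out roughly 16 times smaller than the linear one; the only point is that this kind of modeling choice can swing aggregate welfare numbers a lot for very numerous, very similar minds.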
I don’t have time to reply to all of these, but I think it’s worth saying re point 1, that inasmuch as hedonism ‘struggles’ with this, it’s because it’s basically the only axiology to commit to addressing it at all. I don’t consider that a weakness, since there clearly is some level of comparability between my stubbing my toe and my watching a firework.
Preference utilitarianism sort of ducks around this by equivocating between whether determining a preference requires understanding the happiness its satisfaction brings (in which case it has the same problem) or whether preferences rely on some even more mysterious forces with even weirder implications. I wrote much more on this equivocation here.
Also re size specifically, he literally says size ‘is closely analogous to the sense in which (if welfare is aggregable at all) one population can have more welfare than another due to its size’. It’s common to joke about ‘hedons’, but I see no reason one should both be materialist and not expect to find some minimum physical unit of happiness in conscious entities. Then the more hedons an entity has, the sizier its happiness would be. It’s also possible that we find multiple indivisible hedon-like objects, in which case the philosophy gets harder again (and at the very least, it’s going to be tough to have an objective weighting between hedons and antihedons, since there’s no a priori reason to assume it should be 1-to-1). But I don’t think hedonists should have to assume the latter, or prove that it’s not true.
People frequently do things like taking Rethink’s moral weights project (which kinda skips over a lot of hard philosophical problems about measurement and what we can learn from animal behavior, and goes all-in on a simple perspective of total hedonic utilitarianism which I think is useful but not ultimately correct), and just treating the numbers as if they are unvarnished truth

Can you point to specific cases of that happening? I haven’t seen this happen before. My sense is that most people who quote Rethink’s moral weights project are familiar with the limitations.
The animal welfare side of things feels less truthseeking, more activist, than other parts of EA

Can you say more on this?
Rethink’s weights unhedged in the wild: the most recent time I remember seeing this was when somebody pointed me towards this website: https://foodimpacts.org/, which uses Rethink’s numbers to set the moral importance of different animals. They only link to where they got the weights in a tiny footnote on a secondary page about methods, and they don’t mention any other ways that people try to calculate reference weights, or anything about what it means to “assume hedonism” or etc. Instead, we’re told these weights are authoritative and scientific because they’re “based on the most elaborate research to date”.
IMO it would be cool to be able to swap between Rethink, versus squared neuron count or something, versus everything-is-100%. As is, they do let you edit the numbers yourself, and also give a checkbox that makes everything equal 100%. Which (perhaps unintentionally) is a pretty extreme framing of the discussion!! “Are shrimp 3% as important as a human life (30 shrimp = 1 person)? Or 100%? Or maybe you want to edit the numbers to something in-between?”
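As a concrete sketch of that preset-swapping idea (entirely illustrative: the preset names and numbers below are placeholders I made up for this comment, with the shrimp figure taken from the ~3% mentioned above; none of them are authoritative values from Rethink or anyone else):

```python
# Hypothetical sketch of a "swap between moral-weight presets" feature.
# All numbers here are illustrative placeholders, not authoritative estimates.

PRESETS = {
    "rethink_style": {"human": 1.0, "chicken": 0.33, "shrimp": 0.03},   # shrimp at ~3%, as discussed above
    "everything_equal": {"human": 1.0, "chicken": 1.0, "shrimp": 1.0},  # the "everything is 100%" checkbox
    "neuron_count_squared": {"human": 1.0, "chicken": 5e-6, "shrimp": 1e-12},  # rough made-up stand-ins
}

def human_equivalents(species: str, count: float, preset: str) -> float:
    """Convert a count of animals into 'human-equivalents' under a given moral-weight preset."""
    return count * PRESETS[preset][species]

if __name__ == "__main__":
    for name in PRESETS:
        print(f"{name}: 30 shrimp ≈ {human_equivalents('shrimp', 30, name):.2g} human-equivalents")
```

Under the first preset, 30 shrimp come out to roughly one human-equivalent (the “30 shrimp = 1 person” framing above); under the equal-weights preset they count as 30. Surfacing the preset choice up front like this, rather than burying it in a methods footnote, is the kind of hedging I was wishing for.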
I think the foodimpacts calculator is a cool idea, and I don’t begrudge anyone an attempt to make estimates using a bunch of made-up numbers (see the ACX post on this subject) -- indeed, I wish the calculator went more out on a limb by trying to include the human health impacts of various foods (despite the difficulties / uncertainties they mention on the “methods” page). But this is the kind of thing that I was talking about re: the weights.
Animal welfare feeling more activist & less truth-seeking:
This post is specifically about vegan EA activists, and makes much stronger accusations of non-truthseeking-ness than I am making here against the broader animal welfare movement in general: https://forum.effectivealtruism.org/posts/qF4yhMMuavCFrLqfz/ea-vegan-advocacy-is-not-truthseeking-and-it-s-everyone-s
But I think that post is probably accurate in the specific claims that it makes, and indeed vegan EA activism is part of overall animal welfare EA activism, so perhaps I could rest my case there.
I also think that the broader animal welfare space has a much milder version of a similar ailment. I am pretty “rationalist” and think that rationalist virtues (as expounded in Yudkowsky’s Sequences, or Slate Star Codex blog posts, or Secular Solstice celebrations, or just sites like OurWorldInData) are important. I think that global health places like GiveWell do a pretty great job embodying these virtues, that longtermist stuff does a medium-good job (they’re trying! but it’s harder since the whole space is just more speculative), and animal welfare does a worse job (but still better than almost all mainstream institutions, eg way better than either US political party). Mostly I think this is just because a lot of people get into animal EA without ever first reading rationalist blogs (which is fine, not everybody has to be just like me); instead they sometimes find EA via Peter Singer’s more activist-y “Animal Liberation”, or via the yet-more-activist mainstream vegan movement or climate movements. And in stuff like climate protest movements (Greta Thunberg, Just Stop Oil, Sunrise, etc), being maximally truthseeking and evenhanded just isn’t a top priority like it is in EA! Of course the people that come to EA from those movements are often coming specifically because they recognize that, and they prefer EA’s more rigorous / rationalist vibe. (Kinda like how when Californians move to Texas, they actually make Texas more Republican and not more Democratic, because California is very blue but Californians-who-choose-to-move-to-Texas are red.) But I still think that (unlike the CA/TX example?) the long-time overlap with those other activist movements makes animal welfare less rationalist and thereby less truthseeking than I like.
(Just to further caveat… Not scoring 100⁄100 on truthseekingness isn’t the end of the world. I love the idea of Charter Cities and support that movement, despite the fact that some charter city advocates are pretty hype-y and use exaggerated rhetoric, and a few, like Balajis, regularly misrepresent things and feel like outright hustlers at times. As I said, I’d support animal welfare over GHD despite truthseeky concerns if that was my only beef; my bigger worries are some philosophical disagreements and concern about the relative lack of long-term / ripple effects.)
<<My sense is that most people who quote Rethink’s moral weights project are familiar with the limitations.>>
Do you think that the people doing the quoting also fairly put the average Forum reader on notice of the limitations? That’s a different thing than being aware of the limitations themselves. I’d have to go back and do a bunch of reading of past posts to have a firm sense on this.
Talk of “speciesism” that implies animals’ and humans’ lives are of ~equal value seems far-fetched to me.

I have yet to hear someone defend that. So far, every time I have heard this idea, it was from a speciesist person who failed to understand the implication of rejecting speciesism. Basically just as a strawman argument.
David Mathers makes a similar comment, and I respond, here. Seems like there are multiple definitions of the word, and EA folks are using the narrower definition that’s preferred by smart philosophers. Whereas I had just picked up the word based on vibes, and assumed the definition by analogy to racism and sexism, which does indeed seem to be a common real-world usage of the term (eg, supported by top google results in dictionaries, Wikipedia, etc). It’s unclear to me whether the original intended meaning of the word was closer to what modern smart philosophers prefer (and everybody else has been misinterpreting it since then), or closer to the definition preferred by activists and dictionaries (and it’s since been somewhat “sanewashed” by philosophers), or if (as I suspect) it was mushy and unclear from the very start—invented by savvy people who maybe deliberately intended to link the two possible interpretations of the word.