Yup, agreed that the arguments for animal welfare should be judged by their best proponents, and that probably the top EA animal-welfare organizations have much better views than the median random person I’ve talked to about this stuff. However:
I don’t have a great sense of the space, though (for better or worse, I most enjoy learning about weird stuff like stable totalitarianism, charter cities, prediction markets, etc., which doesn’t overlap much with animal welfare), so to some extent I am forced to just go off the vibes of what I’ve run into personally.
In my complaint about truthseekingness, I was kinda confusedly mashing together two distinct complaints—one is “animal-welfare EA sometimes seems too ‘activist’ in a non-truthseeking way”, and another is more like “I disagree with these folks about philosophical questions”. That sounds really dumb since those are two very different complaints, but from the outside they can kinda shade into each other. Who’s tossing around wacky (IMO) welfare-range numbers because they just want an argument-as-soldier to use in favor of veganism, versus who’s doing it because they disagree with me about something akin to “experience size”, or the importance of sapience, or how good an approximation it is to linearly “add up” positive experiences when the experiences are near-identical?[1] Among those who disagree with me about those philosophical questions, who is really being a True Philosopher and following their reason wherever it leads (and just ended up in a different place than me), versus whose philosophical reasoning is a little biased by their activist commitments? (Of course one could also accuse me of being subconsciously biased in the opposite direction! Philosophy is hard...)
All that is to say: I would probably consider the top EA animal-welfare orgs to be pretty truthseeking (although it’s hard for me to tell for sure from the outside), but I would probably still have important philosophical disagreements with them.
Maybe I am making a slightly different point than most commenters—I wasn’t primarily thinking “man, this animal-welfare stuff is gonna tank EA’s reputation”, but rather “hey, an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility; it would be a shame to lose that if we converted all the global-health money to animal-welfare, or even if the EA movement just became primarily known for nothing but ‘weird’ causes like AI safety and chicken wellbeing.”
I get that the question is only asking about $100m, which seems like it wouldn’t shift the overall balance much. But see section 3 below.
To directly answer your question about social perception: I wish we could completely discount broader social perception when allocating funding (and indeed, I’m glad that the EA movement can pull off as much disregarding-of-broader-social-perception as it already manages to do!), but I think in practice this is an important constraint that we should take seriously. Eg, personally I think that funding research into human intelligence augmentation (via iterated embryo selection or germline engineering) should perhaps be a very high-priority cause area… if it weren’t for the pesky problem that it’s massively taboo and would risk doing lots of damage to the rest of the EA movement. I also feel like there are a lot of explicitly political topics that might otherwise be worth some EA funding (for example, advocating Georgist land value taxes), but which would pose a similar risk of politicizing the movement.
I’m not sure whether the public would look positively or negatively on the EA farmed-animal-welfare movement. As you said, veganism seems to be perceived negatively and treating animals well seems to be perceived positively. Some political campaigns (eg for cage-free ballot propositions), admittedly designed to optimize positive perception, have passed with big margins. (But other campaigns, like those for improving the lives of broiler chickens, have been less successful?) My impression is that the public would be pretty hostile to anything in the wild-animal-welfare space (which is a shame because I, a lover of weird niche EA stuff, am a big fan of wild animal welfare). Alternative proteins have become politicized enough that Florida was trying to ban cultured meat? It seems like a mixed bag overall; roughly neutral or maybe slightly negative, but definitely not like intelligence augmentation, which is guaranteed-hugely-negative perception. But if you’re trading off against global health, then you’re losing something strongly positive.
“Could you elaborate on why you think if an additional $100 million were allocated to Animal Welfare, it would be at the expense of Global Health & Development (GHD)?”—well, the question was about shifting $100m from GHD to animal welfare, so it does quite literally come at the expense (namely, a $100m expense) of GHD! As for whether this is a big shift or a tiny drop in the bucket, that depends on a couple of things:
- Does this hypothetical $100m get spent all at once, and then we hold another vote next year? Or do we spend something like $5m per year over the next 20 years?
- Is this the one-and-only final vote on redistributing the EA portfolio? Or maybe there is an emerging “pro-animal-welfare, anti-GHD” coalition who will return for next year’s question, “Should we shift $500m from GHD to animal welfare?”, and the question the year after that...

I would probably endorse a moderate shift of funding, but not an extreme one that left GHD hollowed out. Based on this chart from 2020 (idk what the situation looks like now in 2024), taking $100m per year from GHD would probably be pretty devastating to GHD, and AW might not even have the capacity to absorb the flood of money. But moving $10m each year over 10 years would be a big boost to AW without hugely changing the overall portfolio, so I’d be more amenable to that (see the toy calculation below).
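To make the schedule point concrete, here is a toy calculation. The annual budget figures are placeholders I made up purely for illustration—they are not the numbers from the 2020 chart:

```python
# Toy arithmetic for the disbursement schedules discussed above.
# The annual budget figures below are made up for illustration;
# they are NOT the numbers from the 2020 chart referenced in the text.
GHD_ANNUAL = 400  # hypothetical GHD spending, in $m per year
AW_ANNUAL = 50    # hypothetical animal-welfare spending, in $m per year

def shares(ghd: float, aw: float) -> str:
    """Return the GHD/AW split of the combined annual budget."""
    total = ghd + aw
    return f"GHD {ghd / total:.0%}, AW {aw / total:.0%}"

print("baseline:        ", shares(GHD_ANNUAL, AW_ANNUAL))
# Scenario A: the whole $100m moves within a single year.
print("$100m at once:   ", shares(GHD_ANNUAL - 100, AW_ANNUAL + 100))
# Scenario B: $10m moves per year for 10 years (any one year's budget).
print("$10m/yr x 10yrs: ", shares(GHD_ANNUAL - 10, AW_ANNUAL + 10))
```

With these made-up numbers, the all-at-once shift moves the split from roughly 89/11 to 67/33 in that year, while the gradual version only moves it to about 87/13 per year—which is the intuition behind preferring the slower schedule.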
[1] (ie, are two identical computer simulations of a suffering emulated mind any worse than one simulation? what about a single simulation on a computer with double-thick wires? what about a simulation identical in every respect except one? I haven’t thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you’d be “double-counting”.)
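As a rough sketch of how much this could matter at scale: the sublinear rule and its exponent below are arbitrary placeholders of my own, not a proposed theory of welfare aggregation—the point is just that linear and non-linear aggregation diverge enormously once the count of near-identical experiences gets astronomical.

```python
# Toy illustration of the "double-counting" worry from the footnote above.
# Under standard linear aggregation, total welfare scales with n; the
# sublinear rule here (and its exponent) is an arbitrary placeholder,
# not a real proposal. It just shows how much the choice of aggregation
# rule can matter for very numerous simple creatures.

def linear_total(n: int, w: float = 1.0) -> float:
    """Standard additive aggregation: n experiences, each of welfare w."""
    return n * w

def sublinear_total(n: int, w: float = 1.0, alpha: float = 0.5) -> float:
    """Hypothetical discounting of near-identical experiences: w * n^alpha."""
    return w * n ** alpha

for n in (10**3, 10**6, 10**9, 10**12):
    print(f"n={n:.0e}  linear={linear_total(n):.1e}  "
          f"sublinear={sublinear_total(n):.1e}")
```

At a trillion near-identical experiences, the linear total is a trillion welfare-units while this particular sublinear rule gives only a million—a factor-of-a-million gap that would swamp most other uncertainties in a welfare-range estimate.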
> an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility
This seems like a pretty natural thing to believe, but I’m not sure coverage of EA actually talks about the global-health work much. Are you sure it happens?
(One interesting aspect of this is that I get the impression EA global-health work is often not explicitly tied to EA, or is about supporting existing organisations that aren’t themselves explicitly EA. The charities incubated by Charity Entrepreneurship are perhaps an exception, but I’m not sure how celebrated they are, though I’m sure they deserve it.)