A few quick pushbacks/questions:
I don't think the perceived epistemic strength of the animal welfare folks in EA should have any bearing on this debate unless you think that nearly everyone running prominent organizations like Good Food Institute, Faunalytics, the Humane League, and others is not truth-seeking (i.e., that animal welfare organizations are culturally not truth-seeking and consequently have shoddy interventions and goals).
To what extent do you think EA funding should be allocated based on broader social perception? I think we should near-completely discount broader social perceptions in most cases.
The social perception point, which has been brought up by others, is confusing because animal welfare has broad social support. The public is negatively primed towards veganism but overwhelmingly positively primed towards the general idea of not being unkind to (euphemism) farm animals.
"Going all-in on animal welfare at the expense of global development seems bad for the movement." I don't think that is what's being debated here, though. Could you elaborate on why you think that if an additional $100 million were allocated to Animal Welfare, it would come at the expense of Global Health & Development (GHD)? Isn't $100 million a mere fraction of the yearly GHD budget?
Yup, agreed that the arguments for animal welfare should be judged by their best proponents, and that the top EA animal-welfare organizations probably have much better views than the median random person I've talked to about this stuff. However:
I don't have a great sense of the space, though (for better or worse, I most enjoy learning about weird stuff like stable totalitarianism, charter cities, prediction markets, etc., which doesn't overlap much with animal welfare), so to some extent I am forced to just go off the vibes of what I've run into personally.
In my complaint about truth-seekingness, I was kinda confusedly mashing together two distinct complaints: one is "animal-welfare EA sometimes seems too 'activist' in a non-truth-seeking way", and another is more like "I disagree with these folks about philosophical questions". That sounds really dumb, since those are two very different complaints, but from the outside they can kinda shade into each other... who's tossing around wacky (IMO) welfare-range numbers because they just want an argument-as-soldier to use in favor of veganism, versus who's doing it because they disagree with me about something akin to "experience size", or the importance of sapience, or how good an approximation it is to linearly "add up" positive experiences when the experiences are near-identical[1]? Among those who disagree with me about those philosophical questions, who is really being a True Philosopher and following their reason wherever it leads (and just ended up in a different place than me), versus whose philosophical reasoning is a little biased by their activist commitments? (Of course, one could also accuse me of being subconsciously biased in the opposite direction! Philosophy is hard...)
All that is to say: I would probably consider the top EA animal-welfare orgs to be pretty truth-seeking (although it's hard for me to tell for sure from the outside), but I would probably still have important philosophical disagreements with them.
Maybe I am making a slightly different point than most commenters. I wasn't primarily thinking "man, this animal-welfare stuff is gonna tank EA's reputation", but rather "hey, an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility; it would be a shame to lose that if we converted all the global-health money to animal welfare, or even if the EA movement just became known primarily for 'weird' causes like AI safety and chicken wellbeing."
I get that the question is only asking about $100m, which seems like it wouldn't shift the overall balance much. But see section 3 below.
To directly answer your question about social perception: I wish we could completely discount broader social perception when allocating funding (and indeed, I'm glad that the EA movement can pull off as much disregarding-of-broader-social-perception as it already manages to do!), but I think in practice this is an important constraint that we should take seriously. E.g., I personally think that funding research into human intelligence augmentation (via iterated embryo selection or germline engineering) should perhaps be a very high-priority cause area... if it weren't for the pesky problem that it's massively taboo and would risk doing lots of damage to the rest of the EA movement. I also feel like there are a lot of explicitly political topics that might otherwise be worth some EA funding (for example, advocating Georgist land value taxes), but which would pose a similar risk of politicizing the movement.
I'm not sure whether the public would look positively or negatively on the EA farmed-animal-welfare movement. As you said, veganism seems to be perceived negatively and treating animals well seems to be perceived positively. Some political campaigns (e.g., for cage-free ballot propositions), admittedly designed to optimize positive perception, have passed with big margins. (But other campaigns, like those for improving the lives of broiler chickens, have been less successful?) My impression is that the public would be pretty hostile to anything in the wild-animal-welfare space (which is a shame because I, a lover of weird niche EA stuff, am a big fan of wild animal welfare). Alternative proteins have become politicized enough that Florida was trying to ban cultured meat. It seems like a mixed bag overall; roughly neutral or maybe slightly negative, but definitely not like intelligence augmentation, which would get guaranteed hugely negative perception. But if you're trading off against global health, then you're losing something strongly positive.
"Could you elaborate on why you think that if an additional $100 million were allocated to Animal Welfare, it would come at the expense of Global Health & Development (GHD)?" Well, the question was about shifting $100m from GHD to animal welfare, so it does quite literally come at the expense (namely, a $100m expense) of GHD! As for whether this is a big shift or a tiny drop in the bucket, that depends on a couple of things:
- Does this hypothetical $100m get spent all at once, and then we hold another vote next year? Or do we spend something like $5m per year over the next 20 years?
- Is this the one-and-only final vote on redistributing the EA portfolio? Or is there an emerging "pro-animal-welfare, anti-GHD" coalition who will return for next year's question, "Should we shift $500m from GHD to animal welfare?", and the question the year after that...
I would probably endorse a moderate shift of funding, but not an extreme one that left GHD hollowed out. Based on this chart from 2020 (idk what the situation looks like now in 2024), taking $100m per year from GHD would probably be pretty devastating to GHD, and AW might not even have the capacity to absorb the flood of money. But moving $10m each year over 10 years would be a big boost to AW without changing the overall portfolio hugely, so I'd be more amenable to that.
(i.e., are two identical computer simulations of a suffering emulated mind any worse than one simulation? What about a single simulation on a computer with double-thick wires? What about a simulation identical in every respect except one? I haven't thought super hard about this, but I feel like these questions might have important real-world consequences for simple creatures like blackflies or shrimp, whose experiences might not add linearly across billions/trillions of creatures, because at some point the experiences become pretty similar to each other and you'd be "double-counting".)
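For what it's worth, the "might not add linearly" worry in this footnote can be made concrete with a toy model. This is entirely my own illustrative sketch, not something the comment proposes: suppose the aggregate weight of n near-identical experiences scales as n to some power alpha, where alpha = 1 is the standard linear sum and alpha < 1 discounts duplicate-like experiences.

```python
# Toy model (my own construction, purely illustrative) of sublinear
# aggregation of near-identical experiences. The function name, the
# per-capita weight, and the alpha values are all made-up assumptions.

def aggregate_welfare(n_creatures, per_capita_weight, alpha=1.0):
    """Total moral weight of n near-identical experiences.

    alpha = 1.0 recovers the standard linear sum; alpha < 1 discounts
    duplicate-like experiences (the "two identical simulations" intuition).
    """
    return per_capita_weight * n_creatures ** alpha

# A trillion simple creatures, each with a tiny assumed weight of 1e-6:
linear = aggregate_welfare(1e12, 1e-6, alpha=1.0)     # 1e12 * 1e-6 = 1e6
sublinear = aggregate_welfare(1e12, 1e-6, alpha=0.5)  # sqrt(1e12) * 1e-6 = 1.0

print(linear, sublinear)
```

Nothing here argues for any particular alpha; the point is just that the choice of aggregation rule swings the math on huge populations of simple creatures by many orders of magnitude, which is why the philosophical disagreement matters.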
an important side effect of global-health funding is that it buys us a lot of goodwill and mainstream legibility
This seems like a pretty natural thing to believe, but I'm not sure I hear coverage of EA talk about the global health work a lot. Are you sure it happens?
(One interesting aspect of this is that I get the impression EA GH work is often not explicitly tied to EA, or is about supporting existing organisations that aren't themselves explicitly EA. The charities incubated by Charity Entrepreneurship are perhaps an exception, but I'm not sure how celebrated they are, though I'm sure they deserve it.)