And that goes in both directions: some find it intuitively unreasonable to think that humans can realise far more welfare than some other animals. Additionally, the undiluted experience model seems pretty intuitive to many.
I would hope that in a community committed to impartiality, one need not make the case for why it’s worth caring about the welfare of beings that happen not to be members of our species, so it’s totally fine not to include that in your post.
I would hope that in a community committed to impartiality, one need not make the case for why it’s worth caring about the welfare of beings that happen not to be members of our species
I think EA’s cause prioritization would look very different if it genuinely were a “community committed to impartiality” regarding species. Under impartiality, both of these interventions are on the order of 1000x as cost-effective as GiveWell top charities.[1] (One could avoid this conclusion by believing that pleasure and pain account for only on the order of 0.1% of welfare, but this is a deeply unusual view and is empirically dubious.[2]) Open Phil (OP) has recognized this since 2016.[3]
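To make the arithmetic behind that parenthetical explicit, here is a rough back-of-the-envelope sketch (my own simplification, not OP’s or Rethink Priorities’ model): it assumes the ~1000x figure is driven by hedonic welfare, so that discounting the hedonic share of welfare scales the multiplier roughly linearly.

```python
# Back-of-the-envelope: how small would the hedonic share of welfare have to be
# for a ~1000x animal-welfare cost-effectiveness multiplier to wash out?
# Simplifying assumption: the multiplier scales linearly with the share of
# welfare accounted for by pleasure/pain.

hedonic_multiplier = 1000  # ~1000x vs. GiveWell top charities, per footnote [1]

for hedonic_share in (1.0, 0.1, 0.01, 0.001):
    adjusted = hedonic_multiplier * hedonic_share
    print(f"hedonic share = {hedonic_share:>6.1%} -> adjusted multiplier ~ {adjusted:,.0f}x")

# Only at a ~0.1% hedonic share does the adjusted multiplier fall to ~1x,
# i.e. parity with GiveWell top charities.
```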
However, to this day, OP has allocated only 17% of its annual neartermist funding to animal welfare.[4] If OP really believes animal welfare is ~1000x as cost-effective as GiveWell top charities, it’s difficult to see how this allocation could be morally justified. Yes, many caveats could be made:
OP could find moral weight estimation methodologically dubious. (This would be strange given that OP funded RP’s moral weights project.[5])
OP could oppose outsize allocations to interventions which depend upon controversial views, such as being impartial about species membership. (This would be strange given that in 2017, 2019, and 2021, OP allocated a majority of longtermist funding to AI x-risk reduction, even though the view that AI is an x-risk is similarly controversial.)
As remarked above, OP could hold a deeply unusual view where almost none of welfare is accounted for by pleasure/pain, which would be quite inconsistent with the existing evidence.
OP could believe animal welfare has faster diminishing marginal returns than global health. This is probably true, but if OP believes animal welfare is ~1000x as cost-effective, it seems OP would be trying to grow the capacity of animal welfare charities far more aggressively than it currently is (see the rough sketch below).
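Here is a toy model of that last caveat (all numbers are my own illustrative assumptions, not OP’s or RP’s estimates): animal welfare starts out ~1000x as cost-effective, but its returns diminish much faster than global health’s.

```python
# Toy model: even with much faster diminishing returns for animal welfare,
# a ~1000x head start still pushes the impact-maximizing allocation far above 17%.
# All parameters are illustrative assumptions, not anyone's published estimates.

def total_impact(share_animal: float,
                 animal_mult: float = 1000,   # ~1000x baseline cost-effectiveness (assumed)
                 animal_exp: float = 0.5,     # animal welfare returns diminish quickly (assumed)
                 gh_exp: float = 0.9) -> float:  # global health returns diminish slowly (assumed)
    """Isoelastic returns: impact = multiplier * (spending share) ** exponent."""
    animal = animal_mult * share_animal ** animal_exp
    global_health = (1 - share_animal) ** gh_exp
    return animal + global_health

for share in (0.17, 0.50, 0.95):
    print(f"animal welfare share = {share:.0%}: total impact ~ {total_impact(share):,.0f}")
# 17% -> ~413, 50% -> ~708, 95% -> ~975 (arbitrary units)
```

Under these made-up curves, moving from a 17% to a 95% animal welfare share more than doubles total impact; the qualitative point holds for a wide range of exponents so long as the ~1000x head start does.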
Why doesn’t OP allocate a majority of neartermist funding to animal welfare? I don’t know. My guess is that key decisionmakers aren’t “committed to impartiality” regarding species. Holden Karnofsky has said as much: “My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern.”[6]
(Meme by me)
So, what to do? For one, it would be extremely helpful for OP to clarify their views on the questions relevant to animal welfare (how much of welfare is explained by hedonic states, whether one should be impartial regarding species), the cruxes that would change their minds regarding cause prioritization, and the counterarguments explaining why they haven’t changed their minds. (I’ll be publishing a post within the next few months with the above arguments.)
I wish you were right that EA is a “community committed to impartiality” regarding species. However, empirically, it seems that’s not the case.
Vasco Grilo (2023). “Prioritising animal welfare over global health and development?” https://forum.effectivealtruism.org/posts/vBcT7i7AkNJ6u9BcQ/prioritising-animal-welfare-over-global-health-and
Severe pain, such as that of cluster headaches, is associated with greatly increased suicidality. Lee et al. (2019). “Increased suicidality in patients with cluster headache”. https://pubmed.ncbi.nlm.nih.gov/31018651/
“If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x).” Holden Karnofsky (2017). “Worldview Diversification”. https://www.openphilanthropy.org/research/worldview-diversification/
Ariel Simnegar (2023). “Open Phil Grants Analysis”. https://github.com/ariel-simnegar/open-phil-grants-analysis/blob/main/open_phil_grants_analysis.ipynb
Open Philanthropy. “Rethink Priorities — Moral Patienthood and Moral Weight Research”. https://www.openphilanthropy.org/grants/rethink-priorities-moral-patienthood-and-moral-weight-research/
Holden Karnofsky (2017). “Radical Empathy”. https://www.openphilanthropy.org/research/radical-empathy/
This is interesting! I had the same question; their position doesn’t seem that coherent.
Maybe you should write a post asking this question? It might get noticed by someone at Open Phil and get answered.
Agreed. I’m planning on writing up a post about it, but I’m very busy and I’d like the post to be extremely rigorous and address all possible objections, so it probably won’t be published for a month or two.