[Epistemic status: unsure how much I believe each individual response; this is more a pushback against the claim that "no well informed person trying to allocate a marginal dollar most ethically would conclude that GiveWell is the best option."]
I think worldview diversification can include worldviews that are more anthropocentric, less scope-sensitive across species, or not purely utilitarian. This would directly change the split with farmed animal welfare.
There's institutional and signalling value in showing that OpenPhil is willing to stand behind long-term commitments. In the worst instances this is PR, but in the best instances it's a credible signal to many cause areas that OpenPhil is an actor in the non-profit space that will not change tack just because of philosophical shifts in worldview (which seem hard to predict from the outside). For instance, what if Korsgaard or Tarsney[1] just annihilates utilitarianism with a treatise? I don't think NGOs should have to track GPI's outputs to know whether they'll be funded next year.
I think there's something to be said for valuing "empirical evidence" over "philosophical evidence", even when the crux for animal welfare is philosophical. Alexander Berger makes the argument here (I'm too lazy to fully type it out).
A moral parliaments view under uncertainty can make GiveWell look much better. Even a Kantian sympathetic to animals, like Korsgaard, would have reservations about certain welfarist approaches. For instance, I don't know how a Kantian would weigh wild animal welfare or even shrimp welfare (would neuron-count weights capture a being willing something?).
The animal welfare movement landscape is very activist-driven, such that a flood of cash on the order of magnitude of, say, the current $300MM given to GiveWell could lead to an activist form of Dutch disease and be incredibly unhealthy for the movement.
OpenPhil could just have an asymmetric preference against downside risk, such that its allocation isn't a pure expected value calculation. I think there are good a priori reasons not to invest in interventions that carry downside risk, and very plausible reasons why animal welfare interventions are more likely to entail such risks: for instance, political risks from advocacy, or diet switches in which consumers substitute eggs for beef, which could increase total animal suffering. I think the largest funder in EA being risk averse is good given contemporary events.
OpenPhil seems really labour-constrained in other cause areas, as shown by the recent GCR hiring round, such that the due diligence and labour needed for non-GiveWell interventions may simply not be available to investigate or execute them.
- ^ I know Tarsney is a utilitarian, but I'm just throwing him out there as a name whose views could change.
https://www.alexirpan.com/2024/08/06/switching-to-ai-safety.html
This reaffirms my belief that it's more important to look at the cruxes of existing ML researchers than at internal EA views on AI safety.