Currently working for Mieux Donner. I do many things, but I mostly write content.
Background in cognitive science. I run a workshop aimed at teaching methods for managing strong disagreements (open to non-EA people as well). Also community building.
Interested in cyborgism and AI safety via debate.
https://typhoon-salesman-018.notion.site/Date-me-doc-be69be79fb2c42ed8cd4d939b78a6869?pvs=4
[This does not represent the opinion of my employer]
I currently mostly write content for an Effective Giving Initiative, and I think it would be somewhat misleading to say that we recommend animal charities that defend animal rights: people would misconstrue what we're talking about. Avoided suffering is what we refer to when explaining which charities "made it" to the home page; it's part of the methodology, and my estimates ultimately rest on it. It's also the methodology of the evaluators who do all the hard work.
My guess would be that EA consists of a vast majority of consequentialists, whose success criterion is wellbeing, and whose methodology is feasible precisely because it is welfare-focused (e.g. animal-adjusted QALYs per dollar spent). This probably became entrenched early, and people plausibly haven't questioned it much since. EA-aligned rights-focused interventions exist, but they're ultimately measured by their gains in terms of welfare.
For my part, I think it's already hard enough to select cost-effective charities within a consequentialist framework (and sell them to people!), and "rights" introduce many additional distinctions (e.g. rights as means vs. rights as ends) that make the concept hard to operationalize. I can write an article about why we recommend animal welfare charity X in terms of avoided counterfactual suffering, but I'd be clueless if I had to recommend it in terms of avoided rights infringements, because that's harder to measure, and I'm not even sure what I'd be talking about.
I'd be happy to see people holding other positions share their views; this is a strictly personal take.