Given the extensive and ongoing concerns about ACE research that have been raised by Harrison Nathan and myself, I am very surprised to see ACE researchers being selected to the EA Animal Fund. This suggests a lack of concern for accountability in the procedure for selecting people to the funds. What is the explanation for this?
I’m also curious for more info about how teams were selected.
Thanks for the feedback and questions, Halstead. I made the decision to include two ACE researchers—ACE decided which two staff to include, while I personally picked Natalie. I chose to include ACE because (a) while I’ve had concerns re their past research, I think their work at identifying giving opportunities has been very good, (b) several EA donors told me it would increase their faith in the fund—and its value to them—to have ACE’s expertise and viewpoint represented, and (c) I’ve been personally impressed by all ACE researchers I’ve met, especially re their intelligence, open-mindedness, and EA values alignment. I thought some of your (Halstead’s) critiques of ACE were valid, but I don’t view them as especially relevant to Toni and Jamie’s ability to make outstanding giving recommendations via the fund.
I’m traveling in Asia so will be slow replying, but will try to ultimately reply to all messages here re the animal welfare fund (if only once I’m back in the US next week). Thanks for engaging with this!
Hi Lewis, thanks for this.
Is your view that they might happen to arrive at decent recommendations, or that the research method they use to arrive at those recommendations is good? I think the first is perhaps true but definitely not the second, and this should be sufficient disqualification. I’m loath to have to go over this again, but unfortunately it is necessary given this decision.
ACE have been around for six years and as of today have only two intervention reports on their website which they actually stand by—on leafleting and on protests. (The leafleting report shows that leafleting doesn’t work.) They kept several long intervention reports on their website for years until I published my critique that were, by their own admission, poor. They only took their old leafleting report down around a year after Harrison Nathan pointed out how bad it was. They kept their grossly inaccurate ‘impact calculator’ on their website for a year after Nathan published his critique. Until only last year, their cost-effectiveness analyses contained various absurd figures, such as that the digital reach of their charities was in the billions. ACE does not even try to check whether the charities they assess played any role in claimed successful corporate campaigns, and until I published my critique, relied on a paper on the welfare effects of hen systems which Open Phil explained to be mistaken more than a year ago. They decline to favour meat alternative research over charities doing corporate campaigns and the like, on the grounds that counting long-term effects would be “unfair” to the latter.
Which piece of their research do you think is good, aside from the recent reports on leafleting and protests, and do you not think this is an adequate outcome after six years of operation?
Their response to criticism in both my case and in Harrison Nathan’s has been to suggest that critics have ‘misunderstood’ their research and have presented their responses as opportunities for clarification. In fact, what we both pointed out was that there were and are extensive flaws with their research. This is not genuine accountability and makes me seriously concerned that they will not actually improve. Again, I didn’t want to have to express my true views on this, and I thought I wouldn’t have to as they would be left alone with time to improve rather than being given control over millions of dollars by CEA.
For these reasons, I don’t see how my critiques could not be highly relevant to whether they should be involved in the fund. Do you think the consistent publication of low quality research over the course of years is irrelevant to the ability to do research in the future? Or do you think that their research has actually been better than I have suggested? If so, I would be interested to know which parts you think are indeed better.
Thanks for your feedback and questions, and thanks for your patience while I was traveling. On reflection, I think I made a mistake in delegating two seats on the Fund to ACE, rather than picking Toni and Jamie independently. My intention was to increase the Fund’s ideological diversity (ACE researchers have a range of viewpoints, and I wanted to avoid the natural bias to pick those who shared mine). But I now think this benefit is outweighed by the harm that the Fund could be misperceived as reflecting ACE’s organizational views or being based on ACE research.
Otherwise, I worry we’re talking past each other. I agree with several, though not all, of your criticisms of ACE’s historical performance. But I also think ACE’s charity recommendations have created substantial value by driving donations toward higher-impact activities (though I don’t always agree with them). I believe this more because of my independent view of the activities and groups involved than because of ACE’s public writing.
More importantly, I don’t think your criticisms of ACE reflect on Toni and Jamie’s ability to help the Fund accomplish the goals we established: a wider range of views, a deeper resource of time, and more capacity to monitor impact. Both are smart, have different ideas on how to most effectively fund animal groups within an EA framework, and have much more time than I do to identify new giving opportunities. And both have an open-mindedness and commitment to truth that I think is critical for objectively assessing impact.
Thanks again for engaging with this decision, and the Fund, so thoughtfully. We look forward to sharing updates on the Fund’s donations in the coming months. And thank you, as always, to everyone for your support of effective animal advocacy — whether via the Fund or directly.
Thanks for the explanation, Lewis. In order to make the team as robust as possible towards criticism, and as reliable as possible, wouldn’t it be better to have a diverse team, consisting also of critics of ACE? That would send the right message to the donors as well as to anyone taking a closer look at EA organizations. I think it would also benefit ACE since their researchers would have an opportunity to work directly with their critics.
Thanks for your feedback and question, Dunja, and thanks for your patience while I was traveling. I agree that the Fund benefits from having a diverse team, but disagree that criticism of ACE is the right kind of ideological diversity. Both Toni and Jamie bring quite different perspectives on how to most cost-effectively help animals within an EA framework (see, for instance, the charities they’re excited about here). The Fund won’t be funding ACE now that they’re on board, and my guess is that we’ll continue to mostly fund smaller unique opportunities, rather than ACE top or standout charities. So I don’t think people’s views on ACE will be especially relevant to our giving picks here. I see less value in bringing in critics of EA, as many (though not all) of ACE’s critics are, as we’d have trouble reaching a consensus on funding decisions. Instead, I encourage those who are skeptical of EA views or the groups we fund to donate directly to effective animal groups they prefer.
Who would you have recommended for these spots?
My not-that-informed view is something like “there are a bunch of problems with ACE, but I’m not sure there’s anyone better right now”. But if you have people in mind who would have been better for this role that would be really helpful to know!
I would have asked Harrison Nathan, as he has done some high quality research on the area, and really knows what is going on (though maybe he wouldn’t have agreed). Aside from that, I’m not all that familiar with which other researchers there are, but there must be other viable options, and I think having a two person committee of Natalie and Lewis only would have been strongly preferable.
I think ACE researchers might well recommend some good stuff, but I’m troubled by the principle at play here. It suggests that documented past performance is irrelevant to whether the community allows you to make important decisions about millions of dollars. Imagine how this would look to non-EAs: it takes an outsider to review and criticise ACE’s poor previous research, which still contains extensive and serious flaws today. The community then responds by giving ACE researchers control over a multi-million dollar fund. The incentives here are perverse, to say the least.
As I said in my post, I hope their research will improve in the future, but this is a hope, not a guarantee, and it certainly does not justify the trust signalled by putting them in charge of millions of dollars.